According to State, the OAS is the primary inter-American political forum through which the United States engages with other countries in the Western Hemisphere to promote democracy, human rights, security, and development. While PAHO, IICA, and PAIGH are independent organizations, the Charter of the Organization of American States (OAS Charter) directs them to take into account the recommendations of the OAS General Assembly and Councils. PAHO, a specialized international health agency for the Americas, works with member countries throughout the region to improve and protect people’s health and serves as the Regional Office for the Americas of the World Health Organization, the United Nations agency on health. IICA, among other things, supports its member states’ efforts to achieve agricultural development and rural well-being through consultation and by administering agricultural projects under agreements with the OAS and other entities. PAIGH specializes in regional cartography, geography, history, and geophysics and has facilitated the settlement of regional border disputes.

Member states collectively finance these organizations by providing assessed contributions, in accordance with the organizations’ regulations. The member states’ assessed contributions are intended to finance the organizations’ regular budgets, which generally cover the organizations’ day-to-day operating expenses, such as facilities and salaries. Member states of each organization meet to review and approve the organizations’ budgets. The exact dollar amount each member state is responsible for providing corresponds to its assessed percentage of the total approved assessment for any given year. The budgets are based on the total approved quota assessment and other projected income. The OAS’s system for determining member states’ quotas is also used to calculate member states’ assessed contributions at the other three organizations. Thus, any change in the OAS’s assessed quota structure should be reflected at PAHO, IICA, and PAIGH, according to their respective processes for determining assessed contributions.

Member states also finance certain OAS, PAHO, and IICA activities through voluntary contributions. Member states generally target these contributions toward specific programs or issue areas. According to U.S. officials, the United States provides voluntary contributions to the OAS, IICA, and PAHO primarily through grants for specific projects from State, the U.S. Agency for International Development, the U.S. Department of Agriculture, and the Department of Health and Human Services. For example, according to OAS documentation, in 2015, State contributed slightly more than $200,000 to the OAS to fund judicial training to combat money laundering. According to U.S. agency officials, the organizations’ regional knowledge and technical expertise make them effective implementing partners for projects serving U.S. national interests and priorities throughout the hemisphere.
The Reform Act directs the Secretary of State to submit “a multiyear strategy that…identifies a path toward the adoption of necessary reforms that would lead to an assessed fee structure in which no member state would pay more than 50 percent of the OAS’s assessed yearly fees.” According to the Reform Act, it is the sense of Congress that it is in the interest of the United States, OAS member states, and a modernized OAS that the OAS move toward an assessed quota structure that (1) assures the financial sustainability of the organization and (2) establishes, by October 2018, that no member state pays more than 50 percent of the organization’s assessed contributions.

The United States’ assessed contributions constituted over 57 percent of total assessed contributions by member states to the OAS, PAHO, IICA, and PAIGH from 2014 through 2016, as shown in table 1. During this time, the United States’ assessed quota for these organizations did not change, and the total assessed contributions for all member states of these organizations remained about the same; thus, the actual amounts assessed to the United States generally remained the same.

All four organizations apply a similar assessed quota structure that uses the relative size of member states’ economies, among other things, to help determine each member state’s assessed contributions. The OAS determines the assessed quota for each member state based on the United Nations’ methodology, as adapted for the OAS, using criteria that include gross national income, debt burden, and per capita income. In addition, the OAS applies a minimum assessed quota of 0.022 percent and a maximum assessed quota of 59.470 percent. According to State officials, the OAS last made a major revision to its assessed quota structure in 1990 when Canada joined the organization, and the United States’ and other members’ assessed quotas were reduced as a result. OAS officials said that while member states seek, as far as possible, to adjust the assessed quota structure through consensus, the OAS General Assembly may force a vote and adopt changes with a two-thirds majority.

The United States also provided voluntary contributions totaling about $105 million to the OAS, PAHO, and IICA from calendar years 2014 through 2016, as shown in table 2. In 2014, the United States contributed $37 million in voluntary contributions, or approximately 22 percent of the total of $168 million in such contributions from all member states. In 2015, the United States contributed $36 million, or approximately 29 percent of the total of $123 million from all member states. In 2016, the United States contributed $32 million, or approximately 22 percent of the total of $143 million from all member states. According to U.S. officials, levels of U.S. voluntary contributions vary year-to-year due to factors that include the schedule of multiyear grant disbursements, member states’ priorities, and sudden crises. For example, the U.S. Agency for International Development made a $200,000 contribution to PAHO in 2016 for post-earthquake reconstruction and resilience-building in Ecuador.

State is working with other member states toward reforming the OAS’s quota structure for assessed contributions so that no member state provides more than 50 percent of the organization’s annual assessed contributions, but State officials told us that reaching consensus among OAS member states will be difficult.
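To make the quota arithmetic described above concrete, the short sketch below shows how a member state’s dollar assessment follows from its assessed quota percentage and the total approved assessment, and how the voluntary-contribution percentages were derived. It is a simplified illustration only, not the OAS’s quota-setting methodology (which also weighs gross national income, debt burden, and per capita income); the total assessment figure is an assumed number, and only the 0.022 and 59.470 percent bounds and the 2014 voluntary-contribution amounts are taken from this report.

```python
# Illustrative sketch only: simplified quota arithmetic, not the OAS's
# methodology for setting quotas.

MIN_QUOTA_PCT = 0.022   # OAS minimum assessed quota, per this report
MAX_QUOTA_PCT = 59.470  # OAS maximum assessed quota, per this report

def assessed_contribution(quota_pct: float, total_assessment: float) -> float:
    """Return a member state's assessed contribution in dollars."""
    bounded_quota = min(max(quota_pct, MIN_QUOTA_PCT), MAX_QUOTA_PCT)
    return total_assessment * bounded_quota / 100.0

# Hypothetical total approved assessment, used only to show the arithmetic.
total_assessment = 80_000_000  # dollars (assumed figure, not from the report)
print(assessed_contribution(59.470, total_assessment))  # member at the quota ceiling
print(assessed_contribution(0.022, total_assessment))   # member at the quota floor

# The voluntary-contribution shares cited above use the same percentage
# arithmetic; for example, $37 million of $168 million in 2014:
print(round(37 / 168 * 100))  # approximately 22 percent
```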
In response to the Reform Act, State developed a strategy that identified a path toward the adoption of necessary reforms that would lead to an assessed quota structure in which no member state would pay more than 50 percent of the OAS’s annual assessed contributions. State officials told us that they submitted the strategy to Congress in April 2014. The strategy included efforts to engage member state governments to explore options for reforming the quota structure and to examine the extent to which the OAS’s quota-setting methodology reflects member states’ capacity to finance the organization’s activities. According to the OAS and State’s 2015 report to Congress, achieving quota structure reform will require one or more of the other major contributors to accept an increase in their quotas—the percentages of total annual assessed contributions that they agree to provide.

State officials told us that they have been working to implement this strategy. For example, State officials told us they engaged with other OAS member states, including Canada and Mexico, to explore options for quota structure reform. According to State officials, Canada led a modernization committee that produced a strategic plan that included quota structure reform. State officials added that they also reached out to member states from the Caribbean to discuss the importance of quota structure reform while highlighting OAS development programs that benefit Caribbean nations. In addition, officials at the U.S. Mission to the OAS worked with their counterparts from Mexico to review the OAS’s assessed quota structure and to consult on alternatives that would adjust all member states’ quotas so that no member pays more than 50 percent of the OAS’s assessed contributions. According to State officials, the four largest contributing member states, including the United States, have agreed on the importance of quota structure reform. State officials added that quota structure reform efforts were bolstered by the selection of a reform-minded OAS Secretary General in 2015. However, State officials told us that while it will be difficult, it is not impossible for OAS member states to reach consensus on reforming the organization’s assessed quota structure by October 2018.

Several issues among the member states have impeded the progress of State’s strategy, according to State and OAS officials. These issues include the following:

State and OAS officials told us that regional political tensions have complicated OAS member states’ ability to reach consensus on quota structure reform. According to State officials, Venezuela’s contentious political relationship with the OAS has hindered progress on various efforts promoted by the United States, including quota structure reform. State officials added that Venezuela has actively worked against the OAS to undermine the normal procedures of the organization. State officials told us that some member states have at times supported Venezuela during committee votes. For example, according to State officials, some member states voted against bringing proceedings against Venezuela for violating the Inter-American Democratic Charter in 2016. In this context, State officials emphasized that it was important for member states other than the United States to officially propose resolutions on quota structure reform.
State officials told us that certain member states’ nonpayment of their assessed contributions has also impeded the quota structure reform effort and contributed to financial difficulties at the OAS. Venezuela has publicly expressed its unwillingness to pay its assessed contributions, according to State officials. Additionally, as of November 2016, the OAS projected that five member states would be more than $17 million in arrears on their assessed contributions to the OAS by the end of 2016. Brazil—the OAS’s second largest contributor—and Venezuela had not fully paid their assessed contributions for 2015 and 2016, which accounted for approximately 99 percent of the more than $17 million in arrears that member states owed the OAS. State officials told us that the large amounts owed by a few member states had contributed to smaller OAS member states’ reluctance to increase their annual assessed quotas to ensure that no member state provides more than 50 percent by 2018. According to State officials, the United States repeatedly urged the Brazilian government to pay its arrears and 2016 contribution as soon as possible. Brazil recently paid its arrears in full for 2015 and 2016 and its assessed contribution for 2017. Thus, as of April 5, 2017, the remaining arrears for all member states at the OAS had fallen to just over $7 million. State officials told us that on April 28, 2017, Venezuela officially notified the OAS of its intent to withdraw from the organization.

According to OAS officials, the OAS currently lacks a mechanism to penalize member states for not paying assessed contributions, unlike the other three organizations. OAS officials told us that OAS committees are discussing the potential for defining negative consequences for member states in arrears.

According to State officials, the OAS’s next opportunity to discuss quota structure reform at the ministerial level will be at its General Assembly meeting in Mexico City in June 2017. The Mexican government announced that the main theme of the meeting will be strengthening dialogue and cooperation in the OAS. State officials said that high-level engagement between member states’ officials will be needed to promote quota structure reform. They informed us that efforts to reform the assessed quota structure continue at the working level and that they are seeing some progress toward reform.

We provided a draft of this report for comment to State, the Departments of Agriculture and Health and Human Services, the U.S. Agency for International Development, the OAS, PAHO, IICA, and PAIGH. The Departments of Agriculture and Health and Human Services and the U.S. Agency for International Development stated that they did not have any comments on our report. State provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of State, the Secretary of Agriculture, the Secretary of Health and Human Services, the Administrator of the U.S.
Agency for International Development, the Secretary General of the Organization of American States, the Secretary General of the Pan American Health Organization, the Director General of the Inter-American Institute for Cooperation on Agriculture, the Secretary General of the Pan-American Institute of Geography and History, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9601, or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

This report responds to a request for GAO to review several issues related to the Organization of American States (OAS), the Pan American Health Organization (PAHO), the Inter-American Institute for Cooperation on Agriculture (IICA), and the Pan-American Institute of Geography and History (PAIGH). In this report, we (1) determine the amounts and percentages of U.S. contributions assessed by these organizations and voluntary contributions paid to them in calendar years 2014 to 2016, and (2) describe the Department of State’s (State) efforts to comply with requirements in the Organization of American States Revitalization and Reform Act of 2013 (Reform Act) regarding a strategy for reform of the assessed quota structure of the OAS.

To determine the amounts and percentages of contributions assessed by the OAS, PAHO, IICA, and PAIGH to the United States and other member states, as well as the amounts and percentages of additional voluntary contributions paid to these organizations, we reviewed externally audited budget reports for calendar years 2014 and 2015. For calendar year 2016, we reviewed budget documents from the four organizations and corroborated the accuracy of the data with the organizations and the U.S. agencies that provide funds to these organizations. For assessed contributions, we reviewed the organizations’ assessed quota structures. We report the quota structure percentage assessed to the United States over these 3 years and the corresponding U.S. assessed contribution amounts for the same time period, based on our analysis of data provided by the organizations. We determined the 2014 and 2015 data to be sufficiently reliable for the purpose of reporting the United States’ assessed contributions and quota percentages because these data had been externally audited. To determine the reliability of the 2016 data, we reviewed budget documents that have not yet been audited, discussed these data with knowledgeable officials at the organizations and U.S. agencies, and corroborated them with these officials and U.S. agencies. We determined the data were sufficiently reliable for the purpose of reporting the United States’ assessed contributions and quota percentages in 2016. These data reflect the quotas assessed to the United States and do not reflect total payments made by the U.S. government to the organizations’ regular budgets, which include other miscellaneous payments. For voluntary contributions, we reviewed the same externally audited reports and data from the organizations to obtain the amounts contributed by member states and calculated the proportion of the United States’ voluntary contributions compared with those of the other member states.
The four organizations under review have different categories of voluntary funds, depending on their source and intended use. For consistency purposes, we worked with State and officials from the OAS, PAHO, and IICA to establish our definition of voluntary contributions as funds given from governments to the organizations for implementing specific projects outside their respective countries. In accordance with this definition, we considered the following categories of voluntary contributions: “specific funds” at the OAS, “government financing of voluntary contributions” at PAHO, and “external resources by financing source” for each member state at IICA. We determined the 2014 and 2015 data to be sufficiently reliable for the purpose of reporting the United States’ voluntary contributions as a percentage of all members’ voluntary contributions because they had been externally audited. To determine the reliability of the 2016 data, we reviewed budget documents that have not yet been audited, discussed these data with knowledgeable officials at the organizations and U.S. agencies, and corroborated them with these officials and U.S. agencies. We determined that the data were sufficiently reliable for the purpose of reporting the United States’ voluntary contributions as a percentage of all members’ voluntary contributions in 2016.

To describe State’s efforts to comply with the Reform Act, we analyzed documents from State regarding its strategy for reform of the assessed quota structure in response to the Reform Act. We also interviewed officials from the U.S. Mission to the Organization of American States, State’s Bureau of International Organization Affairs, and the OAS Secretariat for Administration and Finance.

We conducted this performance audit from July 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Member States of Four Inter-American Multilateral Organizations. Note: Venezuela officially notified the OAS of its intent to withdraw from the organization on April 28, 2017.

In addition to the contact named above, Pierre Toureille (Assistant Director), Julia Jebo Grant (Analyst-in-Charge), Paul Sturm, Leslie Stubbs, Kira Self, and Rhonda Horried made key contributions to this report. In addition, David Dayton, Martin de Alteriis, Neil Doherty, and Alex Welsh provided technical assistance.

The United States belongs to several inter-American organizations, including the OAS, PAHO, IICA, and PAIGH, which promote democracy, security, health care, agricultural development, and scientific exchange in the Western Hemisphere. The United States helps finance these organizations' operating expenses through assessed contributions (fees) that are based in part on the size of the U.S. economy relative to those of other members. The Reform Act required State to submit a strategy identifying, among other things, a path toward the adoption of reforms to the OAS's assessed quota structure to ensure that no member will pay more than 50 percent of OAS assessed contributions. In addition, the United States provides the OAS, PAHO, and IICA with project-specific voluntary contributions. GAO was asked to review U.S. financial contributions to these four organizations.
In this report, GAO (1) determines the amounts and percentages of U.S. contributions assessed by these organizations and voluntary contributions paid to them in calendar years 2014 to 2016, and (2) describes State's efforts to comply with the Reform Act's requirements regarding a strategy for reform of the assessed quota structure of the OAS. GAO analyzed documents and interviewed officials from State, the Department of Health and Human Services, the U.S. Agency for International Development, the U.S. Department of Agriculture, and the four organizations. GAO also analyzed the four organizations' annual audited financial reports.

The United States' assessed contributions constituted over 57 percent of total assessed contributions by member states to four inter-American organizations from 2014 to 2016. These organizations are the Organization of American States (OAS), the Pan American Health Organization (PAHO), the Inter-American Institute for Cooperation on Agriculture (IICA), and the Pan-American Institute of Geography and History (PAIGH). During this time, the annual U.S. percentages (or quotas) of these organizations' assessed contributions remained about the same. The United States also provided voluntary contributions to three of these organizations, as shown in the table.

In response to a requirement in the Organization of American States Revitalization and Reform Act of 2013 (Reform Act), the Department of State (State) submitted to Congress a strategy that included working with OAS member states toward ensuring that the OAS would not assess any single member state a quota of more than 50 percent of all OAS assessed contributions. State officials told GAO that reaching member state agreement on assessed quota reform by 2018 will be difficult, although not impossible. State officials informed GAO that State continues to implement a strategy that includes engaging with other OAS member states, such as Canada and Mexico, to explore assessed quota reform options. For example, State officials have consulted with their counterparts from Mexico to review the OAS's assessed quota structure and to consult on alternatives that would adjust all member states' quotas so that no member state's quota exceeds 50 percent of the OAS's assessed contributions. According to State and OAS officials, obstacles to assessed quota reform include tensions among member states. For example, State officials noted that Venezuela's contentious political relationship with the OAS has hindered progress on various reforms, including assessed quota reform. State officials explained that some member states' failure to fully pay assessed contributions from previous years and smaller member states' reluctance to increase their annual assessed contributions have also impeded assessed quota reform efforts.
A few organizations have outsourced parts of their operations for many years. Recently, however, interest in more widespread use of outsourcing has increased dramatically. This trend is documented in a recent research report which states that outsourcing has become “a growing business phenomenon and possibly even a cultural phenomenon.” Early outsourcing focused on relatively low-skilled support functions such as janitorial, food service, guard, or data entry services. However, a recent international outsourcing study found that outsourcing in some areas, such as information technology—including information systems development—is growing rapidly. Some organizations have also begun to outsource functions dependent on information technology, such as customer service, research and development, logistics management, and finance and accounting. While outsourcing is growing, the concept is not clearly or uniformly defined. Definitions of outsourcing can be viewed as ranging from the prolonged use of consultants to perform a simple task to transferring the responsibility for performing an entire internal function to a third party. In our recently issued glossary of terms associated with government privatization initiatives, we state that “under outsourcing a government entity remains fully responsible for the provision of affected services . . . while another entity operates the function or performs the service.” We also state that this approach “includes contracting out, granting of franchises to private firms, and the use of volunteers to deliver public services.” Consistent with this definition, we have defined outsourcing for this report as applied to nonfederal organizations as contracting out the continuous performance of a process, activity, or task that was previously performed within the organization. One form of outsourcing used in the federal arena is cross-servicing, an arrangement where one agency provides support services to another agency on a reimbursable basis. Cross-servicing can range from providing computer and software timesharing services to full-service administrative processing. An analogous arrangement in the private sector is the use of shared service centers, which are locations or organizations within a large organization that provide common services to operating locations or business units. In accordance with the definition of outsourcing used in this report, these intra-organization arrangements are discussed as alternative strategies that nonfederal organizations have used to improve their financial operations. In addition, when considering outsourcing, an organization may focus on an entire function or portions of a function. To illustrate, an organization’s finance and accounting function is comprised of processes, activities, and tasks. The payroll process includes various activities, such as calculating employees’ gross compensation for the pay period, determining and deducting amounts from gross compensation to calculate net pay, and printing and distributing payroll checks. Each activity, in turn, includes one or more tasks. The activity of calculating employee compensation, for example, includes such tasks as collecting time cards, tabulating time worked or leave taken per employee, and multiplying hours worked or leave taken per employee by the appropriate pay rate. An organization would have the option to outsource an entire process, one or more of the activities, or merely one or more of the tasks. 
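The function, process, activity, and task hierarchy just described can be represented as a simple nested structure. The sketch below uses the payroll example from the preceding paragraph; the particular activity and task names are illustrative assumptions, not an authoritative decomposition of any organization's payroll operations.

```python
# Illustrative sketch of the function -> process -> activity -> task hierarchy,
# using the payroll example. Names are illustrative only.

finance_and_accounting = {
    "payroll": {                                      # a process within the function
        "calculate employee compensation": [          # an activity within the process
            "collect time cards",                     # tasks within the activity
            "tabulate time worked or leave taken per employee",
            "multiply hours worked or leave taken by the appropriate pay rate",
        ],
        "determine deductions and compute net pay": [
            "apply tax withholding",
            "apply benefit and other deductions",
        ],
        "print and distribute payroll checks": [
            "print checks",
            "distribute checks to employees",
        ],
    },
}

# An organization could outsource the entire "payroll" process, a single
# activity, or only selected tasks within an activity.
for activity, tasks in finance_and_accounting["payroll"].items():
    print(f"{activity}: {len(tasks)} task(s)")
```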
OMB Circular A-76, first issued in 1966, encourages agencies to obtain reliable, internal cost and performance information before acquiring goods and services from the private sector through outsourcing. The circular established the policy and procedures federal agencies must follow in determining whether existing federal government commercial activities should be outsourced. OMB officials told us that they consider outsourcing to be a viable tool for improving financial management in the federal government and stated that they would like to see federal agencies outsource as much of their accounting and finance functions as possible to other government agencies or the private sector. At present, a number of federal agencies have their payroll processed by the Department of Agriculture’s National Finance Center through a cross-servicing arrangement. Several other agencies, including the Department of Justice and the Agency for International Development, have outsourced portions of their financial operations. Recently, there has been considerable interest in outsourcing DOD’s support activities. In August 1995, the Deputy Secretary of Defense directed the military services to make outsourcing of support activities a priority. A May 1995 report by the Commission on Roles and Missions of the Armed Forces identified financial management as a prime candidate for outsourcing in DOD. Further, an August 1996 Defense Science Board (DSB) Task Force report on outsourcing found that functions such as accounting, payroll, travel reimbursement, invoicing, debt management, and other support functions are routinely performed in the private sector by a range of outside vendors and recommended that those functions be outsourced. A November 1996 DSB report estimated that such finance and accounting outsourcing could result in substantial savings for DOD. However, subsequently, while agreeing that the potential for savings exists, we questioned the size of DSB’s savings estimates. In addition, a number of studies to determine the feasibility of outsourcing certain DOD finance and accounting activities, such as travel processing, payroll, and contract disbursements, have been requested by the Congress and are now under way. The objectives of our review are to develop information on (1) the extent to which selected private sector and nonfederal public organizations used outsourcing as a strategy to improve financial operations and reduce costs, (2) existing outsourcing vendor capacity to perform finance and accounting operations, and (3) factors associated with successful outsourcing. To accomplish our reporting objectives, we obtained information on finance and accounting outsourcing from an outside consultant with unique information on the business process outsourcing market. We also conducted an extensive literature and Internet search on the subject of outsourcing, and we interviewed individuals from a number of organizations representing varying perspectives on outsourcing finance and accounting operations. Our interviews included 12 judgmentally selected private sector corporations, 1 large international quasi-government organization, 1 state government organization, and 1 city government organization. These organizations represent a broad cross-section of U.S. industries ranging from commodities to the transportation and manufacturing industries. 
We selected the organizations based on either (1) literature citations indicating that they had outsourced one or more accounting functions or (2) size, industry, and level of accessibility. All but one of these organizations had annual revenues in excess of $1 billion and about two-thirds had annual revenues exceeding $15 billion. Our interviews of cognizant key officials in these organizations focused on first determining if they used finance and accounting outsourcing to improve their financial management organization. If they used this type of outsourcing, we asked them to describe the factors they considered important to a successful outsourcing arrangement. We also asked executives at each organization about outsourcing trends they were aware of in their respective industries. We contracted with G-2 Research, Inc., to provide us with detailed information related to the finance and accounting outsourcing market. G-2 Research is the market research firm that identified business process outsourcing (BPO), which includes finance and accounting outsourcing, as an emerging market approximately 5 years ago. At the time of our fieldwork, G-2 specialized in tracking and compiling data on the BPO market. G-2 informed us that it uses annual interviews of nearly 3,000 corporate executives as well as its regular contacts with major outsourcing vendors, to track the outsourcing market and identify trends and major market participants. Through an extensive literature and Internet search, we identified outsourcing vendors, customers, trade organizations, consultants, and knowledgeable academicians and obtained information on the finance and accounting outsourcing industry capabilities and trends. We subsequently talked with 13 major outsourcing vendors and 12 consultants who are active in the outsourcing industry. Many of our discussions with private sector organizations addressed information of a sensitive business or proprietary nature. To protect this type of information, our report does not identify either the outsourcing service providers or end-users that we talked to. Finally, we synthesized and analyzed the numerous documents acquired from our search or provided by the various organizations we interviewed to determine procedures and factors that are generally accepted as vital to successful outsourcing. We provided a draft of this report to our consultants at G-2 Research, Inc. and the President of the Private Sector Council. We incorporated the technical clarifications they provided as appropriate in the report. We also provided relevant sections of the report to those organizations included in our review that are referred to in specific examples throughout the report and incorporated their comments as appropriate. Our work was conducted in accordance with generally accepted government auditing standards from July 1996 through September 1997. Organizations have a number of options for improving their financial management operations. In addition to outsourcing they can, among other things, reengineer their business processes, consolidate the performance of functions in shared service centers, or implement new enterprisewide accounting and information systems. Over the past several years, organizations have to varying degrees and in varying combinations used all of these financial management improvement approaches. 
The use of outsourcing to help improve finance and accounting activities is growing and there are clear indicators that more large private sector organizations are actively considering it as an option to improve efficiency and drive down administrative costs. A 1996 American Management Association (AMA) member survey found that finance and accounting outsourcing, while used less frequently than other types of outsourcing, has grown rapidly since 1994. Eighteen percent of the 619 responding firms were outsourcing all or part of one or more finance and accounting functions other than payroll. Payroll was outsourced in whole or in part by 38 percent of the responding firms. In addition, the survey found that larger firms—those with 10,000 or more employees—were more likely to outsource one or more financial processes. This buttresses the results of a 1995 survey, conducted for an outsourcing firm, of 400 senior managers of medium and large firms on business change strategies. That survey found that 25 percent of the respondents considered payroll and accounting processes best suited for outsourcing. A research organization has predicted that business process outsourcing (including finance and accounting outsourcing) will grow by over 20 percent a year until the year 2000. Representatives of all of the 15 organizations we spoke to said they had considered outsourcing as a management strategy for improving their financial management operations, and 12 organizations had outsourced portions of their finance and accounting functions. However, only 3 of the 12 organizations outsourced one or more entire processes such as accounts payable, pension payments, general ledger accounting, fixed asset accounting, or excise and property tax administration. Thirteen of the 15 organizations indicated that they had also used other options to improve their financial operations, such as reengineering all or parts of their accounting and finance functions, establishing a shared service center, or upgrading their financial systems. For example, officials from one company stated that their approach to improving financial management consisted of: (1) consolidating the accounting function into as few locations as possible and having each location move to a single system to accomplish the function, (2) simplifying existing processes, (3) developing systems that capture data at the point the transaction originated, regardless of the location within the organization, and (4) outsourcing all or parts of processes that could be done more efficiently or effectively by a third party. Through these steps, the company was able to reduce the number of accounting staff by approximately two-thirds in a 15-year period. However, according to organization officials, most of the efficiencies achieved were due to actions other than outsourcing, and they estimated that outsourcing accounted for less than 10 percent of the total savings. Officials from another organization told us that they were able to reduce the number of personnel involved in processing accounting transactions by an estimated 90 percent over a 12-year period through the use of shared service centers, consolidating systems, and reengineering their accounting processes. Even companies that outsourced entire processes had reengineered these processes or obtained new accounting systems prior to or at the same time as outsourcing. For example, one company reengineered its accounting processes concurrently with outsourcing a number of accounting processes. 
According to a company official, the reengineered processes along with the outsourcing arrangements contributed greatly to large productivity improvements.

Most organizations that have used outsourcing for portions of their finance and accounting operations—particularly larger companies—have contracted for services that typically involve discrete, repetitive, labor-intensive tasks. According to the AMA 1996 outsourcing survey, over 70 percent of the organizations that used outsourcing for clerical, bookkeeping, or data processing portions of their finance or accounting functions only outsourced parts of these processes. A good example of this task-oriented type of outsourcing is in the accounts payable process, where a company might handle all the activities and tasks associated with managing accounts payable in-house, but contract with an outsourcing vendor to carry out the check printing and mailing tasks. Another example is payroll processing, where a company might handle the human resource and payroll tasks of entering data and computing employee gross pay amounts in-house, but contract with a vendor for net pay computation and paycheck printing and distribution tasks. Such arrangements might also require the vendor to do other tasks, such as accumulating employee pay information and preparing and distributing W-2 statements at the end of the year.

Recent research has shown that, in general, although outsourcing organizations did not fully achieve the benefits they envisioned, most achieved at least partial benefits. For example, the 1996 AMA member survey found that less than 25 percent of the responding member firms that outsourced finance and accounting activities and established cost reduction, time reduction, or quality improvement goals believed they had fully achieved their goals. However, most respondents indicated that they had partially met their goals for these areas. Some of the 12 outsourcing organizations we contacted, while not willing to share specific results with us, indicated that they had realized their anticipated benefits, while others indicated that they had not. The outsourcing vendor of one of the organizations, with its client’s consent, told us that it reduced the number of staff processing accounts payable by almost one-third, cut the amount of time to process accounts payable transactions by over two-thirds, and was able to implement a computer matching process for about 30 percent of the firm’s purchase transactions. In contrast, one company that outsourced pension payments believed that its costs actually increased. In addition, to the extent that cost reduction is an outsourcing goal, reductions in the number of an organization’s finance and accounting personnel do not in themselves reduce the organization’s overall costs, because personnel reductions may be offset by increased outsourcing vendor contract costs.

That outsourcing has generally been limited to repetitive, labor-intensive tasks may be attributed, in part, to the lack of a mature vendor marketplace with sufficient capacity to provide the larger scale, more complex finance and accounting services often required by large organizations. However, there are indications that the outsourcing market may be on the verge of dramatic growth. Some experts in the field have estimated that in 3 to 5 years, organizations with large, complex finance and accounting operations will be able to outsource their entire accounting or finance function.
To date, existing capacity concerns appear to have been a significant factor in organizations with large, complex finance and accounting operations moving relatively slowly toward outsourcing. Three organizations that outsourced portions of their accounting function, for example, found only one vendor that was capable of providing the breadth of service they required. According to one consulting firm, while large organizations have been able to find vendors to outsource portions of their finance and accounting functions, they have not been able to find vendors that could take over the entire function. The firm’s study of payroll practices at over 50 firms confirmed that payroll outsourcing may not be a viable option for larger operations because outsourcing vendors presently cannot offer them payroll services at cost-effective rates. One organization found that it could not outsource its payroll process because it was too complex for payroll vendors. One state government decided to consider outsourcing its payroll processing to a third party and requested pricing and service information from major payroll outsourcing vendors. None of the vendors could perform the proposed outsourcing at what the state deemed to be a competitive price. The proposed fees were above what it cost the state to do payroll internally, and the vendors refused to consider a long-term contract with the state. We have previously reported that the lack of a competitive marketplace affects the cost savings that can be achieved through outsourcing. Consequently, if available outsourcing vendors cannot provide desired services at a competitive rate, an organization procuring outsourcing services may not achieve its outsourcing objectives.

Our work with outsourcing users, vendors, and consultants identified the following five key factors often associated with successful decisions to outsource finance and accounting operations.

A corporate outsourcing policy can ensure that all factors associated with an outsourcing decision are identified and addressed. Concerns over whether and the extent to which outsourcing may affect an organization’s goals and operations must be carefully considered. Such a policy should be explicit on the extent to which outsourcing will be used to reduce costs, improve efficiency, or increase organizational flexibility. The overall view gleaned from the 15 organizations and outsourcing experts interviewed is that the outsourcing decision and implementation should involve the same type of rigorous analysis, careful planning, and management involvement as any other major business decision. Many experts believe that a corporate policy that describes and requires a structured outsourcing process is necessary for successful outsourcing. For example, on the basis of extensive research, one organization developed an outsourcing policy to guide and provide a structured process for deciding whether to outsource. In part, the policy requires (1) clear objectives for outsourcing, (2) a recognition of all available service delivery options (e.g., internal staff versus third party), (3) a rigorous cost/benefit analysis, (4) buy-in by all affected parties, and (5) communication with employees throughout the outsourcing decision-making process. Under another organization’s outsourcing policy, the designated outsourcing team was to identify those functions that were candidates for outsourcing and apply specified criteria for deciding what functions to outsource.
Outsourcing proposals were to include a detailed risk analysis that addressed the proposed outsourcing’s potential impact on such key areas as cost, savings, service quality, system conversion, retraining of personnel, and the potential for disruption of services. Management believed that the outsourcing arrangements developed under this policy were successful in that the company met its outsourcing goals. In contrast, two organizations we talked to had negative outsourcing experiences, which they attributed, in part, to not having an outsourcing policy in place that clearly prescribed a methodology for analyzing costs and for considering all relevant risks associated with such an outsourcing decision. Not until one of these organizations had obtained bids from vendors and was near awarding a contract were concerns raised about the validity of cost estimates and the increased legal and computer security risks. The organization’s president decided to cancel the planned outsourcing until the organization had an outsourcing policy and cost estimation methodology in place. Another organization made the decision to outsource in a rapid fashion without going through a structured process that would be specified in a corporate outsourcing policy. The resulting outsourcing arrangement was not well received by the organization’s employees and resulted in confusion over the purpose and extent of the arrangement. Decisions on outsourcing are becoming part of the organizational strategic planning process with the goal of increasing competitiveness in the world market. In considering whether to outsource, organizations have assessed functions strategically in terms of their relationship to core competencies. Core competencies, as defined by G-2 Research, Inc., a firm specializing in business process outsourcing market research, are the essential, defining functions of an organization—those things that if given to an external party, would create a competitor or result in the dissolution of the company. A hospital’s core competencies, for example, would be those directly associated with caring for patients. Core competencies have also been defined as those few functions within a company where the company can dominate, that are important to the customers, and that are embedded in the organization’s systems. Involvement of an organization’s senior management in the process was often identified as essential to ensuring that an organization’s competencies are assessed strategically on an organizationwide basis rather than by function. Other functions are considered non-core and can be considered either critical or noncritical. Non-core critical functions are important to an organization but are not directly linked to what the organization perceives as its primary mission. If not performed at world-class levels, however, these functions can place an organization at a competitive disadvantage or even endanger its existence. For most organizations, such functions would include finance, accounting, and human resources administration. Noncritical functions are those that supply no competitive advantage and that even if performed poorly, may not seriously harm an organization. Examples generally include cafeteria services, groundskeeping, and laundry. Outsourcing arrangements for many organizations usually start with noncritical functions. 
As the organization becomes more accustomed to relying on others to perform simple, noncritical functions, the organization tends to consider outsourcing a more diverse and critical set of activities. Reasons that an organization might want to outsource one or more of its critical but non-core functions (such as finance and accounting) include the potential for (1) significant cost savings, (2) access to needed skills and expertise, (3) access to the latest technology or world-class capabilities, (4) accelerated implementation of planned improvements, and (5) freeing management resources for other purposes. For example, after identifying its core competencies, one organization decided that outsourcing should be used as an option to improve business performance through reducing costs in non-core areas, leveraging the expertise of best-in-class service providers, and providing a better career path for employees in non-core areas. As part of its determinations, senior management decided that any function that was not a core competency could be subject to outsourcing. Factors considered included whether or not a function (1) involves decision-making, (2) adds significant value to the company’s bottom line, or (3) interfaces directly with its customers. Company officials informed us that they believed there was no long-term risk to the company’s competitive position in outsourcing these types of processes. Another organization we spoke to also identified its core and non-core activities within the accounting and financial management function. In this case, management decided not to outsource any finance and accounting activity that was important to maintaining control of the business, was important to maintaining the company’s competitive position, involved company confidential information, involved a critical expertise that the company could not afford to lose, or was used to develop staff for managerial advancement. Based on these criteria, the organization concluded that managerial analysis and decision support work were core activities, but that other activities, such as the clerical aspects of the accounts payable and payroll processes, were non-core and therefore candidates for outsourcing. Officials of this organization also pointed out the need to maintain control of the outsourced activities or tasks and said it was important to keep some level of knowledge and expertise in-house. Benchmarking generally involves identifying organizations that have developed world-class processes and then, using applicable performance measures (such as cost per transaction, average processing time, or error rate), comparing an organization’s performance to that of the world-class organization. Benchmarking lets an organization know how well it is doing and puts it in a better position to assess which improvement initiative, if any, best fits its situation. A key result of an effective benchmarking process will be a full understanding of the extent and nature of any existing deficiencies in an organization’s finance and accounting operations. As discussed later in this report, this understanding of deficiencies in an organization’s finance and accounting function is critical to successfully establishing and monitoring an outsourcing contract. A recent international outsourcing study found that the most important single factor contributing to successful outsourcing was that the activity was well defined. 
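The benchmarking comparison described above can be sketched briefly. In the example below, the performance measures mirror those named in the text (cost per transaction, average processing time, and error rate), but every figure is hypothetical and the world-class baselines are assumed values, not data from any organization discussed in this report.

```python
# Illustrative benchmarking sketch with hypothetical figures: compare internal
# performance measures against assumed world-class baselines to locate the
# largest gaps.

internal = {                       # assumed internal baseline data
    "cost_per_transaction_usd": 7.50,
    "avg_processing_days": 12.0,
    "error_rate_pct": 2.4,
}
world_class = {                    # assumed world-class benchmark data
    "cost_per_transaction_usd": 2.25,
    "avg_processing_days": 3.0,
    "error_rate_pct": 0.5,
}

for measure, benchmark in world_class.items():
    actual = internal[measure]
    gap_pct = (actual - benchmark) / benchmark * 100
    print(f"{measure}: internal {actual}, benchmark {benchmark}, gap {gap_pct:.0f}%")
```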
Many of the companies we talked to have used benchmarking to help them determine if any finance or accounting processes or activities they perform were in need of improvement and the extent of improvement needed. Once processes have been benchmarked, an organization is able to determine areas that need improvement and decide on a means of improvement. While some organizations have chosen outsourcing as a first-line means of process improvement, many others have relied more heavily on reengineering processes, installing enterprisewide systems, or establishing shared service centers as their primary approach to improving financial management. If through benchmarking an organization finds a particular internal process to be world-class, it might decide that little could be gained by outsourcing. In situations like this, some organizations may offer their services externally and turn the function into a profit center. If all or part of an organization’s finance and accounting function were determined to be world class, and if the organization determines that the function is not one of its core competencies, it may decide to outsource the function to free management resources. Consistent, objective, and measurable baseline data on operations compared with baseline data from a world-class organization is essential to a reliable benchmarking assessment. One outsourcing industry consultant said, for example, that an organization must develop reliable, quantitative data on costs as well as other objective measures of the area being considered for outsourcing. Failure to obtain reliable data can increase the risk that data will be manipulated to achieve a desired result. For example, the organizational component being considered for outsourcing may have an incentive to exclude relevant costs so that the costs of its operations appear to be lower than they actually are. Outsourcing vendors performing this analysis, on the other hand, may be inclined to include as many costs as possible. As discussed previously, vendor capacity for large, complex accounting and finance functions is a consideration in the outsourcing decision process. Although vendor capacity is expected to grow rapidly over the next several years, the existence of a competitive marketplace for outsourcing services is a factor that will affect the efficiencies and cost savings that can be achieved. A number of vendors, outsourcing users, and outsourcing experts we talked to recommended that large organizations with complex processes pilot test a segment of a process before attempting to outsource an entire process. The segment chosen for the pilot should be one that is most amenable to outsourcing. Then, after a successful pilot, the organization could gradually expand the scope of the outsourcing arrangement. A pilot test approach to finance and accounting outsourcing would give the organization time to streamline its outsourcing process while allowing the vendor marketplace to build up the capacity to perform services for large organizations. Organizations must also research the quality of vendor services. An official of one organization that had outsourced an accounting process stated that its vendor had a turnover rate much higher than the organization’s internal staff that previously performed the process. He said that the high vendor turnover presented a problem as his employees were constantly dealing with new vendor employees who did not know the organization’s business. 
One large company decided to bring its outsourced payroll process back in-house and do its own payroll processing because of the poor service it received from its vendor. Organizations must also consider if process improvement is an institutional goal and whether or not the services offered by vendors represent an improvement over the effectiveness and efficiency of existing processes. One organization credited its outsourcing vendor with bringing “cutting edge” technology to some of the organization’s accounting and finance processes. Another organization for which process improvements were important decided not to outsource its accounts payable after determining that none of the potential accounts payable vendors would be able to improve upon its current business operations. Organizations cited outsourcing’s potential impact on personnel as a particularly sensitive issue in considering whether and what finance and accounting operations to outsource. Outsourcing is likely to result in a reduction in the number of an organization’s employees. Addressing sensitive issues associated with potential job loss and other possible adverse personnel impacts will be critical to dealing with potential resistance to outsourcing and to building momentum for change. According to one organization, the issue of job loss resulting from outsourcing is the most difficult hurdle to overcome in reaching a decision to outsource all or part of an organization’s finance and accounting operations. In some instances, we were told, organizations determined that internal opposition to outsourcing was so widespread and vocal that planned outsourcing was halted until employee concerns were addressed. For example, we were told that addressing the concerns expressed by labor unions was considered to be of paramount importance to one organization in reaching a decision to outsource all or part of a function. In addition, a member of the organization’s management stated that it is difficult to convince an organization’s senior managers to reduce their “power” (based on the number of people reporting to them) by firing or laying off personnel in conjunction with outsourcing all or part of a function. When to tell employees outsourcing is being considered, how to involve employees in the outsourcing process, and whether to require the vendor to offer employment opportunities to the displaced employees were repeatedly identified as essential elements to any outsourcing decision. One organization, citing its corporate philosophy supporting its employees as its most valuable asset, told us that it has strived to avoid employee layoffs even when selected jobs were phased out through process improvement or outsourcing. Instead, it has relied primarily on attrition and job transfers to reduce numbers of employees and has offered, at certain times, retirement incentive packages to employees whose jobs have been phased out. The organization’s top management advised employees when it was seriously considering outsourcing and sought appropriate employee input in the outsourcing decision. One organization’s officials told us that they have delayed outsourcing specific activities because the affected employees may not have the necessary skills to transfer to other areas in the accounting department. 
Organization officials stated that, rather than displacing existing staff, they will continue to pursue internal operating efficiencies and will wait for the employees to leave through reassignment or normal attrition before outsourcing those positions. Officials from another organization that adopted this approach also stated that they believed this strategy contributed to successful outsourcing. The organization believed that because it kept employees informed, it was able to prevent rumors and speculative gossip from becoming a barrier to outsourcing its general ledger accounting processes. The organization also arranged for the outsourcing vendor to offer employment to virtually all of the displaced employees. The vast majority of the employees transferred to the vendor and, for the most part, continued to perform the same duties they had before outsourcing. The outsourcing organization expressed its belief that the employees were treated fairly and equitably and now have more career opportunities because the vendor has multiple career paths. In contrast, we were told that some of another organization’s employees may have first learned about the potential outsourcing from its outsourcing vendor. This created a great deal of resentment and ill-feelings and was a major barrier to the outsourcing arrangement, particularly because none of the outsourced employees were to be retained by the outsourcing vendor. Another risk posed by outsourcing is the loss of important corporate knowledge. One organization that outsourced accounting activities required the vendor to hire key employees—those who had decision-making responsibilities related to the outsourced area. However, the outsourcing vendor found, while training its new staff, that it needed the unique accounting process knowledge held by lower-level staff. The outsourcing vendor then hired the former lower-level employees to train its new employees and discovered that many of these lower-level jobs required a much longer time to learn than planned. We identified two key factors that, once organizations decided to outsource, were associated with an increased likelihood of the outsourcing arrangements’ ultimate success. These key factors are (1) maintaining sufficient expertise and controls to effectively oversee outsourced operations and (2) establishing a well-defined, results-based contract with the outsourcing vendor. An organization needs to address these critical factors not only to help ensure that it is meeting its cost reduction and/or process improvement goals, but also to avoid an increased risk of unexpected cost increases, poor quality services, or even fraud. In addition, effective oversight controls are critical to ensuring that outsourcing vendors effectively discharge their fiduciary responsibilities for the funds and other resources entrusted to them. Ensuring that an organization maintains sufficient expertise and has an effective set of controls in place to oversee the vendor’s operations were identified as essential to successful outsourcing. In recent testimony, we stressed the need for effective contract monitoring and oversight to evaluate contractor compliance and performance in outsourcing arrangements, but found that such monitoring was not always done effectively. The PA Consulting Group’s report on its 1996 survey also raised concerns about the adequacy of contract oversight. 
The study found that contract monitoring was very important in ensuring that organizations reached their outsourcing goals and that service quality and customer satisfaction were the areas most commonly monitored by organizations. However, the study also found that the level of skill and sophistication necessary for effectively monitoring outsourcing arrangements may exceed the capability of many organizations. One organization's experience illustrates the importance of maintaining sufficient in-house expertise to exercise effective oversight controls over an outsourcing vendor's performance. The organization's human resources department had outsourced the pension payment process believing that they could "wipe their hands clean of it." However, the organization has subsequently had to consider bringing the process back in-house, in part because it has become increasingly concerned about whether it can retain sufficient expertise in-house to effectively oversee the outsourcing vendor and ensure that the organization's retirees are properly paid. Common contract monitoring techniques that organizations have used to maintain control over the outsourcing vendor's operations include retaining the right to audit the vendor's operations, requiring periodic reports on cost and service performance, and holding meetings to discuss performance. For example, one organization stipulated in all of its agreements with vendors that its internal audit department must be able to audit the vendor's records, procedures, policies, and controls related to the outsourced function. Another organization that outsourced a number of accounting processes received monthly performance reports that tracked vendor performance against contractual expectations. In addition, the organization had ongoing meetings with the vendor to discuss performance concerns and any potential process improvement changes and conducted periodic evaluations to determine whether outsourcing goals were met and to establish future expectations. A well-defined, results-based contract—one based on clearly defined, results-oriented performance measures rather than on the processes to be followed—is recognized as one of the primary ways of helping to ensure that an organization will achieve the desired level of benefits. While a results-based contract will not guarantee good contractor performance, it will help in measuring performance and the extent to which expected benefits have been achieved. Service providers, consultants, and end users agreed that outsourcing contracts should be results- rather than process-based. According to one service provider, process-based requests for proposals and contract documents often preclude providers from developing the most appropriate outsourcing solution. Officials of one organization that had outsourced major portions of its accounting and finance functions told us that they developed a results-oriented request for proposal that was very descriptive of both current functions and what was expected from the vendor in terms of improved performance. The contract detailed performance expectations for the outsourced processes, including the timing of reports and the presentation, availability, and quality of accounting and finance information, and established a related set of performance measures intended to help determine the extent to which the goals of the outsourcing arrangement were achieved.
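To make the idea of a results-oriented performance measure concrete, the following is a minimal sketch in Python. The invoice data and the 99.5 percent target are hypothetical and are not drawn from any organization we reviewed; the sketch only shows how a customer could check whether a vendor met a dollar-value payment measure of the kind described below.

    # Minimal sketch of checking a results-based performance measure:
    # the share of invoice dollar value paid on time and correctly.
    # All figures are hypothetical, including the 99.5 percent target.

    TARGET = 0.995

    # Each invoice: (dollar amount, paid on time?, paid correctly?)
    invoices = [
        (12_500.00, True, True),
        (3_200.00, True, False),   # paid on time but for the wrong amount
        (48_000.00, True, True),
        (760.00, False, True),     # paid late
        (15_900.00, True, True),
    ]

    total_value = sum(amount for amount, _, _ in invoices)
    compliant_value = sum(
        amount for amount, on_time, correct in invoices if on_time and correct
    )

    performance = compliant_value / total_value
    print(f"Share of dollar value paid timely and correctly: {performance:.3%}")
    print("Measure met" if performance >= TARGET else "Measure missed")

Because the measure is stated in terms of results, it leaves the vendor free to decide how the invoices are processed, a point the next paragraph illustrates.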
Outsourcing contracts that are process rather than results driven may also tend to reinforce any existing finance and accounting deficiencies and limit the vendor in implementing efficiencies and changing processes to improve operations and reduce costs. For example, more staff would be required to process accounts payable if the contract required the vendor to manually match the purchase order, receiving report, and invoice than if the contract specified the vendor was responsible for making timely and correct payments for 99.5 percent of the dollar value of the invoices processed. In the latter case, the vendor would be able to take advantage of such techniques as electronic data interchange and evaluated receipts, thus enabling payments to be based upon efficient and accurate computerized matches of key elements on the purchase order and receiving report. We are sending copies of this letter to the Ranking Minority Member of the Senate Committee on Armed Services and the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations, the House Committee on National Security, the House Committee on Government Reform and Oversight, and the Subcommittee on Government, Management, Information and Technology of the House Committee on Government Reform and Oversight. We are also sending copies to the Director, Office of Management and Budget, and the Secretary of Defense. Copies will be made available to others on request. If you or your offices have any questions concerning this report, please contact either Lisa G. Jacobson or David R. Warren at (202) 512-9095, or (202) 512-8412, respectively. Major contributors to this report are listed in appendix II.
Accounts Payable: Processing and paying vendor invoices for business expenditures incurred. Activity begins when an invoice is coded and approved for payment. Excludes purchasing and receiving activities.
Invoice Processing: Match invoice, purchase order, and receiving report; resolve discrepancies; approve and code invoices for payment; maintain appropriate files.
Payment Processing: Prepare checks, electronic payments, and wire transfers; initiate and process recurring payments; respond to vendor inquiries.
Benefits Plan Accounting: Activities to account, track, and report on benefit plans.
Billing: Revenue accounting and the documentation and issuance of bills for products sold and services rendered.
Cash Application: Recording and tracking payments received from customers.
Credit and Collection: The extension of credit to customers and collecting of slow pay and past due receivables from customers.
Fixed Asset Accounting: Recording and controlling the physical records and financial activities related to fixed assets of the corporation.
General Accounting: Overseeing, coordinating, and controlling the accounting records and closing activities of the corporation. Includes maintaining the general ledger, preparing the trial balance and other finance reports, and related activities.
Inventory Accounting: The accounting for and valuation of raw, intermediate, and finished materials, spare parts, supplies, or products received, transferred, retired, or sold.
Payroll: The payment of wages, salaries, and pensions in accordance with organizational policies. Activity begins at the point of entry into the payroll system. Does not include benefits administration.
Time and Attendance Processing: The input of employee time cards into the payroll system.
Travel and Entertainment Accounting: Overseeing and processing expense reports and cash advances.
Cash Advances: Approve and disburse cash advances; resolve cash advance problems.
Expense Reports: Verify that expense reports meet guidelines; approve expense reports; prepare payments; resolve travel expense problems; distribute travel and entertainment expenses.
Travel and Entertainment (T&E) Card Administration: Oversee issuance of T&E cards; monitor use of T&E cards.
Financial Budgeting and Forecasting: Establishing long-term and short-term financial plans, budgets, and forecasts. The focus is on developing detailed financial budgets and controlling actual expenses by comparing them to an historical budget.
External/Consolidated Reporting: Reporting consolidated financial information as dictated by generally accepted accounting principles, Securities and Exchange Commission regulations, and statutory, subsidiary, and international reporting requirements.
Banking and Cash Management: The activities involved in the handling of cash flows and bank relations for noninvestment accounts.
Cost Accounting: Calculating product or service fixed, variable, and semi-variable costs. Developing allocation schemes and analyzing cost variances.
Financial Analysis and Management Reporting: Analyzing financial and operational information to assess, interpret, and predict business performance to support management decisions. Evaluating capital investment decisions. Gathering, evaluating, and presenting financial, operating, and contractual information about proposed business transactions for internal management.
Tax Planning: Examining tax issues for the corporation to optimize tax effectiveness of management decisions.
Treasury and Trust Management: The activities associated with securing funds to meet the corporation's cash flow needs and investing any excess funds.
Neal Gottlieb, Auditor-in-Charge
Adrienne Friedman, Senior Auditor
Lenny Moore, Auditor
| Pursuant to a congressional request, GAO provided information on the use of outsourcing to achieve cost savings, management efficiencies, and operating flexibility in finance and accounting operations, focusing on: (1) the extent to which selected private-sector and nonfederal public organizations used outsourcing as a strategy to improve financial operations and reduce costs; (2) existing outsourcing vendor capacity to perform finance and accounting operations; and (3) factors associated with successful outsourcing.
GAO noted that: (1) GAO's analysis of the experiences of 15 private-sector organizations coupled with discussions with industry experts and outsourcing vendor officials and a literature review revealed that nonfederal organizations use a variety of strategies to improve their financial operations and reduce costs; (2) while all the private-sector organizations GAO reviewed considered outsourcing as a financial improvement option, they have relied principally on other strategies, such as consolidating systems and operating locations or reengineering business processes, to achieve their financial improvement objectives; (3) to the extent that these private organizations have outsourced any portion of their finance and accounting operations, such outsourcing was generally limited to routine, mechanical tasks, such as check writing or payroll processing; (4) only 3 of the 15 organizations GAO contacted had outsourced an entire process within a finance and accounting function; (5) the existing limited capacity of outsourcing vendors to perform larger, more complex finance and accounting operations may have constrained wider use of outsourcing by these organizations; (6) experts in the outsourcing field have estimated that it may be 3 to 5 years before this type of capacity is widely available; (7) the experiences of the organizations in GAO's review and GAO's analysis of pertinent literature may provide some lessons for future federal agency outsourcing decisions; (8) factors considered as part of the outsourcing decision process and often associated with successful outsourcing were: (a) establishing an outsourcing policy that specifies what process and criteria to follow in making the outsourcing decision that will achieve the organization's overall goals; (b) performing a strategic analysis to determine the organization's core competencies; (c) benchmarking the organization's processes against those of world-class organizations to determine comparable costs and identify any deficiencies in its operations; (d) performing market research to determine whether a competitive market exists for the outsourcing services the organization needs; and (e) considering carefully the ramifications of potential job loss or other possible adverse personnel impacts that could occur as a result of outsourcing; and (9) after an organization decided to outsource, two key factors identified with successful outsourcing arrangements were: (a) maintaining sufficient expertise and control to effectively oversee the outsourcing vendor; and (b) establishing a results-oriented contract that included appropriate performance measures. |
The NASA Authorization Act of 2010 directed NASA to develop a Space Launch System, to continue development of a crew vehicle, and prepare infrastructure at Kennedy Space Center to enable processing and launch of the launch system. To fulfill this direction, NASA formally established the SLS program in 2011. Then, in 2012, the Orion project transitioned from its development under the Constellation program—a program that was intended to be the successor to the Space Shuttle but was canceled in 2010 due to factors that included cost and schedule growth—to a new development program aligned with SLS. To transition Orion from Constellation, NASA adapted the requirements from the former Orion plan with those of the newly created SLS and the associated ground systems programs. In addition, NASA and the European Space Agency (ESA) agreed that ESA would provide a portion of the service module for Orion. Figure 1 provides details about the heritage of each SLS hardware element and its source as well as identifies the major portions of the Orion crew vehicle. The EGS program was established to renovate portions of the Kennedy Space Center to prepare for integrating hardware from the three programs as well as launching SLS and Orion. EGS is made up of nine major components, including: the Vehicle Assembly Building, Mobile Launcher, software, Launch Pad 39B, Crawler-Transporter, Launch Equipment Test Facility, Spacecraft Offline Processing, Launch Vehicle Offline Processing, and Landing and Recovery. See figure 2 for pictures of the Mobile Launcher, Vehicle Assembly Building, Launch Pad 39B, and Crawler-Transporter. NASA established an agency baseline commitment—the cost and schedule baselines against which the program may be measured—for each program. NASA has committed to be ready to conduct one test flight, EM-1, no later than November 2018. During EM-1, the SLS vehicle is scheduled to launch an uncrewed Orion to a distant orbit some 70,000 kilometers beyond the moon. All three programs—SLS, Orion, and EGS—must be ready on or before this launch readiness date to support this integrated test flight. While the SLS and EGS program cost and schedule baselines are tied to the uncrewed EM-1 mission, the Orion program’s cost and schedule baselines are tied to a second, crewed mission—EM-2. See table 1 for program baseline information. All three programs are entering the integration and test phase of the development life cycle—which our prior work has shown to be when problems are commonly found and schedules tend to slip. In general, programs have schedule and cost reserves in order to address challenges that arise during development. Funded schedule reserve is extra time, with the money to pay for it, in the program’s overall schedule in the event that there are delays or unforeseen problems. Cost reserves are additional funds that can be used to mitigate problems during the development of a program. For example, cost reserves can be used to buy additional materials to replace a component or, if a program needs to preserve schedule, cost reserves can be used to accelerate work by adding extra shifts to expedite manufacturing and save time. With less than two years until the committed November 2018 launch readiness date for EM-1, the three human exploration programs—Orion, SLS, and EGS—are making progress, but schedule pressure is escalating as technical challenges continue to cause schedule delays. 
All three programs face development challenges in completing work, and each has little to no schedule reserve remaining to the EM-1 date— meaning they will have to complete all remaining work with minimal delay during the most challenging stage of development. This includes completing design, production, and integration work at each program as well as integrating the hardware and software from the three programs in preparation for launch. Integration and testing is the phase where problems are most likely to be found, and the amount of potential problems is increased due to the two levels of integration—each inherently complex program must be integrated individually and then as an interdependent, combined enterprise. Because all three programs must be ready for launch to occur, a redesign of a single program’s component, a test failure, or a significant hardware or software integration issue in any one area could delay the launch readiness date for all three programs. The schedule pressure is intensified by the low levels of cost reserves held by all three programs to mitigate problems during development. In some cases, however, even if the programs held higher levels of cost reserves, using them to gain back schedule would be difficult because— at this late stage of development—work has become more sequential and there are fewer opportunities for workarounds, which the programs have relied on until now to preserve schedule. With little to no schedule or cost reserves remaining as the programs finalize production and enter integration and testing activities, the EM-1 launch readiness date is in a precarious position. While NASA officials told us they are assessing factors that could contribute to an EM-1 schedule slip, they have not committed to a timeline for completing that assessment or proposing an amended launch schedule, if needed. Therefore, it is unclear when Congress will be informed of NASA’s findings and any impact those findings might have on NASA’s fiscal year 2018 budget request. Since we last reported on the human exploration programs in July 2016, the programs have made progress toward completing development, including the following: Orion: After changing the heatshield design following a December 2014 test flight in which NASA determined that not all aspects of the original monolithic design would meet the more stringent requirements for EM-1 and EM-2 when the capsule will be exposed to greater temperature variance and longer durations, the Orion program and contractor reported that production of heatshield blocks is underway and production quality is very high. Orion officials also stated that the risks associated with the ESM main engine—a heritage in-space maneuvering engine from the Space Shuttle program—have been largely addressed. The program was concerned with the state of internal components given their age; however, the engines have completed rounds of acceptance and vibration testing following replacement of valves and other components. In addition, the program and contractor stated that they have addressed all probable causes of crew module airbag anomalies from the December 2014 test flight. These bags are designed to inflate upon touchdown in the ocean to properly orient the crew module; however, some did not properly inflate or leaked due in part to the bags inflating before they were outside of the vehicle, which placed the bags under stress. 
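As a rough illustration of how the schedule reserve concept works in practice, the following Python sketch tracks a program's remaining reserve as delays are absorbed. The reserve and the delays are hypothetical and are not the actual figures for Orion, SLS, or EGS.

    # Minimal sketch of schedule-reserve burn-down. All figures are hypothetical.

    schedule_reserve_days = 80   # reserve remaining at the start of integration and test

    delays = [                   # delays encountered, in days
        ("Tooling issue on a structural test article", 25),
        ("Late delivery of a supplier component", 20),
        ("Anomaly found during qualification testing", 45),
    ]

    for cause, days in delays:
        schedule_reserve_days -= days
        status = "reserve remaining" if schedule_reserve_days >= 0 else "launch readiness date slips"
        print(f"{cause}: {days} days -> {schedule_reserve_days} days ({status})")

Once the reserve is exhausted, every additional day of delay pushes the launch readiness date, which is why the low reserve levels described below leave the programs so little margin for error.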
While the root cause for the failures remains unknown, given the mitigation steps being taken, program officials now have high confidence in the system’s performance going forward. SLS: Program officials stated that the solid rocket boosters have completed the second of two planned qualification tests and that the program has also implemented a design change to address an issue where the solid rocket propellant might loosen from the insulation on the inside of the booster casing, which may increase the risk of booster failure. The program office is performing analysis to ensure the mitigation meets expected safety margins. The program is producing test and flight unit core stage hardware and, according to program officials, has made progress manufacturing panels which will be bolted together to create the intertank section of the core stage. They also stated that early attempts to bend some of the thick materials necessary for the core stage led to unrepairable cracks, but the contractor has updated its processes, which has allowed recently produced panels to pass inspections. The program has also begun integrated structural testing of its EM-1 in-space propulsion stage, Launch Vehicle Stage Adapter, and Orion Stage Adapter. EGS: Program officials stated that work for the Crawler Transporter and Launch Vehicle Offline Processing facility is complete. In addition, all 10 of the platforms that will allow access to the integrated SLS and Orion vehicles during final assembly in the Vehicle Assembly Building have been installed, according to EGS officials. Additionally, they have started verification and validation, the process by which the program assesses whether systems are capable of meeting their intended purpose and are being developed according to agency requirements, at the Multi-Payload Processing Facility—where spacecraft fueling will be performed. Program officials stated that 8 of the 20 pieces of launch equipment and accessories—for example, umbilical connections from the launch tower to the vehicle—have been built, finished testing, and are ready for installation onto the Mobile Launcher. The magnitude of the schedule delays that the programs have experienced amid this progress, however, foreshadows a likely schedule slip for the November 2018 EM-1 launch readiness date. In addition, each program is facing risks that will likely consume what little schedule reserve exists, and low cost reserves limit mitigation options to achieve the planned launch readiness date. These ongoing challenges include the following: Orion: The Orion program has no schedule reserve to EM-1 and the delivery of the European Service Module (ESM) and completion of flight software are the primary and secondary critical paths—or the path of longest duration through the sequence of activities that determines the earliest completion date—for both Orion and EM-1 as a whole. In December 2015, Orion officials stated that the program had zero schedule reserve to EM-1 and we reported in July 2016 that the program had already experienced several ESM development delays that impacted the ESM delivery, and that further delays could cause the EM-1 launch to slip. As of the ESM’s critical design review in summer 2016, ESA has delayed ESM delivery to the Orion program from January 2017 to April 2017, and senior NASA officials stated the delivery will likely slip to August 2017 or later. 
Program officials stated that the delays are largely due to NASA, ESA, and the ESA contractor underestimating the time and effort necessary to address design issues for the first production ESM and the availability of parts from suppliers and subcontractors. For example, the contractor found welding failures in the ESM’s propulsion tanks, and a number of parts deliveries have been late. Orion program officials told us that following delivery from ESA, they will need the service module for 12 months for integration with the crew module and testing prior to providing the completed Orion spacecraft to the ground systems at Kennedy Space Center. This means if ESA’s delivery date of the service module slips to August 2017, the Orion program will not be ready to deliver Orion to Kennedy Space Center until August 2018. NASA officials stated that they would not be able to maintain a launch readiness date of November 2018 if Kennedy Space Center receives the Orion spacecraft after July 2018. As a result, the November 2018 launch readiness date is likely unachievable unless NASA identifies further mitigation steps to accommodate delays. In addition, the Orion program faces a number of other technical challenges including software delays and hardware design, but has limited cost reserves to address them until fiscal year 2018 when more cost reserves will be available. As we found in July 2016, the Orion program continues to employ most of its available budget to fund current work and holds most of its cost reserves in fiscal years 2019 and 2020. Program officials told us that the Orion program is schedule-constrained at this point, meaning that even if additional funding were available, it could not alleviate all schedule pressure to EM-1. SLS: The SLS program currently reports having the most program-level schedule reserve of the three programs—approximately 80 days— however, schedule pressure is mounting as the program completes production and integration and test events grow near. Development of the core stage—which functions as the SLS’s fuel tank and structural backbone—is the program’s critical path, meaning any delay in its development reduces schedule reserves for the whole program. A number of important events must successfully take place before the core stage, or the vehicle at large, are ready for EM-1. First, the contractor is scheduled to complete production of the core stage flight unit and deliver it to Stennis Space Center for testing by September 2017. However, officials stated that they have exhausted schedule reserve for this delivery date to address “expected unknowns” with hardware processing due to this being the first time they have built the core stage. Further, according to officials, welding on the core stage was stopped for months due to low weld strength in the liquid oxygen and liquid hydrogen tanks caused by a program and contractor decision to change the weld tool configuration during fabrication. The altered configuration produced different welds that the program has had to confirm are within specification. While officials indicate that they now have a corrective action plan in place, and welding resumed in April 2017, they did not provide detail on the impact to program schedule reserve. Once production of the core stage flight unit is complete, the program plans to deliver it to Stennis Space Center for testing. 
At Stennis Space Center, the core stage will be filled with cryogenic hydrogen and oxygen for the first time—a considerable process on its own, as officials stated they were finding and mitigating hydrogen leaks for the entire life of the Shuttle program—and will undergo a "green run" test. During the green run, the core stage flight model—integrated with four RS-25 engines—will be fired for about 500 seconds to test a flight-like engine-use profile. Following this, the program has 20 days of reserves—less any delivery delays and delays from issues that arise during testing—from the completion of the green run test until the core stage must be shipped to Kennedy Space Center to begin integration with the boosters, upper stage, and Orion as well as all EGS equipment. Should further challenges arise during final production and testing, the program's 80 days of reserve will likely be reduced. As we found in July 2016, NASA baselined the SLS program with cost reserves of less than 2 percent, even though guidance for Marshall Space Flight Center—the NASA center with responsibility for the SLS program—establishes standard cost reserves for launch vehicle programs of 20 percent when the baseline is approved. NASA has not changed its cost reserve posture for this program since that time, meaning the program still has limited cost reserves to address risks and challenges. EGS: EGS program officials stated they used the majority of the 6 months of schedule reserve the program had when we reported in July 2016 to address, among other issues, complications at the Launch Equipment Test Facility and with the Mobile Launcher's ground support equipment installation. The program now has 28 days of schedule reserve, which program officials stated is being held for integrated operations before EM-1, and zero days remaining for any further delays for EGS-specific projects. Without any schedule margin remaining for the EGS-specific projects, the program will be challenged to complete its remaining work, which includes umbilical testing, ground support equipment and umbilical installation, and verification and validation testing. These efforts all carry schedule risks with expected delays that, if not mitigated, total 14 months. Program officials stated that they are actively trying to mitigate these schedule risks; however, they acknowledged that some of the mitigation tactics they are considering—such as performing some portion of these efforts concurrently—would increase the complexity of the work. In addition, EGS officials indicated they are planning to consolidate some verification and validation testing to streamline the test flow, which they said would increase schedule risk but not technical risk. The program is also considering implementing additional work shifts to create additional schedule margin. The internal EGS delays and the cascading delays from Orion and SLS that EGS may have to absorb—as the program responsible for final integration of the three programs—make it likely that NASA will not achieve the November 2018 launch readiness date. As with Orion and SLS, we previously found in July 2016 that the EGS program is operating with limited cost reserves to address future construction and software risks. For example, we found that when NASA approved the program's baseline, the program had cost reserves of only 4 percent.
While Kennedy Space Center—which is responsible for the EGS program—does not have cost reserve guidance in place, guidance from other NASA centers establishes higher levels of cost reserves at this stage of development. Further, according to EGS officials, the program used all of its fiscal year 2017 reserves in recent years and has limited reserves in fiscal year 2018, hindering EGS's ability to address any remaining challenges. GAO's work on acquisition best practices has shown that success in development efforts such as these depends on establishing an executable business case based on matching requirements and resources before committing to a new product development effort. In our prior reviews of NASA's human exploration programs, we have found that all three programs have been using aggressive schedules and that SLS and EGS have low reserve levels compared to NASA standards. We have also previously found that both SLS and Orion cost and schedule estimates—which inform their cost and schedule baselines—were unreliable when compared to best practices. In July 2016, we recommended that the NASA Administrator direct the Human Exploration and Operations Mission Directorate to re-evaluate SLS and EGS cost reserves as it finalized its schedule and plans for EM-1 during a build-to-synchronization review planned for summer 2016, in order to take advantage of all available time and resources, maximize the benefit of available cost reserves, and verify that the November 2018 launch readiness date remained feasible. This review was intended to demonstrate that the integrated launch vehicle, crew vehicle, and ground systems will perform as expected to meet EM-1 objectives. NASA concurred with our recommendation, and as of January 2017, senior NASA officials told us that they have reordered integration activities to try to meet the EM-1 launch schedule, but that further analysis indicates they will have to delay the launch readiness date. These officials stated that no decision has been made and that NASA has not committed to a timeline in which to report its findings. Fiscal year 2018 is the last year before the November 2018 EM-1 launch readiness date. With fiscal year 2018 budget discussions ongoing, until it receives updated EM-1 schedule information, the Congress will be in the position of determining NASA's appropriations based on a launch readiness date that is likely not achievable. Should NASA determine it is likely to exceed its cost estimate baseline by 15 percent or miss a milestone by 6 months or more, NASA is required to report those increases and delays—along with their impacts—to the Congress. Given that these three human space exploration programs represent more than half of NASA's current portfolio development cost baseline, a cost increase or delay could have substantial repercussions not only for these programs but for NASA's entire portfolio. A principle of federal internal controls is that managers should externally communicate the necessary quality information to achieve an entity's objective and address related risks.
If NASA’s ongoing assessment of the November 2018 EM-1 launch readiness date reveals that a new, more realistic, date is warranted, prolonging any decisions regarding the extent of delays and cost overruns—no matter the magnitude—until after deliberations on NASA’s fiscal year 2018 budget request would increase the risk that both NASA and the Congress continue making decisions potentially involving hundreds of millions of taxpayer dollars based on schedules that may no longer be feasible. Human spaceflight and exploration programs are complex and require significant time and effort to design and develop hardware and software. While the Orion, SLS, and EGS programs are working toward a target EM-1 launch readiness date of November 2018, the threats to each program’s schedule continue to mount, and the schedule reserve of each program is either very limited or nonexistent. In addition, as the target EM-1 launch readiness date nears—now less than two years away—the flexibility of the schedule to allow for replanning is likewise reduced. To this point, the programs have replanned program-level efforts and scheduled concurrent work despite the risks involved, and NASA is replanning integration efforts at the enterprise-level in an attempt to find additional schedule margin. However, beyond that, the programs have little to no cost reserves remaining to deal with challenges that may arise. By continuing to work toward this deadline, these programs are positioned to make potentially risky decisions in attempting to meet a schedule that is likely unachievable. Until NASA completes an analysis of factors that could contribute to an EM-1 schedule slip and reports on the feasibility of either its current or revised schedule, program managers will remain under pressure to achieve a goal that may be untenable, and the Congress will continue to base important budget decisions on an unclear picture of the time and money needed to support future human space exploration efforts. In order to ensure that the Congress is able to make informed resource decisions regarding a viable EM-1 launch readiness date, we recommend that the NASA Administrator or Acting Administrator direct the Human Exploration and Operations Mission Directorate to take the following two actions: Confirm whether the EM-1 launch readiness date of November 2018 is achievable, as soon as practicable but no later than as part of its fiscal year 2018 budget submission process; and Propose a new, more realistic EM-1 date if warranted and report to Congress on the results of its EM-1 schedule analysis. NASA provided written comments on a draft of this report. In the written comments, NASA concurred with both recommendations and stated that maintaining the November 2018 launch readiness date is no longer in the best interest of the programs. Further, NASA stated that it is reassessing the launch readiness schedule and anticipates proposing a new date by September 2017. These comments are reprinted in Appendix II. NASA also provided technical comments, which were incorporated as appropriate. In its response, NASA stated that “many of the specific concerns referenced in the report are no longer concerns, and new ones have appeared and caution should be used in referencing the report on the specific technical issues, but the overall conclusions are valid.” We agree with NASA that the situation with these programs is dynamic and that risks and challenges change over time. 
However, in commenting on the report, NASA did not provide us with evidence that they have overcome specific technical issues that we highlight. Further, in at least the instance of the European Service Module, the situation has deteriorated for the program since we sent the draft copy of the report to NASA for comment. At the time we sent the report for comment, the delivery date for the service module was April 2017, and officials anticipated it could slip to August 2017 or later. The delivery date is now September 2017 with a risk of an additional 2-month delay. We continue to believe that NASA is facing several technical issues across all three programs that will contribute to a delay for Exploration Mission-1. We are sending copies of this report to NASA’s Acting Administrator and to appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. To assess the extent to which the National Aeronautics and Space Administration’s (NASA) Orion Multi-Purpose Crew Vehicle (Orion), Space Launch System (SLS), and Exploration Ground Systems (EGS) programs have risks that affect their progress towards meeting their Exploration Mission 1 (EM-1) cost and schedule commitments, we compared current program status information against program cost and schedule baselines. To assess the risks for the Orion, SLS, and EGS programs and the extent to which those risks may impact cost and schedule commitments, we obtained and reviewed quarterly reports and the programs’ risk registers, which list the top program risks and their potential cost and schedule impacts, including mitigation efforts to-date. We interviewed program and contractor officials on technical risks, potential impacts, and risk mitigation efforts underway and planned. To evaluate the program’s performance in preparing for EM-1, we reviewed program plans and schedules and compared them to actual program performance data found in quarterly program status reviews and program update briefings to assess whether program components and software were progressing as expected. We also compared current program data against program budget information to assess funding needs and cost growth. To determine the programs’ cost and schedule posture and to assess the availability of the programs’ cost and schedule reserves approaching EM-1, we analyzed its budget documentation, interviewed program officials from all three programs with insight into the programs’ budget and schedule and discussed how reserves were being used to mitigate known risks. Our work was performed at Johnson Space Center in Houston, Texas; Marshall Space Flight Center in Huntsville, Alabama; Kennedy Space Center in Titusville, Florida; Lockheed Martin Space Systems Company in Houston, Texas; and NASA headquarters in Washington, DC. We conducted this performance audit from July 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cristina T. Chaplain (202) 512-4841 or chaplainc@gao.gov. In addition to the contact named above, LaTonya Miller (Assistant Director), Molly Traci (Assistant Director), Juli Digate, Susan Ditto, Laura Greifner, Carrie Rogers, Ryan Stott, Roxanna T. Sun, and Marie Ahearn made key contributions to this report. | NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit: the SLS vehicle; the Orion capsule, which will launch atop the SLS and carry astronauts; and EGS, the supporting ground systems. NASA's current exploration efforts are estimated to cost almost $24 billion—to include two Orion flights and one each for SLS and EGS—and constitute more than half of NASA's current portfolio development cost baseline. All three programs are necessary for EM-1 and are working toward a launch readiness date of November 2018. In a large body of work on this issue, including two separate July 2016 reports, GAO has found that these programs have a history of working to aggressive schedules. The House Committee on Appropriations report accompanying H.R. 2578 included a provision for GAO to assess the acquisition progress of the Orion, SLS, and EGS, programs. This report assesses the extent to which these programs have risks that affect their progress toward meeting their commitments for EM-1. To do this work, GAO assessed documentation on schedule and program risks and interviewed program and NASA officials. With less than 2 years until the planned November 2018 launch date for its first exploration mission (EM-1), the National Aeronautics and Space Administration's (NASA) three human exploration programs—Orion Multi-Purpose Crew Vehicle (Orion), Space Launch System (SLS), and Exploration Ground Systems (EGS)—are making progress on their respective systems, but the EM-1 launch date is likely unachievable as technical challenges continue to cause schedule delays. All three programs face unique challenges in completing development, and each has little to no schedule reserve remaining between now and the EM-1 date, meaning they will have to complete all remaining work with little margin for error for unexpected challenges that may arise. The table below lists the remaining schedule reserve for each of the programs. The programs all face challenges that may impact their remaining schedule reserve. For instance the Orion program's European Service Module is late and is currently driving the program schedule; the SLS program had to stop welding on the core stage—which functions as the SLS's fuel tank and structural backbone—for months after identifying low weld strengths. Program officials stated that welding resumed in April 2017 following the establishment of a corrective action plan; the EGS program is considering performing concurrent hardware installation and testing, which officials acknowledge would increase complexity; and each program must integrate its own hardware and software individually, after which EGS is responsible for integrating all three programs' components into one effort at Kennedy Space Center. Low cost reserves further intensify the schedule pressure. Senior NASA officials said they are analyzing the launch schedule and expect that the EM-1 date will have to slip, but they have yet to make a decision on the feasibility of the current date or report on their findings. 
With budget discussions currently ongoing for fiscal year 2018, the last year prior to launch, Congress does not yet have insight into the feasibility of the EM-1 launch date, or the repercussions that any cost increase or delays could have in terms of cost and schedule impacts for NASA's entire portfolio. Unless NASA provides Congress with up-to-date information on whether the current EM-1 date is still achievable, as of the time the agency submits its 2018 budget request, both NASA and Congress will continue to be at risk of making decisions based on less than the entire picture and on likely unachievable schedules. NASA should confirm whether the current EM-1 date is still achievable no later than as part of its fiscal year 2018 budget submission, and propose a new, realistic EM-1 launch readiness date, if warranted, and report its findings to Congress. NASA concurred with both recommendations and agreed that EM-1 will be delayed. |
For more than a decade, we have reported that the lack of a modern integrated financial management system to produce accurate and reliable information has hampered NASA’s ability to oversee contracts and develop good cost estimates for NASA’s programs. In 1990 NASA’s lack of effective systems and processes for overseeing contractor’s activities prompted us to identify NASA’s contract management as a high-risk area. In July 2002 we reported that the accuracy of NASA’s $5 billion cost growth estimate for the International Space Station was questionable and that the agency might have difficulty preparing a reliable life-cycle cost estimate because a modern integrated financial management system was not available to track and maintain the data needed for estimating and controlling costs. NASA’s lack of a fully integrated financial management system has also hurt the agency’s ability to collect, maintain, and report the full cost of its projects and programs. For example, in March 2002 we testified that NASA was unable to provide us with detailed support for the amounts that it reported to the Congress as obligated against space station and related shuttle program cost limits as required by the National Aeronautics and Space Administration Authorization Act of 2000. IFMP is designed as an integrated system to replace the separate and incompatible financial management systems used by NASA’s 10 centers. According to the IFMP Program Director, the new system will provide better decision data, consistent information across centers, and improved functionality. Unlike NASA’s previous efforts to modernize its financial management system, IFMP does not rely on a single contractor. NASA selected System Applications and Products (SAP) to provide its “best of suite” software and contracted for implementation services under a separate contract. NASA has also broken the project into modules that will be implemented individually—instead of all at once—on the basis of the availability of proven commercial-off-the-shelf software products. IFMP initially segmented implementation into 14 modules but has since reorganized the program into 9. Some of these modules may be further broken out and others added, depending on the scope of OMB’s e-Government initiatives and other considerations. Table 1 describes the modules that currently comprise the system and their status. When NASA announced in June 2003 that the Core Financial module had been implemented at all of its centers, only about two-thirds of the financial events needed for day-to-day financial operations and external reporting had been implemented. In addition, we found that NASA deferred implementation of other key core financial module capabilities and created new problems in recording certain financial transactions. Thus, full functionality of the system has been deferred, increasing the risk of additional costs and potentially affecting the implementation of future modules. As we reported in April 2003, NASA is not following key best practices for acquiring and implementing IFMP. For example, NASA has not analyzed the interdependencies between selected and proposed IFMP components, and it does not have a methodology for doing so. By acquiring IFMP components without first understanding system component relationships, NASA has increased its risk of implementing a system that will not optimize mission performance and will cost more and take longer to implement than necessary. 
In addition, in implementing the Core Financial module, NASA faces risks in the areas of user needs and requirements management because the agency did not consider the information needs of key system users and is relying on a requirements management process that does not require the documentation of detailed system requirements prior to system implementation and testing. The reliability of the current life-cycle cost estimate—which has fluctuated since the initial estimate and is 14 percent greater than the previous estimate established in February 2002—is uncertain because disciplined cost-estimating processes required by NASA and recognized as best practices were not used in preparing the estimate. Specifically, IFMP’s life-cycle cost estimate did not include the full cost likely to be incurred during the life of the program. In addition, breakdowns of work to be performed—or Work Breakdown Structure (WBS)—were not consistently used in preparing the cost estimate. In cases where work breakdowns were used to prepare the estimate, the agency did not always provide a clear audit trail. NASA has made some improvements in the program’s financial management, such as hiring personnel to provide oversight and consistency for the cost-estimating process. However, until NASA uses more disciplined processes such as breakdowns of work in preparing the program’s cost estimate, the reliability of the life-cycle cost estimate will be uncertain and the program will have difficulty with controlling costs. Since the program began, cost estimates for IFMP’s 10-year life cycle— fiscal years 2001 through 2010—have fluctuated and increased overall, as shown in figure 1. NASA’s current IFMP life-cycle cost estimate totals $982.7 million—an increase of $121.8 million, or 14 percent, over the previous IFMP life-cycle cost estimate. The estimate comprises IFMP direct program costs, NASA’s enterprise support, and civil service salaries/benefits. (See table 2.) Although direct program costs decreased by $9.5 million, these costs were shifted to the enterprise support component of the estimate with the program’s decision to fund only 1 year’s worth of operations and maintenance, rather than 2 years’ worth from the direct program budget. In addition, NASA anticipates that operations costs for fiscal years 2007 through 2010—estimated at $137.8 million—will be funded by the NASA Shared Services Center (NSSC), a planned initiative to consolidate various agency services such as purchasing and human resources. (See table 3.) As a result, the fiscal year 2004 budget for the IFMP direct program portion of implementing the system is $497.5 million. In March 2003 an independent cost estimate team concluded that there is an 85 percent confidence level that the direct program portion can be successfully completed with the available funding of $497.5 million. However, the direct program portion represents only about half of the total life-cycle cost estimate. In addition, the team’s conclusion was contingent on two optimistic assumptions: that there would be no schedule disruptions and no increase in requirements. Reflecting OMB guidance and the best practices of government and industry leaders, NASA requires that life-cycle cost estimates be prepared on a full-cost basis, that estimates be summarized according to the current breakdown of work to be performed, and that major changes be tracked to the life-cycle cost. 
OMB guidance calls for a disciplined budget process to ensure that performance goals are met with the least risk and the lowest life-cycle cost, which includes direct and indirect costs, operations and maintenance, and disposal. The Software Engineering Institute (SEI) echoes the need for reliable cost-estimating processes in managing software implementations—identifying tasks to be estimated, mapping the estimates to the breakdown of work to be performed, and having a clear audit trail are among SEI's requisites for producing reliable cost estimates. Despite NASA requirements and OMB and SEI guidance, IFMP did not prepare a full life-cycle cost estimate—that is, all direct and indirect costs for planning, procurement, operations and maintenance, and disposal were not included. For example, the life-cycle cost estimate does not include the following: the cost to operate and maintain the system beyond 2010; the cost of retiring the system; enterprise travel costs, which are provided monthly by the NASA centers; and the cost of nonleased NASA facilities for housing IFMP. In addition, IFMP did not prepare WBS estimates for active modules—that is, those currently being implemented. According to NASA guidance, breaking down work into smaller units facilitates cost estimating and project and contract management, and helps ensure that relevant costs are not omitted. The guidance also states that the WBS should encompass both in-house and contractor efforts. According to the IFMP Deputy Program Director, WBS estimates are not prepared for active modules because information such as contract task orders can be used to prepare the cost estimates. However, there is not one overriding contract in which each module is considered a deliverable at a fixed price. Rather, a module's implementation involves numerous contracts at both the project and center level, many of which can be awarded for a level of effort at agreed-upon fixed rates at various phases of the implementation. Without a WBS estimate for the project as a whole, NASA cannot ensure that all relevant contractor costs are included in the cost estimate. In addition, using contract task orders to prepare the cost estimate would not ensure that government in-house costs are included in the life-cycle cost estimate. Finally, for modules in the planning phase, the program utilized NASA's subject matter experts and professional cost estimators to prepare business case analyses. However, although these analyses contained WBS cost estimates, the audit trail from the WBS estimate to the program's life-cycle cost estimate was not always clear. Without a clear audit trail, it is difficult to determine whether the differences between the detailed WBS estimates and the official program cost estimate are appropriate. The lack of a clear audit trail has been a weakness since the inception of the program. For example, IFMP was unable to provide us with traceable support for its baseline cost estimate for direct program costs. NASA has made some improvements that should help the program prepare better cost estimates. In May 2002 the NASA Administrator appointed an executive to provide leadership and accountability in the direction and operation of the system. The NASA headquarters program office also hired a business manager to oversee and provide consistency for the cost-estimating process and an analyst to review enterprise support costs.
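To illustrate the kind of audit trail that a WBS-based estimate provides, the following is a minimal Python sketch. The modules, work elements, and dollar amounts are hypothetical and are not drawn from IFMP's actual estimates; the point is only that every dollar in the program total can be traced back to a named unit of work.

    # Minimal sketch of a WBS cost rollup with a simple audit trail.
    # All modules, work elements, and dollar amounts are hypothetical.

    wbs = {
        "Hypothetical Module A": {
            "Software licenses": 2_000_000,
            "Implementation contractor": 5_500_000,
            "Government in-house labor": 1_200_000,
            "Operations and maintenance (2 years)": 900_000,
        },
        "Hypothetical Module B": {
            "Requirements definition": 400_000,
            "Implementation contractor": 3_100_000,
            "Training and deployment": 650_000,
        },
    }

    program_total = 0
    for module, elements in wbs.items():
        module_total = sum(elements.values())
        program_total += module_total
        print(f"{module}: ${module_total:,}")
        for element, cost in elements.items():
            # Each line ties a cost back to a named work element.
            print(f"  {element}: ${cost:,}")

    print(f"Program total: ${program_total:,}")

Because the total is built only from named work elements, a reviewer can question any element that is missing, such as disposal costs, and can reconcile any difference between this rollup and the official program estimate.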
Although NASA guidance requires sufficient program schedule margins to manage risks, efforts to complete the integrated system as quickly as possible might have resulted in a schedule that is too compressed to accommodate program challenges, such as personnel shortages and uncertainties about software availability. If the program schedule margin is too compressed, the program could incur additional risks, including added cost growth as well as failure to meet IFMP's schedule objectives. OMB's e-Government initiatives—which aim to streamline agency business processes and eliminate redundant systems governmentwide—could also provide challenges for NASA's IFMP planning. As a result, the program schedule may be optimistic. While implementing the Core Financial module (see table 1), IFMP has faced human resource challenges, and the program continues to face these challenges with other modules. For example, personnel shortages at Marshall Space Flight Center for several months affected the Core Financial project and other projects. In this case, a schedule slip was avoided, but during fiscal year 2002, the shortages resulted in nearly $400,000 in costs for extra hours worked by center employees. Human resource challenges are also affecting the Budget Formulation module. The simultaneous implementation of this module with the Core Financial module—an action advised against by a contractor conducting a lessons-learned study—placed heavy demand on already scarce resources and added complexity to the program. As a result, the schedule for implementing the Budget Formulation module has already slipped. Sometimes, relying more on contractor personnel can alleviate shortfalls in civil service personnel, but a recent Budget Formulation project status report indicated that the implementation contractor might also have difficulties acquiring and/or retaining qualified personnel. The implementation schedules for the remaining modules overlap, putting the program at further risk of schedule slippages. Uncertainty regarding software availability also puts the program at risk for completing the integrated system on schedule. For example, complete software solutions and requirements for IFMP's Contract Administration module have not yet been determined. Although contract-document-generation software is available and tailored to meet the unique interface and reporting requirements of the federal government, the "best of suite" software solution—SAP—does not currently meet these requirements. NASA faces the same challenge with IFMP's Human Resources Management module. NASA's monthly status reports show that the program is working with SAP to develop a software solution for the Human Resources Management module that will meet federal government requirements, but the outcome is uncertain. In addition, the program could adopt an e-Government solution for its Human Resources Management module rather than the SAP solution. Inserting e-Government solutions into IFMP planning—which calls for using "best of suite" software—could create more difficult interface development and a less-integrated system, thus disrupting the program's cost and schedule. E-Government initiatives are already affecting NASA's planning for the payroll, procurement, and travel modules in the integrated system. For example, the payroll function, which was once part of the Human Resources Management module, will likely become a separate module under e-Government.
Similarly, the Contract Administration module has been split into two components: one for procurement document generation, for which software is available although requirements are not finalized, and one for the remainder of NASA's Contract Administration requirements, for which requirements and software are currently unknown. Furthermore, e-Travel could replace the Travel Management module, which has already been implemented. According to the program's fiscal year 2002 Independent Annual Review, e-Government initiatives are forcing the program into a reactionary mode, thus increasing risk to the program's success. The review specifically noted that (1) the benefits of a fully integrated system could be lost under e-Government, (2) the scope of IFMP and timing of future projects' implementation have become uncertain, and (3) cost increases and schedule slippage to accommodate directives may occur. In addition to the uncertain reliability of IFMP's life-cycle cost estimates and optimistic schedules, NASA cannot ensure that the funding set aside for program contingencies is sufficient because the program did not consistently perform in-depth analyses of the potential cost impact of risks and unknowns specific to IFMP, as required by NASA guidance. Moreover, the program did not quantify the cost impact of identified risks, link its risks to funding reserves, or consistently set aside cost contingencies for these risks. NASA guidance stipulates that programs incorporate financial reserves, schedule margins, and technical performance margins to provide the flexibility needed to manage risks. According to the guidance, financial reserves are to be established and maintained commensurate with programmatic, technical, cost, and schedule risks. In other words, cost contingencies should be tailored to the specific risks associated with a particular program or project. In addition, NASA guidance suggests that tools such as Probabilistic Risk Assessment can help in analyzing risk. Although NASA's business case analyses include a risk assessment and recommended reserve levels, we found no evidence that these recommended levels were used in establishing the actual reserve levels for the IFMP module projects. Regardless, the actual levels established did not match the recommended levels in the business case analyses in most cases. We found that reserves for some IFMP modules—in both the planning and active phases—were based not on IFMP-specific risks but on reserve levels for other high-risk NASA programs. For example, for a number of IFMP modules, reserves were set at levels used for spacecraft implementations—typically about 30 percent—because industry experience showed that large cost overruns in system implementations such as IFMP are common. Yet it is unclear whether this reserve margin is adequate for IFMP because the effects of IFMP-specific risks and assumptions—such as uncertainties relating to software, schedule, and OMB's e-Government initiatives—were not analyzed. In addition, some of the enterprises supporting the module projects described their method of establishing funding reserves as a combination of rules of thumb and guesswork. The Budget Formulation module has already experienced shortfalls in its reserves, and project officials expressed concerns that the module's functionality may have to be reduced.
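The tailoring principle in NASA's guidance, that reserves should reflect the expected cost impact of a program's own risks rather than a percentage borrowed from unrelated programs, can be illustrated with a minimal sketch. The risk names, probabilities, and dollar figures below are hypothetical and are not IFMP data.

```python
# Illustrative sketch only: sizing a funding reserve from program-specific risks
# by quantifying each risk's expected cost impact. All figures are hypothetical.

# Hypothetical risks: (probability of occurrence, cost impact in millions if it occurs)
risks = {
    "software solution not ready on schedule": (0.40, 8.0),
    "e-Government directive forces rework": (0.30, 5.0),
    "requirements growth during implementation": (0.50, 4.0),
}

expected_impact = sum(probability * impact for probability, impact in risks.values())
print(f"Expected cost impact of identified risks: ${expected_impact:.1f} million")

# A reserve tied to the program's own risks, rather than a flat percentage
# borrowed from unrelated programs (e.g., the roughly 30 percent used for spacecraft).
baseline_cost = 100.0  # hypothetical module cost, in millions
print(f"Risk-based reserve as a share of the baseline: {expected_impact / baseline_cost:.0%}")
```

The Budget Formulation module's experience, described next, shows what can happen when reserves are set without this kind of program-specific analysis.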
As of April 2003, the module had expended its baseline reserves, which were established at about 20 percent on the basis of the level of risk for space flight missions—not on the risks specific to the module. Although the project was able to bring its budget back into balance by obtaining an agreement with SAP to limit overtime pay to time in excess of 50 hours per week, its remaining reserves total only $83,000 to cover all contingencies—including those that could require changes to the Budget Formulation module. NASA requires programs to quantify the cost impact of high-criticality risks and to determine to what extent reserves may be exhausted, should the risks become reality. According to SEI, estimating the potential cost and schedule impact for all identified risks is an element of good estimating practice. Quantifying the cost impact of identified risks and clearly and consistently linking the risk database to funding reserves helps programs develop realistic budget estimates. While IFMP identifies program risks, analyzes their severity, and plans mitigation actions, the program typically does not prepare a cost impact analysis for identified risks, nor does it consistently link identified risks to funding reserves to ensure that funds are available, should the risk occur. For example, in February 2003, the Travel Management Project found that some components of the Travel Management module might not satisfy individual centers, be funded, or be technically feasible. However, the cost impact of this risk, as well as others, was not quantified. Similarly, in June 2003, the Budget Formulation module did not quantify the cost impact of a number of identified risks. Without estimating the potential cost impact of these risks, NASA cannot determine whether it has sufficient reserves to cover the risks—which is particularly problematic for Budget Formulation, since virtually no reserves remain for this module. Furthermore, in its July 2003 monthly status report, the IFMP headquarters office identified three high-criticality risks that could have a cost impact on the overall program; however, no liens were set aside against reserves for these risks: (1) reductions to out-year budgets could affect the implementation of future integrated modules or the ongoing evolution of existing modules; (2) an e-Government solution may be adopted for human resources management rather than the IFMP solution, resulting in more difficult interface development and a less-than-integrated solution; and (3) e-Government initiatives and policy decisions could disrupt IFMP modules, resulting in delays or additional resource impacts. An independent cost estimate team identified and quantified the impact of two IFMP program risks, indicating that the cost and schedule impact of a risk on a program or project can be sizeable. First, the team identified a high-probability risk that NASA's "full cost requirement"—in which all direct and indirect agency costs, including civil service personnel costs, are tied to individual programs and projects—could affect the Budget Formulation module. The team estimated this risk at $2 million to $3 million, with a potential schedule slip of 3 to 6 months. The Budget Formulation Project is currently trying to determine what impact this requirement may have. The second risk identified by the independent cost review team—that the Core Financial module may be transitioned to operations before all integration points are addressed—could be more costly.
The team estimated this risk at $10.5 million to $20 million, also with a potential 3- to 6-month schedule slip. However, the team considered this risk as having a low probability of occurrence. NASA is at a critical juncture and faces major challenges in improving contract management and controlling costs. These challenges seriously affect the agency's ability to effectively manage its largest and most costly programs. A modern integrated financial management system, as envisioned in IFMP, is critical to ensuring that NASA has accurate and reliable information to successfully meet these challenges. NASA has made some improvements during the past year, such as hiring personnel to provide the cost-estimating process with oversight and consistency. However, if IFMP continues to ignore disciplined processes in estimating program costs and impacts, it is unlikely that the program will meet its goals. To ensure that IFMP's life-cycle cost estimate conforms to NASA guidance and best practices, we recommend that the NASA Administrator direct IFMP to do the following: (1) prepare cost estimates by the current Work Breakdown Structure for the remaining modules; (2) provide a clear audit trail between detailed WBS estimates and the program's cost estimate for the remaining modules; and (3) prepare a full life-cycle cost estimate for the entire IFMP that meets NASA's life-cycle cost and full cost guidance. To ensure that contingencies are funded in accordance with NASA guidance and best practices, we recommend that the NASA Administrator direct IFMP to do the following: (1) utilize a systematic, logical, and comprehensive tool, such as Probabilistic Risk Assessment, in establishing the level of financial reserves for the remaining module projects and tailor the analysis to risks specific to IFMP; (2) quantify the cost impact of at least all risks with a high likelihood of occurrence and a high magnitude of impact to facilitate the continuing analysis necessary to maintain adequate reserve levels; and (3) establish a clear link between the program's risk database and financial reserves. Although NASA concurred with our recommendations for corrective action, NASA indicated that its current processes are adequate for (1) preparing WBS cost estimates, (2) estimating life-cycle costs, and (3) establishing reserves on the basis of IFMP-specific risks. The agency cited its business case analyses as the methodology through which it is accomplishing these tasks. We disagree that NASA's current processes are adequate, and our recommendations are aimed at improving these processes. As discussed in this report, while NASA prepares WBS cost estimates for IFMP modules in the planning phases by using business case analyses, it does not prepare WBS cost estimates for active modules. And although IFMP indicates that preparing cost estimates by using contract task orders is an appropriate methodology, this approach will not ensure that all relevant costs, including both contractor and government in-house costs, are included in the life-cycle cost estimate. Regarding contract costs, there is not one overriding contract where each module is considered a deliverable at a fixed price. Rather, there are numerous contracts at both the project and center level for implementing modules—many of which can be awarded for a level of effort at agreed-upon fixed rates at various phases in the implementation. Without a WBS estimate for the project as a whole, NASA cannot ensure that all relevant contractor costs are included in the cost estimate.
In addition, using contract task orders to prepare the cost estimate would not ensure that government in-house costs are included in the life-cycle cost estimate. According to NASA, IFMP will improve its business case analyses by providing better estimates of operational costs through the expected life of the module, retirement costs, and other full life-cycle costs. However, as discussed in this report, an audit trail is needed between the detailed estimates contained in the business case analyses and the program's life-cycle cost estimate to ensure that these improvements are reflected in the program's official cost estimate. Finally, as discussed in this report, although NASA's business case analyses include recommended reserve levels, we found no evidence that these recommended levels were used in establishing the actual reserve levels for the IFMP module projects. Regardless, the actual levels established did not match the recommended levels in most cases. We found that the program established funding reserves on the basis of reserve levels set by other high-risk NASA programs, rather than on IFMP-specific risks as required by NASA guidance. To assess the reliability of NASA's methodology for preparing the current cost estimate for IFMP, we reviewed program and project-level documentation to obtain an understanding of NASA's current cost estimate and its major components and the methodology used to develop the estimate. We also interviewed program and project officials to clarify our understanding of the cost estimate and how NASA derived it. In addition, we compared the program's cost-estimating methodology with SEI best practices, OMB requirements, and NASA's own procedures and guidance. Finally, we reviewed internal and independent analyses of the cost estimate. We did not attempt to validate NASA's estimate; rather, we reviewed NASA's processes for preparing its estimate. To determine whether NASA's current schedule is reasonable in terms of progress to date and available resources, we reviewed the program's schedule objectives and NASA's policies for managing program and project schedules. We monitored the schedule and risks to the schedule through our review of the program's monthly status reports and internal NASA briefings. We interviewed program and project officials to ascertain NASA's progress against the schedule. To evaluate NASA's processes for ensuring the adequacy of cost contingencies to mitigate the potential impact of identified program risks and unknowns, we reviewed governmentwide and NASA policies and SEI best practices for managing risk and establishing cost contingencies. We also interviewed program officials at NASA headquarters and project managers to obtain an understanding of how reserve levels were established and maintained for the program. We then compared IFMP's processes for ensuring adequate cost contingencies with processes dictated by OMB and NASA guidance and by best practices. To accomplish our work, we visited NASA headquarters, Washington, D.C.; Marshall Space Flight Center, Alabama; and Goddard Space Flight Center, Maryland. We also contacted officials at Glenn Research Center, Ohio. We performed our review from April through September 2003 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date.
At that time, we will send copies to interested congressional committees; the NASA Administrator; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or lia@gao.gov. Key contributors to this report are acknowledged in appendix I. Staff making key contributions to this report were Jerry Herley, Erin Schoening, LaTonya Miller, and Karen Sloan. | The National Aeronautics and Space Administration (NASA) has struggled to implement a fully integrated financial management system. The lack of such a system has affected the agency's ability to control program costs, raising concerns about the management of its most costly programs, including the space shuttle program and the International Space Station. In April 2000 NASA initiated the Integrated Financial Management Program (IFMP)--its third effort to improve the agencywide management of its resources. Implementation is expected by fiscal year 2006 with an estimated life-cycle cost of nearly $1 billion. This report (1) assesses NASA's methodology for preparing the current life-cycle cost estimate for implementing IFMP, (2) determines whether NASA's current schedule is reasonable, and (3) evaluates NASA's processes for ensuring adequate cost contingencies. The uncertain reliability of cost estimates, optimistic schedules, and insufficient processes for ensuring adequate funding reserves have put NASA's latest financial management modernization effort at risk. Over the past several years, IFMP's life-cycle cost estimates have fluctuated, and NASA's current estimate is 14 percent greater than the previous estimate. The reliability of these estimates is uncertain because disciplined cost-estimating processes required by NASA and recognized as best practices were not used in preparing them. For example, IFMP's current life-cycle cost estimate did not include the full cost likely to be incurred during the life of the program, including certain operations costs and costs to retire the system. In addition, NASA did not consistently use breakdowns of work in preparing the cost estimate, as recommended by NASA guidance. In cases where work breakdowns were used, the agency did not always show the connection between the work breakdown estimates and the official program cost estimate. This has been a weakness since the inception of the program. Although more than half of the IFMP modules have been implemented--including the Core Financial module, which is considered the backbone of IFMP--the system may not be fully implemented by the end of fiscal year 2006 as planned. Efforts to complete the integrated system as quickly as possible might have resulted in schedule margins that are insufficient to manage program challenges--such as personnel shortages, uncertainties about software availability, and Office of Management and Budget (OMB) initiatives to implement electronic systems for agency business processes governmentwide. These OMB initiatives have put IFMP in a reactive mode and are already affecting planning for the payroll, procurement, and travel components of the integrated system, which could result in additional schedule delays and cost growth. Finally, reserve funding for IFMP contingencies may be insufficient, which is particularly problematic, given the program's unreliable cost estimates and optimistic schedule.
One module--Budget Formulation--is already experiencing potential shortfalls in its reserves, and project officials expressed concerns that the module's functionality may have to be reduced. Yet the program continues to establish funding reserves based on reserve levels set by other high-risk NASA programs, such as NASA's space flight program--not on analyses of the potential cost impact of risks and unknowns specific to IFMP, as required by NASA guidance. Moreover, the program did not quantify the cost impact of high-criticality risks--also required by NASA--or link its risks to funding reserves to help IFMP develop realistic budget estimates. |
After an internal assessment initiated in January 2010, the Secretary of Homeland Security announced in January 2011 that she had directed CBP to end the SBInet program as originally conceived. According to DHS, the Secretary's decision was informed by an independent analysis of cost-effectiveness, a series of operational tests and evaluations, and Border Patrol input. The prime contractor is to continue limited performance under the SBInet contract using a 1-year option for SBInet operations and maintenance services in Arizona beginning on April 1, 2011, with a possible 6-month extension. Further, according to CBP and the contractor, following a March 2010 decision by the Secretary halting further deployment of SBInet beyond the Tucson and Ajo Border Patrol stations, no additional SBInet deployments are expected. In addition, the Secretary's decision to end the SBInet program limited Block 1 deployments to the Tucson and Ajo stations in the Tucson Sector, but did not affect the current SBInet Block 1 capability, which was developed based on updated requirements from the Border Patrol. The Block 1 capability consists of 15 sensor towers (with day/night cameras and radar) and 10 communication towers, which transmit surveillance signals to the Common Operating Picture (COP) at station command centers. This capability remains deployed and operational in Arizona, as part of the Border Patrol Tucson Sector's overall technology portfolio. According to contractor and Border Patrol officials, there were several original SBInet concepts that were not included in the Block 1 capability due to early design/cost trade-offs and Border Patrol agent feedback that they did not need them to perform their mission. Also, certain elements proved technically difficult and costly to include in the Block 1 capability. For example, the concepts of integrating transmissions from remote video surveillance system (RVSS) and mobile surveillance system (MSS) units into the COP, transmitting COP images to agents' laptops in their vehicles, and tracking Border Patrol agent deployments on the geographic display were not included. OTIA and Border Patrol officials told us that the SBInet program's Block 1 capability has been useful since being deployed in February 2010 at the Tucson station and August 2010 at the Ajo station. For example, a shift commander at the Tucson station described the capability as considerably better than the technology that was available at the sector prior to the SBInet deployment. Further, according to COP operators in Tucson, the current SBInet sensor package is responsive to key mission requirements by giving them the capability to achieve persistent wide-area surveillance and situational awareness. Officials at Border Patrol headquarters stated that the Block 1 capability gave them a capability they did not have before. These officials also stated that, most importantly, the Block 1 capability helped them achieve persistent surveillance and situational awareness to enable an appropriate response to border intrusions and choose the location of interdiction, which they described as a tactical advantage. They also noted that the height of the towers allows for additional surveillance into terrain and brush, thereby allowing the Border Patrol to shift personnel to gap areas where surveillance does not exist.
Other examples of system usefulness offered by Border Patrol officials included a centralized point of data integration (through the COP), increased probability of arrest upon detection (by controlling the point of interdiction by means of camera and radar), improved agent safety when responding to potential threats, verification of whether a ground sensor indicated a threat or not, efficiency and effectiveness in directing agent responses, and a tiered deployment of technology. For example, at the Ajo Station, a Border Patrol official explained that tiered deployment included mobile technology units that are positioned at the border line, and Block 1 sensor towers that are deployed off the line where they can monitor intruders who might have eluded interdiction at the border. The Secretary's January 2011 announcement also stated that the SBInet capability had generated some advances in technology that had improved Border Patrol agents' ability to detect, identify, track, deter, and respond to threats along the border. It further stated that the new border technology deployment plan would also include, where deemed appropriate by the Border Patrol, elements of the now-ended SBInet program that have proven successful. On the basis of limited data, the operational availability of deployed SBInet components has been consistent with the relevant requirement that SBInet be operationally available 85 percent of the time. According to prime contractor operations and maintenance statistics for a 1-week period in January 2011, SBInet in the Tucson and Ajo Stations was operational over 96 percent of the time. According to the contractor's logistics manager who oversees the operation and maintenance of SBInet, since the deployment is relatively recent, a full year's worth of data would be needed to make conclusive determinations about long-term operational reliability and identify areas of persistent problems. The times that SBInet was not available were due primarily to camera malfunctions and power failures. According to Border Patrol and prime contractor officials, the SBInet Block 1 capability is receiving new features from the contractor in response to ongoing user input and feedback. These features include adding an "eye-safe" laser target illuminator (the eye-safe feature minimizes the potential for injury to a person exposed to the laser), adding a "standby" mode to the radar (wherein scanning is suspended until needed), and integrating the next-generation unattended ground sensors into the COP. However, this applies only to new sensors intended for Block 1—the Border Patrol has not selected a vendor for next-generation sensors for use elsewhere along the border and outside of SBInet. The usefulness of SBInet's Block 1 capability notwithstanding, OTIA and Border Patrol officials told us that it has certain shortcomings. These shortcomings include not having the mobility to respond to shifts in risk, facing terrain coverage (line-of-sight) gaps, some of which are mitigated through other technologies, and performing poorly in adverse weather. Further, according to OTIA, the SBInet capability as configured by the prime contractor is a proprietary, rather than an open, architecture. Thus, it is unable to incorporate, for example, next-generation radar and cameras without significant integration work and cost. In addition, the SBInet capability has been costly to deploy and maintain. Specifically, the total task-order cost for the Block 1 deployment in Arizona was about $164 million.
The operations and maintenance costs for the deployment are estimated to be up to about $1.5 million per month, or about $18 million per year. DHS is implementing a new approach for acquiring and deploying border security technology called "Alternative (Southwest) Border Technology" to replace the SBInet program. As part of this approach, DHS is to deploy a mix of technologies, including RVSS, MSS, and hand-held equipment for use by Border Patrol agents. It also is to include a new Integrated Fixed Tower system that is slated for deployment along the border where the Border Patrol deems it appropriate, beginning with five high-risk areas in Arizona at an estimated cost of $570 million. While other elements of the plan may be deployed sooner, deployment of the Integrated Fixed Towers envisioned by OTIA and the Border Patrol is planned to begin in 2013, depending on funding availability. This plan suggests that OTIA and the Border Patrol have determined that the Integrated Fixed Tower system is a cost-effective solution in certain locations. However, due to the questions we have about how the Analysis of Alternatives (AOA) analyses and conclusions were factored into planning and budget decisions, the basis for DHS's technology deployment plan is not yet clear. Further, the results of independent analyses were not complete at the time of the Secretary's decision to end the SBInet program, thus any results on SBInet's operational effectiveness could not inform the decisions to proceed with a possibly similar Integrated Fixed Tower system. According to the Border Patrol, its operational assessment for Arizona calls for deploying Integrated Fixed Tower systems to five high-threat areas in the state, beginning with the Nogales, Douglas, and Casa Grande Stations as part of this approach. These deployments will include 52 sensor towers, which is fewer than the 91 sensor towers envisioned under the original SBInet deployment plan. Border Patrol officials explained that they reviewed the contractor's original analysis of where to put the towers and determined that other solutions, such as RVSSs and MSSs, were more appropriate due to terrain and other factors such as population density. According to OTIA and Border Patrol officials, depending on the availability of funding, the deployments of the Integrated Fixed Tower system component of the Arizona technology plan are expected to begin around March 2013 and be completed by the end of 2015 (or possibly early 2016), with other sector deployments sequentially following the Arizona sector. OTIA estimates that the entire Integrated Fixed Tower system acquisition for Arizona would cost about $570 million, including funding for design and development, equipment procurement, production and deployment, systems engineering and program management, and a national operations center. In this regard, the President's fiscal year 2012 DHS budget request for BSFIT calls for $242 million to fund the first three Integrated Fixed Tower system deployments for Arizona, which include 36 sensor towers. Border Patrol officials told us that the existing SBInet capability and the requested Integrated Fixed Tower systems are intended to form the "baseline or backbone" of its evolving technology portfolio, where appropriate in high-risk areas in Arizona, with some exceptions.
For example, in the urban areas of the Douglas and Naco Stations, RVSS units would likely be considered the backbone because they are better suited for populated areas where SBInet's radar capability is not as effective. A Border Patrol official said that Integrated Fixed Tower systems could be an important technology component in additional areas along the southwest border, but that the agency had not yet made those determinations, pending the outcome of forthcoming operational assessments. In one of its first actions following the Secretary of Homeland Security's announcement to end SBInet, DHS issued a Request for Information (RFI) in January 2011 to industry regarding the commercial availability of surveillance systems based on the Integrated Fixed Tower system concept, consistent with its stated intent to acquire future border technologies in its new plan through full and open competitions. OTIA and Border Patrol officials explained that the RFI would engender competition and better options for the government, in terms of finding out about state-of-the-art industry capabilities and obtaining feedback on requirements to help refine them. However, they expect that such competition would yield benefits in terms of capability, performance, and cost similar to those of the SBInet Block 1 capability. For example, OTIA and Border Patrol officials acknowledged that the surveillance system sought by the RFI is essentially the same as the one deployed in Block 1 in terms of expected capability and performance in meeting operational and effectiveness requirements. In February 2011, DHS conducted an "Industry Day" to provide potential vendors with a better understanding of Border Patrol's technology needs on the southwest border and collect information about potential capabilities. During the session, DHS provided information on potential procurements for Integrated Fixed Tower systems and a range of other surveillance technology, such as RVSS and unattended ground sensors. Following its information-collection activities, should DHS decide to move forward with requests for proposal for various types of technology, including the Integrated Fixed Tower system, these actions should be timed in such a way as to make maximum use of the results from the cost-effectiveness analyses discussed below. While the initial deployment actions will be in Arizona, it is envisioned that the contracts could be used to deploy technology anywhere on the southwest border. However, to accomplish this, DHS will need to ensure that the requirements specified in the request for proposal are sufficient for deployment not just in Arizona but throughout the southwest border. According to OTIA and Border Patrol officials, the Secretary's decision on the future of SBInet and the Integrated Fixed Tower system was informed by an AOA that analyzed the cost-effectiveness of four options—mobile (e.g., MSS), fixed (Integrated Fixed Towers), agent (e.g., hand-held equipment), and aviation (Unmanned Aerial Vehicles). On the basis of our review of available information about the AOA to date, there are several areas that raise questions about how the AOA results were used to inform Border Patrol judgments about moving forward with technology deployments, including the Integrated Fixed Tower system. As we continue our work for the committee, we plan to examine each of the following areas in detail to obtain additional insights into DHS's decision making regarding the cost-effectiveness of a range of border technology options.
Specifically, it is not clear how DHS used the AOA results to determine the appropriate technology plans for Arizona. For instance, the AOA identified uncertainties in costs and effectiveness of the four technology alternatives in each of the four geographic analysis areas, meaning that there was no clear-cut cost-effective technology alternative for any of the analysis areas. Yet, the AOA observed that a fixed tower alternative may represent the most effective choice only in certain circumstances. Because of the need to complete the first phase of the AOA in 6 weeks, the AOA was limited in its scope. For instance, the AOA did not consider the combination of technology approaches in the same geographic area and did not consider technology solutions, such as RVSS units. Urban areas were outside the scope of the AOA. Hence, it is unclear how DHS made decisions for proposed technology deployments in such areas. Further, the first AOA did not examine as an alternative the use of only existing Border Patrol equipment and agents without the addition of any new technology approaches. The AOA should have assessed the technology approaches based on the incremental effectiveness provided above the baseline technology assets in the geographic areas evaluated. According to study officials, the omission of a baseline alternative was corrected in the second AOA and did not change the conclusions of the first AOA. A more robust AOA could result in conclusions that differ not just in the Border Patrol sectors yet to be evaluated in future AOAs, but also in the Tucson and Yuma sectors considered in the first AOA. While the primary purpose of the second phase of the AOA was to expand the analysis to three additional Border Patrol sectors (San Diego, El Paso, and Rio Grande Valley), being able to conduct the analysis over several months allowed the study team more time to consider additional measures of effectiveness and technology options. DHS plans to conduct another AOA that would cover the remainder of the southwest border. According to study officials, while the potential for different results existed, the results from the second AOA did not significantly affect the findings from the first AOA. Further, we have questions about how the AOA analyses and conclusions were factored into planning and budget decisions regarding the optimal mix of technology deployments in Arizona. Specifically, according to OTIA and Border Patrol officials, the AOA was used to develop the Arizona technology deployment plan and related procurement plans and to provide cost data to be used for the Border Patrol's operational assessment and the fiscal year 2012 budget request for Integrated Fixed Tower systems. However, because AOA results were somewhat inconclusive, it is not yet clear to us what the basis was for including three of the four alternatives in the manner prescribed in the budget request (the Unmanned Aerial Vehicle alternative was not included). For a program of this importance and cost, the process used to assess and select technology needs to be transparent. The uncertainties noted above raise questions about the decisions that informed the budget formulation process. We have not yet examined the Border Patrol's operational assessment to determine how the results of the AOA were considered in developing technology deployment planning in Arizona and, in turn, the fiscal year 2012 budget request.
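The incremental-effectiveness point can be made concrete with a small sketch: each alternative is scored on the capability it adds above the existing baseline assets, and its cost is then expressed per unit of that incremental gain. The alternative names, scores, and costs below are hypothetical placeholders and are not drawn from the AOA.

```python
# Illustrative sketch only: comparing technology alternatives on incremental
# effectiveness above a baseline, rather than on absolute effectiveness.
# All names, scores, and costs are hypothetical placeholders, not AOA data.

BASELINE_EFFECTIVENESS = 40.0  # hypothetical score for existing equipment and agents only

# alternative: (effectiveness score with the alternative added, life-cycle cost in $ millions)
alternatives = {
    "fixed towers": (75.0, 500.0),
    "mobile surveillance": (65.0, 240.0),
    "agent-portable equipment": (55.0, 110.0),
}

for name, (effectiveness, cost) in alternatives.items():
    incremental = effectiveness - BASELINE_EFFECTIVENESS
    cost_per_point = cost / incremental if incremental > 0 else float("inf")
    print(f"{name}: +{incremental:.0f} points over baseline, "
          f"${cost_per_point:.1f} million per incremental point")
```

Framed this way, an alternative that looks strong on absolute effectiveness can still compare poorly once the capability already provided by existing equipment and agents is netted out.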
The Army Test and Evaluation Command (ATEC) was to independently test SBInet's Block 1 capability and evaluate the results to determine its operational effectiveness and suitability (i.e., the extent to which the system fits into its operational environment and is useful to the Border Patrol in meeting the agency's mission). Because the Integrated Fixed Tower system could be similar to the sensor towers and COP used in SBInet Block 1, the ATEC results could inform DHS's decision about moving forward with technology deployments. However, the testing and evaluation was not complete at the time DHS reached its decision regarding the future of SBInet or requested fiscal year 2012 funding to deploy the new Integrated Fixed Tower systems, as discussed earlier. An initial briefing on the emerging results from the testing was provided to DHS on March 2, 2011, with a final report due sometime in April 2011. As our work proceeds, we will further address the questions raised about the AOA process, the test and evaluation results, and CBP's proposed new acquisition strategy. We will also continue to assess the status of the SBInet program in light of the Secretary's decision and the actions emanating from this decision. Chairwoman Miller, Ranking Member Cuellar, and members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have. For questions about this statement, please contact Richard M. Stana at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Seto J. Bagdoyan, Charles W. Bausell, Jr., Courtney Catanzarite, Justin Dunleavy, Christine Hanson, Michael Harmond, Richard Hung, Robert Rivas, and Ronald Salo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Securing the nation's borders from the illegal entry of aliens, contraband, terrorists, and weapons of mass destruction is a long-term challenge. In November 2005, the Department of Homeland Security (DHS) launched the Secure Border Initiative network (SBInet)--a program that was to provide the Border Patrol, within DHS's U.S. Customs and Border Protection (CBP), with the tools to detect breaches and make agent deployment decisions by installing surveillance systems along the border. Alternative (Southwest) Border Technology is DHS's new plan to deploy a mix of technology to protect the border. This testimony is based on GAO's ongoing work conducted for the House Committee on Homeland Security and provides preliminary observations on (1) the status of SBInet and user views on its usefulness, and (2) the Alternative (Southwest) Border Technology plan and associated costs. GAO reviewed planning, budget, and system documents, observed operations along the southwest border, and interviewed DHS officials.
In January 2011, the Secretary of Homeland Security directed CBP to end the SBInet program as originally conceived because it did not meet cost-effectiveness and viability standards, and to instead focus on developing terrain- and population-based solutions utilizing existing, proven technology, such as camera-based surveillance systems, for each border region. According to DHS, the Secretary's decision on SBInet was informed by (1) an independent analysis of alternatives (AOA) to determine the program's cost-effectiveness; (2) a series of operational tests and evaluations by the U.S. Army's Test and Evaluation Command (ATEC) to determine its operational effectiveness and suitability; and (3) an operational assessment by the Border Patrol to provide user input. The Secretary also stated that while the Alternative (Southwest) Border Technology plan should include elements of the former SBInet program where appropriate, she did not intend for DHS to use the current contract to procure any technology systems under the new plan, but rather would solicit competitive bids. SBInet's current surveillance capability continues to be used in Arizona. Specifically, there are 15 sensor towers (with cameras and radar) and 10 communication towers (which transmit the sensor signals to computer consoles for monitoring) currently deployed in the Border Patrol's Tucson Sector. In addition, on the basis of user feedback, the Border Patrol considers the current SBInet capability to be useful, including providing continuous surveillance in border areas where none existed before and enhancing agent safety when responding to potential threats. There are certain shortcomings, including coverage gaps and radar performance limitations in adverse weather. The Alternative (Southwest) Border Technology plan is to incorporate a mix of technology, including an Integrated Fixed Tower surveillance system similar to that used in the current SBInet capability, beginning with high-risk areas in Arizona. But, for a number of reasons, the cost-effectiveness and the operational effectiveness and suitability of the Integrated Fixed Tower system are not yet clear. First, the AOA cited a range of uncertainties, and it is not clear how the AOA analyses and conclusions were factored into planning and budget decisions regarding the optimal mix of technology deployments in Arizona. Second, the ATEC independent analyses were not complete at the time of the Secretary's decision, thus any results on SBInet's operational effectiveness and suitability could not inform the decisions to proceed with the Integrated Fixed Tower system. The President's fiscal year 2012 budget request calls for $242 million to fund three of five future deployments of the Integrated Fixed Tower systems in Arizona, although, depending on funding, the earliest DHS expects the deployments to begin is March 2013, with completion anticipated by 2015 or later. Consistent with its intent to solicit competitive bids, CBP has initiated a new acquisition cycle, asking industry for information about the commercial availability of the Integrated Fixed Tower system. GAO will continue to assess this issue and report the final results later this year. GAO is not making any new recommendations in this statement but has made prior recommendations to strengthen SBInet. While DHS generally agreed with most of the information in this statement, it did not agree with GAO's observations on the AOA and the potential usefulness of ATEC's analyses. GAO continues to believe its observations are valid.
DHS also provided technical comments, which were incorporated as appropriate. |
The United States, along with its coalition partners and various international organizations and donors, has embarked on a significant effort to rebuild Iraq following multiple wars and years of neglect. In April 2003, Congress passed the Emergency Wartime Supplemental Appropriations Act, which created the Iraq Relief and Reconstruction Fund and appropriated approximately $2.48 billion for reconstruction activities. These funds—referred to as IRRF I—were to be used by USAID, State, DOD, Treasury, and Health and Human Services for a broad range of humanitarian and reconstruction efforts. In November 2003, Congress enacted an additional emergency supplemental appropriations act, which provided approximately $18.4 billion for reconstruction activities in Iraq. This appropriation—referred to as IRRF II—focused on security and infrastructure, and the funding was allocated across multiple sectors. Additionally, the November 2003 act required that full and open competition be used to enter into contracts using IRRF funds unless the use of an authorized statutory exception was properly documented and approved, and the specified congressional committees notified. As of August 29, 2006, about 94 percent, or approximately $20 billion, of all IRRF funds had been obligated by all agencies. The Competition in Contracting Act of 1984 (CICA) generally requires that federal contracts be awarded on the basis of full and open competition. This process is intended to permit the government to rely on competitive market forces to obtain needed goods and services at fair and reasonable prices. However, the law and implementing regulations recognize that there may be circumstances under which full and open competition would be impracticable, such as when contracts need to be awarded quickly to respond to urgent needs or when there is only one source for the required product or service. In such cases, agencies are given authority by law to award contracts without providing for full and open competition (e.g., using limited competition or on a sole-source basis), provided that the proposed approach is appropriately justified, approved, and documented. Additionally, regarding task orders issued under an existing contract, the competition law does not require competition beyond that obtained for the initial contract award, provided the task order does not increase the scope of the work, period of performance, or maximum value of the contract under which the order is issued. While no single, comprehensive system currently tracks governmentwide Iraq reconstruction contract data, we obtained competition information on $10 billion of the total $11.6 billion in obligations for Iraq reconstruction contracts collectively awarded by DOD, USAID, and State from October 1, 2003, through March 31, 2006, and found that about $9.1 billion, or 91 percent, of the obligations was for competitive awards. We obtained information on approximately $7 billion of the $8.55 billion DOD obligated and found that competition occurred for nearly all of the obligations. Both USAID and State provided information on all of their IRRF obligations made during the period of our review. However, where USAID information showed that almost all of its Iraq reconstruction contract obligations were for competitive awards, State information showed that few of its contract action obligations were for competitive awards. Figure 1 shows a breakdown of the three agencies’ competed and noncompeted contract actions based on available data. 
Based on available data, we found that the majority of DOD’s IRRF contract obligations incurred during the period we reviewed were for competitive awards. Competition information was available for approximately 82 percent of DOD’s total $8.55 billion in Iraq reconstruction contract obligations. Of this, we found that DOD competitively awarded about $6.83 billion, and noncompetitively awarded about $189 million. Most of the DOD offices we spoke with reported that, when possible, contract actions were competed. JCC-I/A—the office performing the majority of Iraq contracting for the DOD offices we reviewed—and its predecessor organizations, including the Project and Contracting Office and Program Management Office, obligated $3.82 billion, of which $3.81 billion was obligated for competitive awards. Additionally, the other DOD offices we reviewed, including the Army Corps of Engineers’ Gulf Region Division and Transatlantic Programs Center; the Army’s TACOM Life Cycle Management Command; and the Air Force Center for Environmental Excellence obligated approximately $2.25 billion, and of these obligations, approximately $2.08 billion were for competitive awards and $177 million for noncompetitive awards. Furthermore, the Army Corps of Engineers’ Southwestern Division competitively awarded two contracts to rebuild Iraqi oil infrastructure with obligations totaling $941 million. Complete information on DOD’s contract actions and competition type for the period of our review was not available, in part because not all offices consistently tracked or reported this information. Currently, DOD is transitioning its contract-writing systems to interface with the Federal Procurement Data System-Next Generation (FPDS-NG). Until this transition is completed, the majority of DOD components are expected to use DD Form 350s to report contract actions. However, we found that while the DD 350 system tracks competition information by contract, not all offices report their contracts to the system. For example, the JCC-I/A and its predecessor organizations did not fully input detailed, individual contract action information into DOD-wide systems including DD 350, which would provide information on competition. Furthermore, according to JCC-I/A officials, JCC-I/A did not track competition information until after May 2005. Consequently, we relied on multiple sources in order to obtain competition information for the DOD components within our review. USAID provided competition information for 100 percent of the $2.27 billion in IRRF contract obligations that the agency reported incurring between October 1, 2003, and March 31, 2006. These data indicated that USAID competitively awarded contract actions for about $2.25 billion, or 99 percent, of the approximately $2.27 billion it obligated; approximately $20.4 million, or about 1 percent, of these obligations were noncompetitively awarded. Agency contracting staff reported that USAID has pursued competition with very few exceptions when awarding contracts and issuing task orders for Iraq reconstruction. During our contract file review, we identified three instances in which the competition information provided by USAID was inaccurate. In two cases, USAID reported contracts as being awarded competitively when they were actually awarded under limited competition. In the third case, USAID reported a contract as “not competed,” when it was actually awarded competitively. In each of these instances, we used corrected competition information for our analysis. 
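For context on how figures such as the 91 percent competed share are derived, the sketch below shows the basic calculation once each contract action's obligation amount and competition type are known. The records are hypothetical placeholders rather than actual CEFMS, DD 350, or agency-reported data.

```python
# Illustrative sketch only: computing the share of obligations awarded
# competitively from a list of contract actions. Records are hypothetical.

contract_actions = [
    {"id": "A-001", "obligation_millions": 120.0, "competed": True},
    {"id": "A-002", "obligation_millions": 45.0, "competed": False},
    {"id": "A-003", "obligation_millions": 310.0, "competed": True},
]

total = sum(action["obligation_millions"] for action in contract_actions)
competed = sum(action["obligation_millions"] for action in contract_actions if action["competed"])

print(f"Competed: ${competed:.0f} million of ${total:.0f} million "
      f"({competed / total:.0%} of obligations reviewed)")
```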
State obligated the smallest portion of IRRF funding among the three agencies; however, it incurred most of its obligations for noncompetitive awards. State provided competition information for 100 percent of the $762 million in IRRF contract obligations that the agency reported incurring between October 1, 2003, and March 31, 2006. These data indicated that State incurred obligations of approximately $73 million, or approximately 10 percent of the approximately $762 million it obligated for IRRF contract actions, under competitive awards; approximately $688 million, or about 90 percent, of these obligations were incurred under noncompetitive awards. In several of these cases, State cited urgency as the reason for awarding the contract actions noncompetitively. Specifically, justifications in two of the contract files we reviewed cited FAR § 6.302-2, unusual and compelling urgency, as the basis for using other than full and open competitive procedures. Additionally, one task order we reviewed was an unauthorized commitment that had to be ratified by State. The ratification amounted to the issuance of a noncompetitive task order to the contractor. During our contract file review, we identified three instances in which the competition information provided by State was inaccurate. In two of these cases, we found that contracts that were reported as awarded competitively were actually awarded noncompetitively. In the third case, State misclassified the competition type reported for a contract that was awarded competitively. In each of these instances, we used corrected competition information for our analysis. We reviewed 51 contract actions totaling $1.55 billion—35 at DOD, 11 at USAID, and 5 at State. We found that the agencies generally followed the FAR and the applicable agency supplements regarding documentation requirements for contract actions but did not always comply with congressional notification requirements. Of the 51 contract actions that we reviewed, 22 were awarded noncompetitively, while 29 were awarded competitively. Only 1 of the 22 noncompetitive contract action files did not contain justifications or other documentation as required in the FAR or agency supplements. Of the 29 competed contract actions, DOD was unable to provide documentation that competition had occurred, such as evidence of bidders or price negotiation memos, in 4 cases. Additionally, of the 22 noncompeted contract actions, State should have notified Congress of 2 actions it awarded using other than full and open competition in accordance with the notification requirements. While State failed to provide the required notifications, State officials told us that they have taken steps to address the problem for future awards. Within our sample, we did not find any additional instances where DOD, USAID, or State should have notified Congress of a noncompeted award but did not. Of the 35 DOD IRRF contract actions we reviewed, 15 were indicated as noncompeted and 20 indicated as competed. The files for the 15 noncompeted contract actions contained documentation required by the FAR, the Defense Federal Acquisition Regulation Supplement, and the Army Federal Acquisition Regulation Supplement. Of the 15 noncompeted actions, 4 were sole source contract awards; 4 were awarded using limited competition; 3 were noncompeted task orders under a multiple award IDIQ contract; 3 were sole source awards under the 8(a) program; and 1 was an out-of-scope modification.
Based on our review, all of the contract actions that were awarded non-competitively had justification and approval documentation citing the reason for either limiting competition or using a sole source award when required. For example, JCC-I/A partially terminated an IDIQ contract used to rebuild hospitals in Iraq. In order to complete the remaining work, JCC-I/A awarded a series of sole source contracts to the remaining Iraqi subcontractors to complete the work. In another example, the Project and Contracting Office awarded a series of contracts using limited competition to pave roads in 13 governorates in Iraq, citing unusual and compelling urgent circumstances due to security concerns and limited manpower to evaluate all submissions from Iraqi firms. For the 20 competed DOD contract actions, 16 files included documentation that competition occurred, such as evidence of bidders or price negotiation memos when required. However, DOD was unable to provide supporting evidence for the remaining 4 contract actions that were indicated as competed. Of the 11 USAID contract actions we reviewed, 3 were indicated as noncompeted and 8 indicated as competed. The files for all 3 noncompeted actions included the documentation required by the FAR and the Agency for International Development Acquisition Regulation (AIDAR) regarding competition. Two of these contracts were awarded under limited competition—one for catering services and one for armored vehicles—providing an opportunity for multiple vendors to submit bids. For both of these contracts, USAID used a blanket waiver authority provided by the USAID Administrator pursuant to section 706.302- 70(b)(3)(ii) of the AIDAR. This waiver was originally signed in January of 2003, later renewed in June 2004, and again in August 2005, and the agency is currently working on a 2006 version. The third noncompeted action was a modification extending the performance period and increasing the total award amount for a contract for facility security. The files for the 8 competed USAID contract actions included documentation that competition occurred, such as evidence of bidders or price negotiation memos. Finally, of the 5 State contract actions we reviewed, 4 were indicated as noncompeted and 1 indicated as competed. Of the 4 noncompeted actions, 2 were single-award contracts for protective services, and 2 were task orders for police and guard services off of 1 IDIQ contract. The files for the 2 single-award contracts and one of the task orders included all of the documentation required by the FAR and the Department of State Acquisition Regulation (DOSAR) regarding competition. However, the file for one of the task orders for construction of a police training facility did not include documentation regarding the basis for using an exception to the fair opportunity process, as required in FAR § 16.505. The file for the 1 competed contract action included documentation that competition occurred, such as evidence of bidders or price negotiation memos. Of the 22 noncompeted contract actions in our review, State should have notified Congress of 2 actions it awarded using other than full and open competition in accordance with the congressional notification requirement in section 2202 of Public Law 108-106 but did not. State failed to notify Congress when awarding 2 letter contracts for personal protective services noncompetitively. 
State indicated that the department failed to comply with the notification requirement in these two cases because the Office of Acquisitions Management, which is responsible for awarding and administering contracts at State, was not notified by the relevant program office that IRRF funds had been applied to these contracts. State officials told us they have coordinated with program office staff to ensure that they communicate funding types to contracting staff in the future. We did not identify any USAID or DOD contract actions within our sample that required congressional notification. We requested comments from DOD, USAID, and State on a draft of this report. DOD provided only one technical comment, which was incorporated into the report. USAID reviewed the report and found it to be factually correct. State acknowledged our findings and provided additional information regarding steps taken to address the section 2202 reporting requirement. Comments from State and USAID appear in appendixes IV and V. We are sending copies of this report to the Secretaries of Defense and State; the Administrator, U.S. Agency for International Development; and the Commanding General and Chief of Engineers, U.S. Army Corps of Engineers. We will make copies available to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. The major contributors to this report are listed in appendix VI. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you have any questions about this report, please contact me at (202) 512-4841. The fiscal year 2006 National Defense Authorization Act conference report required that GAO update its 2004 report on the extent of competition for Iraq reconstruction contracts. In response, we focused our review on reconstruction contract actions funded solely with the Iraq Relief and Reconstruction Fund (IRRF). IRRF represents the largest amount of U.S. appropriated funds for reconstruction purposes. Other sources of U.S. funding for Iraq military, reconstruction, and stabilization efforts that are not included in our review are the Iraq Security Forces Fund, the Commander's Emergency Response Program, and the Commander's Humanitarian Relief and Reconstruction Program. Additionally, the congressional notification requirement in section 2202 of Public Law 108-106 that was included in our review applies only to contract awards funded with IRRF. We included the Departments of Defense (DOD) and State (State) and the U.S. Agency for International Development (USAID), as these agencies are responsible for 98 percent of the total obligations made with IRRF through June 2006. Additionally, within DOD, we used Corps of Engineers Financial Management System (CEFMS) data to select for inclusion in our review the individual components that were responsible for the majority of IRRF II contracting during the time period of our review. The components we selected included the Joint Contracting Command-Iraq/Afghanistan, Army Corps of Engineers' Gulf Region Division and Transatlantic Programs Center, Army's TACOM Life Cycle Management Command, and the Air Force Center for Environmental Excellence.
To determine the approximate number of reconstruction contract actions, the types of actions, the funding sources, and the competition type of such actions, we found that no single source of information contained complete data on both the contract actions issued using IRRF monies and the associated competition information. Therefore, to obtain DOD data, we selected CEFMS as the basis for DOD's IRRF contract universe because it is the payment system for most of the major offices performing DOD contracting and presented the most complete contract action list. Using CEFMS, we identified DOD components based on the Department of Defense Activity Address Code, and selected offices to include in our review based on total obligations under IRRF II. Since CEFMS does not capture competition information, however, we attempted to cross-reference contracts found in CEFMS with DOD's DD Form 350 Individual Contracting Action Report database. At the time of our review, however, we found that DD 350 did not include fiscal year 2006 data and that not all DOD components fully reported contract actions to DD 350. As a result, we contacted the individual DOD components selected for inclusion in our review and requested competition information for the offices' contracting actions funded with IRRF monies. To obtain IRRF-funded contract actions from USAID and State, we relied on agency-provided data, since the Federal Procurement Data System-Next Generation (FPDS-NG) did not contain any of USAID's contracting actions at the time we began our review. Although State's contracting actions were contained within FPDS-NG, the system did not indicate which actions were funded using IRRF money and other criteria needed for our review. We judgmentally selected 51 contract actions for further review to determine compliance with documentation requirements as prescribed in statutes, regulations, and other guidance, such as justification and approval documentation for noncompeted actions, and synopses of proposed contract actions, price negotiation memos, and evidence of bidders' lists for competed actions. To determine the applicable documentation requirements and policies governing competition when awarding contract actions, we reviewed the requirements of the Competition in Contracting Act of 1984 and the Federal Acquisition Regulation, and additional agency regulations including the Defense Federal Acquisition Regulation Supplement, Army Federal Acquisition Regulation Supplement, Agency for International Development Acquisition Regulation, Department of State Acquisition Regulation, and other guidance. We selected the contract actions based on the following criteria: they were reconstruction contract actions funded with IRRF monies; they were awarded from October 1, 2003, through March 31, 2006; they had current obligations of $1 million or more; they represented both competed and noncompeted actions; they included more actions from DOD than from USAID and State, based on the volume of contract actions obtained; they included a variety of contracts, task orders, and modifications; and they covered a variety of goods and services. Our findings regarding documentation are specific to these selected contract actions and are not projectable to the agencies' total contract action universe. Of the 51 contract actions selected, 22 were indicated as awarded noncompetitively and 29 were indicated as awarded competitively. We included competitively awarded actions in our review to verify the accuracy of reported actions and confirm evidence of competition.
In the few cases noted where actions were incorrectly reported as competed or noncompeted by USAID and State, we corrected the errors as appropriate for use in our analysis. Because our efforts to corroborate the data contained within agency systems or provided by the agencies identified only a few errors, which we corrected, we believe the data are sufficiently reliable for our purposes. To determine whether agencies complied with the congressional notification requirement contained in section 2202 of Public Law 108-106, we reviewed agency contract data within our selected contract actions to identify instances where the reporting requirement would apply and followed up with officials where appropriate. In order to review and understand the contract files selected, we interviewed DOD, USAID, and State contracting officers and other procurement officials in Washington, D.C.; Virginia; and Iraq. Where possible, we obtained electronic documentation from agency officials. Appendix II lists the Iraq reconstruction contract actions we reviewed; names of Iraqi firms are not listed. We conducted our work between April 2006 and August 2006 in accordance with generally accepted government auditing standards. To ensure that task orders issued to rebuild Iraq comply with applicable requirements, and to maximize incentives for the contractors to ensure effective cost control, the Secretary of the Army should review the out-of-scope task orders for Iraqi media and subject matter experts issued by the Defense Contracting Command-Washington (DCC-W) and take any necessary remedial actions. DCC-W agreed with the GAO findings concerning out-of-scope work for the orders awarded to SAIC for the Iraqi Media Network and the subject matter experts. Contracting officers ordering the out-of-scope work have been made aware that their actions were improper. DCC-W has instituted agencywide training in a number of topics, including the need to carefully review the scope of work of a contract to determine what may be legitimately ordered from that contract. This training will be periodically repeated. In addition, its postaward reviews will include an assessment of whether required work is within the scope of the basic contract. GAO and DOD consider this recommendation closed. To ensure that task orders issued to rebuild Iraq comply with applicable requirements, and to maximize incentives for the contractors to ensure effective cost control, the Secretary of the Army should ensure that any future task orders under the Logistics Civil Augmentation Program (LOGCAP) contract for Iraq reconstruction activities are within the scope of that contract. According to DOD, the Procuring Contracting Officer for the LOGCAP contract reviews each proposed scope of work that will result in a task order and makes a determination whether the action is within the scope of the contract and obtains appropriate legal advice as necessary. GAO and DOD consider this recommendation closed. To ensure that task orders issued to rebuild Iraq comply with applicable requirements, and to maximize incentives for the contractors to ensure effective cost control, the Secretary of the Army should address and resolve all outstanding issues in connection with the pending Justifications and Approvals for the contracts and related task orders used by the Army Corps of Engineers to restore Iraq's electricity infrastructure.
As of June 2006, the justifications and approvals were being processed for approval by the Assistant Secretary of the Army for Acquisition, Logistics and Technology. GAO considers this recommendation open, though Defense Procurement and Acquisition Policy has indicated that this recommendation will be closed in the near term. To ensure that task orders issued to rebuild Iraq comply with applicable requirements, and to maximize incentives for the contractors to ensure effective cost control, the Secretary of the Army should direct the Commanding General, Army Field Support Command, and the Commanding General and Chief of Engineers, U.S. Army Corps of Engineers, to definitize outstanding contracts and task orders as soon as possible. DOD has definitized, or reached agreement on key terms and conditions for, all six of the contract actions identified in our June 2004 report. We noted in our March 2005 report, High-Level DOD Coordination Is Needed to Further Improve the Management of the Army's LOGCAP Contract (GAO-05-328), that the Army had made improvements in definitizing task orders issued under the LOGCAP contract. GAO and DOD consider this recommendation closed. To improve the delivery of acquisition support in future operations, the Secretary of Defense, in consultation with the Administrator, U.S. Agency for International Development, should evaluate the lessons learned in Iraq and develop a strategy for ensuring that adequate acquisition staff and other resources can be made available in a timely manner. In November 2005, DOD issued directive 3000.05, Military Support for Stability, Security, Transition, and Reconstruction Operations, which, in part, required that DOD ensure proper oversight of contracts in stability operations and ensure U.S. commanders deployed in foreign countries are able to secure contract support rapidly. DOD is also working on developing joint contingency contracting policy and doctrine and assessing DOD's contract administration services capability for theater support contracts. The estimated completion date of the ongoing actions is fall 2006. GAO considers this recommendation open. Major contributors to this report were John Neumann, Daniel Chen, Kate France, Julia Kennon, John Krump, Art James, Shannon Simpson, Karen Sloan, Adam Vodraska, and Aaron Young. Since 2003, Congress has appropriated more than $20 billion through the Iraq Relief and Reconstruction Fund (IRRF) to support Iraq rebuilding efforts. The majority of these efforts are being carried out through contracts awarded by the Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID). When awarding IRRF-funded contracts for $5 million or more noncompetitively, agencies are required by statute to provide notification and justification to Congress. In June 2004, GAO found that agencies generally complied with laws and regulations governing competition to award new contracts, but did not always comply with competition requirements when issuing task orders under existing contracts. As mandated by Congress, this report (1) describes the extent of competition in Iraq reconstruction contracts awarded by DOD, USAID, and State since October 1, 2003, based on available data, and (2) assesses whether these agencies followed applicable documentation and congressional notification requirements regarding competition for 51 judgmentally selected Iraq reconstruction contract actions. In written comments, State and USAID concurred with the report findings.
DOD provided a technical comment. While no single, comprehensive system currently tracks governmentwide Iraq reconstruction contract data, available data showed that from October 1, 2003, through March 31, 2006, DOD, USAID, and State collectively awarded the majority of Iraq reconstruction contracts competitively. Based on competition information we obtained on $10 billion of the total $11.6 billion in IRRF obligations by these agencies during the period of our review, we found that about $9.1 billion--or 91 percent--was for competitively awarded contracts. While our ability to obtain complete competition data for all DOD Iraq reconstruction contract actions was limited because not all DOD components consistently tracked or fully reported this information, we obtained information on approximately $7 billion, or 82 percent, of DOD's total Iraq reconstruction contract obligations, and of this, we found that competition occurred for nearly all of the obligations. Additionally, based on complete data for the period of our review, we found that USAID competitively awarded contract actions for 99 percent of its obligations, while State awarded contract actions competitively for only 10 percent of its obligations. GAO reviewed the files for 51 contract actions totaling $1.55 billion--22 of which were awarded noncompetitively and 29 of which were awarded competitively--almost all of which contained proper documentation. One contract file--for a noncompetitively awarded task order issued by State--did not contain justifications or other required documentation. DOD was also unable to provide documentation for 4 of the competitively awarded contract actions. Of the 22 noncompeted contract actions in GAO's review, State should have notified Congress of 2 actions awarded using other than full and open competition in accordance with notification requirements but did not. State officials told GAO that they have taken steps to address the problem. GAO did not identify any DOD or USAID contract actions within the sample that required notification.
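The competition shares summarized above follow directly from the obligation figures reported in this review. The sketch below, a minimal Python illustration using the rounded dollar amounts cited above (the variable names are ours, not the agencies'), reproduces the arithmetic; small differences from the report's percentages can arise because the inputs are rounded.

    # Shares of IRRF contract obligations, computed from the rounded figures cited above.
    competitively_awarded = 9.1e9    # dollars awarded competitively among obligations with competition data
    obligations_with_data = 10.0e9   # IRRF obligations for which competition information was obtained
    total_irrf_obligations = 11.6e9  # total IRRF obligations by DOD, USAID, and State in the period

    print(f"Obligations with competition data: {obligations_with_data / total_irrf_obligations:.0%} of total")
    print(f"Competitively awarded share:       {competitively_awarded / obligations_with_data:.0%}")  # about 91 percent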
According to NRC’s website, radiation doses, such as those received by survivors of the atomic bombs in Japan, can cause cancers such as leukemia and colon cancer and, if levels are high enough, acute radiation syndrome. The symptoms of this syndrome range from nausea, fatigue, and vomiting to death within days or weeks. The higher the radiation dose, the sooner the effects of radiation will appear, and the higher the probability of death. For example, according to NRC’s website, 134 of the plant workers and firefighters battling the fire at the 1986 Chernobyl nuclear power plant accident received high doses of radiation and suffered from acute radiation syndrome. Of these, 28 died within the first 3 months from their radiation injuries. In contrast, the effects of low-dose radiation are more difficult to detect. In particular, below about 100 millisieverts (mSv) (10 rem)—the level below which the National Academies’ 2006 report on radiation and human health considered radiation to be low dose—data do not definitively establish the dose-response relationship between cancer and radiation exposure. It is often not possible to determine the extent to which a health outcome such as cancer is caused by low dose radiation because of the potential confounding effects of other chemical and physical hazards and lifestyle factors, such as smoking and diet. In addition, much of the data on health effects of radiation exposure come from non-U.S. populations, such as Japanese atomic bomb survivors, who received a large exposure to radiation over a short period of time (an acute exposure), and there is uncertainty about the extent to which the health effects for these populations can be extrapolated to a U.S. population that is regularly (chronically) exposed to low-dose radiation. The roles of federal agencies in developing and applying radiation protection requirements and guidance vary depending on the setting in which radiation exposure occurs. For the four settings in our review— operation and decommissioning of nuclear power plants, cleanup of sites with radiological contamination, use of medical equipment that produces radiation, and accidental or terrorism-related exposure to radiation—the key agencies for establishing dose limits and guidance levels are EPA, NRC, DOE, and FDA. EPA advises federal agencies about radiation matters that affect public health and provides technical information for conducting radiation risk assessments; federal and state agencies use such assessments to develop and implement radiation protection regulations and standards. EPA also develops requirements and guidance for particular settings in which radiation exposure can occur. For example, EPA has developed regulations to limit discharges of radioactive material affecting members of the public from operations associated with use of nuclear energy to produce electrical power for public use, such as nuclear power plants. In addition, EPA has developed guidance on establishing protective cleanup levels for radioactive contamination at sites cleaned up under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). It has also developed guidance on levels of radiation exposure that would trigger public safety measures, such as evacuation, to minimize or prevent radiation exposure during an emergency. NRC is responsible for protecting people and the environment from unnecessary exposure to radiation as a result of civilian uses of nuclear materials. 
Among other things, NRC has established dose limits for workers and the public exposed to radiation from the operation and decommissioning of nuclear power plants, as well as minimum requirements for emergency plans for protecting members of the public from exposure in the event of a radiological emergency. NRC also has the primary responsibility for licensing, inspecting, and regulating medical uses of nuclear material. DOE is responsible for ensuring that its facilities are managed to protect workers and the public. As part of this responsibility, DOE has established radiation dose limits for workers at its facilities and public dose limits for DOE radiological activities, including cleanup of radioactive contamination at DOE sites. In addition, under the Atomic Energy Act of 1954, DOE is the federal agency that currently has primary responsibility for research related to nuclear energy. This responsibility includes the protection of health during activities that can result in exposure to radiation. DOE addresses this requirement through research to determine if DOE workers and people living in communities near DOE sites are adversely affected by exposures to hazardous materials from site operations. DOE’s National Nuclear Security Administration (NNSA) assists in emergency response to accidental or terrorism-related exposure to radiation by characterizing radiation levels in the area of an accident or terrorist event and providing information to emergency-response decision makers. FDA has issued radiation safety regulations for medical equipment, such as diagnostic X-ray systems. According to FDA officials, FDA’s regulations generally do not limit the dose to the patient but instead prescribe mandatory performance standards for most radiology medical devices, such as standards for the display of cumulative time that an X-ray system is activated. FDA has also developed guidance for state and local agencies to aid in emergency response planning for accidental or terrorism-related radioactive contamination of human food and animal feeds. Other federal agencies also have roles in radiation protection. For example, ionizing radiation is addressed in specific OSHA standards for general industry, shipyard employment, and construction. According to DOD officials, DOD operates facilities and engages in activities where radiation exposure can occur and implements occupational and public dose limits established by NRC and states in which these facilities and activities are located. NASA sets radiation exposure limits for space flight and supports research on the health effects of cosmic radiation to better manage health risks to astronauts. DHS’s Federal Emergency Management Agency provides guidance on responding to incidents involving release of radioactive material and has established procedures for review and approval of state and local emergency plans for the offsite effects of a radiological emergency that may occur at a commercial nuclear power facility. Two U.S. scientific advisory bodies—the National Academies’ Nuclear and Radiation Studies Board and the National Council on Radiation Protection and Measurements (NCRP)—and one international body—the International Commission on Radiological Protection (ICRP)—are involved in analyzing scientific developments regarding the health effects of radiation exposure and advising federal agencies. 
The National Academies’ Nuclear and Radiation Studies Board conducts studies on safety and other issues associated with nuclear and radiation-based technologies. The board has published a series of seven reports to advise the U.S. government on the relationship between exposure to radiation and human health, with the most recent report published in 2006. NCRP, a congressionally-chartered, nonprofit educational and scientific body, seeks to formulate and disseminate information, guidance, and recommendations on radiation protection and measurements that represent the consensus of leading scientific thinking. NCRP issues reports on specific issues of concern to federal agencies, such as on the use of medical equipment that produces radiation. ICRP, an independent, international organization with members consisting of scientists and policymakers in the field of radiological protection, offers recommendations to regulatory and advisory agencies on protection against radiation. In addition to addressing particular areas within radiological protection, its publications describe an overall system of radiological protection. Several other organizations are involved in scientific research and standards setting for protection against radiation. For example, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), which includes 27 United Nations states as members of its scientific committee, has a mandate to assess and report on levels and effects of exposure to radiation. Its summaries of basic scientific studies, along with scientific developments reported by the National Academies and other national organizations, serve as a primary source of information for NCRP and ICRP. The International Atomic Energy Agency, in collaboration with other organizations, has issued basic safety standards for protecting people and the environment from harmful effects of radiation. It has also issued safety requirements for preparedness and response for a nuclear or radiological emergency. The World Health Organization, one of the organizations that has collaborated with the International Atomic Energy Agency, also supports research on the health effects of radiation. EPA, NRC, DOE, and FDA have generally used the advice of scientific advisory bodies to develop and apply radiation protection requirements and guidance for workers and the public for the four radiation settings in our review. Three scientific advisory bodies—ICRP, NCRP, and the National Academies’ Nuclear and Radiation Studies Board—have supported the use of the linear no-threshold model for such requirements and guidance; this model assumes that the risk of cancer increases with every incremental increase in radiation exposure. The requirements and guidance the four agencies have developed and applied vary depending on the settings, in part because the scientific advisory bodies on which the agencies relied have also developed recommendations specific to the settings we reviewed. In developing and applying radiation protection requirements and guidance for workers and the public—specifically, developing limits on dose or increased health risk and guidance levels on exposure—EPA, NRC, DOE, and FDA have generally taken the advice of scientific advisory bodies. This advice includes the use of the “linear no-threshold model,” which assumes that the risk of cancer increases with every incremental increase in radiation exposure. 
The model is used to estimate the risk of cancer when the overall level of exposure is in the range considered to be low dose. At this level of exposure, data from epidemiological studies of individuals exposed to radiation provide evidence of increased risk of cancer, but with uncertainties about the extent of this risk. Under this model, federal regulations set dose limits for radiation exposure that are below the level that the National Academies' 2006 report on radiation and human health used to define low-dose radiation. For example, NRC's annual dose limit for members of the public (excluding natural, or background, sources of radiation) is 1 mSv (0.1 rem), or a hundredth of the level the National Academies considers low dose. Three key scientific advisory bodies—ICRP, NCRP, and the National Academies' Nuclear and Radiation Studies Board—have supported use of this model for development of radiation protection requirements and guidance. For example: ICRP, in its 2007 update to its recommendations on radiological protection, stated that at low doses of radiation, it considers the linear no-threshold model to be the best practical approach to managing risk from radiation exposure. In addition, ICRP stated that this model is consistent with the principle that actions should be taken to avoid or diminish harm to human life or health that is scientifically plausible but uncertain, as is the case at low doses of radiation. ICRP's update also explained that it periodically re-evaluates its recommended dose limits based on its evaluation of new scientific data and information. NCRP, in a 2001 study on the linear no-threshold model that it continues to reference today, noted that the existing epidemiological data on the effects of low-dose radiation are inconclusive and, in some cases, contradictory, prompting some observers to dispute the validity of the linear no-threshold model. Nevertheless, NCRP concluded that while there is uncertainty about the health effects of low-dose radiation, the linear no-threshold model is more plausible than other models, such as the hormesis model, which assumes that low-dose radiation protects against rather than increases the risk of cancer. Further, according to NCRP's president, recent epidemiological studies indicate that the preponderance of evidence continues to support the linear no-threshold model for use in radiation protection. The National Academies, in its 2006 report on low-dose radiation, supported the use of the linear no-threshold model, stating that the balance of evidence from epidemiologic, animal, and mechanistic studies tends to favor a simple proportionate relationship at low doses between radiation dose and cancer risk. According to the National Academies, the availability of new and more extensive data since the publication of its previous report in 1990 strengthened confidence in the 2006 report's estimates of cancer risk. For example, the 2006 report incorporated data from an additional 15 years of follow-up of Japanese atomic bomb survivors and from studies of nuclear workers exposed to low-dose radiation. Nevertheless, these advisory bodies have recognized challenges in accurately estimating cancer risks from very low doses of radiation exposure when using the linear no-threshold model. For example, the epidemiological data used to estimate the dose-risk relationship for American workers over a 1-year period are largely from studies of Japanese atomic bomb survivors' exposure to radiation from the atomic bomb.
As a result, to account for different doses and dose rates of radiation exposure, advisory bodies have recommended that estimates of the risk from low doses that are based on these data be adjusted accordingly. For example, in radiation protection guidance issued in 2007, ICRP recommended that cancer risk estimates for low doses of radiation be reduced by half. Figure 1 depicts examples of the dose limits and guidance levels established by EPA, NRC, and DOE. (See app. I for further examples.) As shown in the figure, the public dose limit for nuclear power plants is one-third the U.S. average natural background radiation level. Some stakeholders have questioned whether radiation dose limits based on the linear no-threshold model are too strict or not strict enough and have advocated for revising dose limits and guidance levels. For example, in 2015, NRC received three petitions from different individuals proposing that NRC raise its occupational and public dose limits. One petitioner commented that some studies suggest low levels of radiation have protective effects and that the costs of complying with linear no-threshold-based regulations were high. Similarly, a joint study by the French National Academies of Science and of Medicine in 2005 concluded that epidemiological studies have been unable to find a significant increase of cancer at low levels of radiation exposure. Conversely, representatives we interviewed from two nonprofit groups, Physicians for Social Responsibility and Beyond Nuclear, told us that dose limits based on the linear no-threshold model were not strict enough to protect vulnerable groups, such as children and pregnant women and their fetuses. NRC officials told us that in the absence of convincing evidence that there is a dose threshold below which low levels of radiation are beneficial or not harmful, NRC will continue to follow the recommendations of scientific advisory bodies to use the linear no-threshold model. Similarly, officials from EPA told us that they would consider changing the use of the linear no-threshold model as the basis of their requirements and guidance only if there were a strong recommendation from scientific advisory bodies on radiation protection as well as an endorsement of the change by the National Academies. In addition, EPA published a paper in 2009 stating that it believed the evidence on health effects of radiation exposure does not preclude the possibility of a threshold below which there is no increased risk of cancer but that the evidence at present does not support the existence of such a threshold. The limits and guidance the four agencies have developed and applied vary depending on the settings in which exposure can occur, as described below. This variation exists in part because the scientific advisory bodies on which the agencies relied have developed recommendations specific to the four settings we reviewed: (1) operation and decommissioning of nuclear plants; (2) cleanup of sites with radiological contamination; (3) use of medical imaging equipment that produces radiation; and (4) accidental or terrorism-related exposure to radiation. According to NRC's notice of its final rule for standards for protection against radiation, NRC used ICRP's recommendations issued in 1977 as the basis for NRC's regulations.
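To make the arithmetic behind the linear no-threshold model and the downward adjustment described above concrete, the following is a minimal sketch in Python. The risk coefficient, the adjustment factor, and the function name are illustrative assumptions of ours, not values drawn from the agencies' regulations or the advisory bodies' publications.

    # Illustrative linear no-threshold risk estimate. The coefficient and the
    # dose-rate adjustment below are assumptions for illustration only.
    ACUTE_RISK_PER_SV = 0.10   # assumed lifetime cancer risk per sievert, derived from acute, high-dose data
    ADJUSTMENT = 2.0           # assumed factor for reducing the estimate by half for chronic, low doses

    def lnt_lifetime_risk(dose_msv):
        """Estimate lifetime cancer risk for a chronic low dose under the linear no-threshold model."""
        dose_sv = dose_msv / 1000.0                      # convert millisieverts to sieverts
        return dose_sv * ACUTE_RISK_PER_SV / ADJUSTMENT  # risk scales linearly with dose, with no threshold

    # Example: NRC's annual public dose limit of 1 mSv (0.1 rem)
    print(f"Estimated lifetime risk at 1 mSv: {lnt_lifetime_risk(1.0):.1e}")  # 5.0e-05 under these assumptions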
ICRP stated that it developed its 1977 recommendations on occupational dose limits in part through a comparison between the cancer risk from occupational exposure to radiation and the rates of occupational fatalities in industries recognized as having high standards of safety. Thus, nuclear power plant workers would not face a greater risk of cancer than the fatality risks, whether due to accidents or disease, that workers face in other industries. For the general public, ICRP suggested that the cancer risk—and therefore the dose limit—should be less than that for workers and should be comparable to the public’s risk from everyday activities, such as taking public transportation. ICRP further stated that when setting public dose limits, an agency must consider members of the public belonging to critical groups, such as children and pregnant women, who may be more susceptible to the effects of radiation than the population as a whole. According to one of ICRP’s key recommendations for radiation protection, radiation exposure should be limited to keep the likelihood and magnitude of exposure as low as reasonably achievable, taking into account economic and societal factors. In keeping with this recommendation, NRC requires nuclear power plants to have a radiation protection program that includes measures to keep doses as low as reasonably achievable (ALARA). NRC defines ALARA to mean making every reasonable effort to maintain exposures to radiation as far below dose limits as is practical consistent with, among other things, the economics of improvements in relation to benefits to the public health and safety. Under the ALARA principle, NRC encourages nuclear plants to demonstrate their use of the principle through cost-benefit analyses or other quantifiable methods. According to NRC officials, nuclear power plants typically set their own occupational dose limits at 40 percent of NRC’s regulatory limit of 50 mSv (5 rem) per year, and cost is generally a key criterion that plants use to determine what actions to take to reduce radiation exposures under their ALARA programs. At the nuclear power plant we visited, representatives told us that under their ALARA plan, the plant set its own dose limits for workers at 40 percent of the regulatory limit. Officials at the plant told us that they have been able to keep exposures below the plant’s own limit by continuously seeking opportunities to reduce unnecessary worker exposure to radiation, such as using robots to perform maintenance work in radiation areas. According to NRC’s 2014 annual report on occupational radiation exposure, none of the 124,831 nuclear power plant workers who were monitored in 2014 received a dose above NRC’s regulatory limit, and over 99 percent of these workers received a dose below the 40-percent level used by many plants as their own limit. DOE and EPA both used the recommendations of scientific advisory bodies, including the use of the linear no-threshold model, to develop limits on dose or increased health risk for members of the public from sites with radiological contamination, even though the agencies used different approaches in implementing the recommendations. For example, in an order on radiation protection of the public and environment, DOE set a public dose limit of 1.0 mSv (0.1 rem) per year, which was the dose limit recommended by ICRP in 1990. 
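To illustrate the relationship between NRC's occupational dose limit and the 40-percent administrative limit described above, the following is a minimal Python sketch; the worker doses in the list are hypothetical values used only to show the comparison, not data from NRC's annual report or from the plant we visited.

    # Comparison of hypothetical annual worker doses against NRC's occupational
    # limit and a typical plant administrative (ALARA) limit.
    NRC_OCCUPATIONAL_LIMIT_MSV = 50.0   # NRC regulatory limit, 50 mSv (5 rem) per year
    ADMIN_FRACTION = 0.40               # typical plant administrative limit, per NRC officials

    admin_limit_msv = NRC_OCCUPATIONAL_LIMIT_MSV * ADMIN_FRACTION  # 20 mSv (2 rem) per year

    worker_doses_msv = [0.4, 3.2, 12.5, 19.0, 21.3]  # hypothetical annual doses, in mSv

    for dose in worker_doses_msv:
        if dose > NRC_OCCUPATIONAL_LIMIT_MSV:
            status = "exceeds the NRC regulatory limit"
        elif dose > admin_limit_msv:
            status = "exceeds the plant's administrative limit"
        else:
            status = "within both limits"
        print(f"{dose:5.1f} mSv: {status}")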
In developing its recommendations, ICRP used the assumptions of the linear no-threshold model to identify a dose that would not cause more than a small increase in the age-specific mortality rate from cancer. In contrast, according to a 2014 EPA memorandum on cleanup of sites with radiological contamination, EPA uses a risk-based approach to prescribe cleanup levels for carcinogens, including radiation, within a range of no more than 1 in 10,000 to 1 in 1 million additional cancers in a population during their lifetimes. Under this approach, EPA uses the assumption of the linear no-threshold model to set site-specific levels of cleanup that account for various factors, such as the site's expected future land use and the presence of other contaminants, such as chemicals that may also increase the risk of cancer. Under its 2014 memorandum, EPA determined that when using a federal or state standard for radiation protection at sites with radiological contamination, this standard is generally not sufficiently protective if it is greater than 0.12 mSv (0.012 rem) per year. According to FDA officials, FDA does not have the authority to regulate the total amount of radiation exposure a patient receives from medical imaging equipment. They also commented that decisions such as the frequency of taking medical images are based on patient need and that those decisions determine the total amount of radiation exposure to the patient. Similarly, ICRP's 2007 guidance on radiation protection states that, while all use of radiation in medicine should be justified and the radiation dose from each examination should be as low as reasonably achievable, radiation dose limits do not apply to medical exposures of patients. FDA officials stated that, instead of setting limits on the total amount of radiation exposure to patients, the agency sets maximum limits on the radiation output of medical equipment. In particular, they stated that the agency based its equipment standards on NCRP guidance from 1968 on medical X-ray and gamma ray protection. They also commented that this NCRP guidance provided a dose rate limit for the equipment and stated that the exposure rate should be as low as reasonably achievable. In a 2005 Federal Register notice on FDA's change to its performance standards for medical-imaging equipment, FDA stated that it used the assumptions of the linear no-threshold model to determine that the health benefits to medical staff and patients (in monetary terms) exceeded the costs incurred by equipment manufacturers and FDA to implement the change. In keeping with the principle that radiation exposure should be kept as low as reasonably achievable, FDA encourages voluntary measures by health care providers to address radiation exposure to patients from the use of medical-imaging equipment. Under an initiative launched in 2010, FDA identified a number of factors that contribute to levels of exposure that exceed those needed to meet patients' clinical needs, and FDA identified steps to mitigate these factors. For example, its initiative recommended that healthcare professional organizations continue to develop nationally recognized benchmark levels for medical-imaging procedures that use radiation, and FDA stated that it has increased its participation in these efforts both on its own and through collaborative efforts with industry and healthcare professional organizations.
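To show how a lifetime risk target in EPA's range of 1 in 10,000 to 1 in 1 million, described earlier in this section, could translate into an annual dose under linear no-threshold assumptions, the following is a minimal Python sketch. The risk coefficient and the 30-year exposure duration are assumptions chosen for illustration; they are not the parameters EPA uses in its site-specific analyses.

    # Illustrative conversion of a lifetime cancer risk target to an annual dose
    # under linear no-threshold assumptions (all parameters are assumptions).
    RISK_PER_MSV = 5e-5    # assumed lifetime cancer risk per millisievert
    EXPOSURE_YEARS = 30    # assumed duration of exposure at a cleaned-up site

    for lifetime_risk in (1e-4, 1e-6):  # 1 in 10,000 and 1 in 1 million
        total_dose_msv = lifetime_risk / RISK_PER_MSV
        annual_dose_msv = total_dose_msv / EXPOSURE_YEARS
        print(f"Lifetime risk of {lifetime_risk:.0e}: roughly {annual_dose_msv:.4f} mSv per year")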
Benchmark levels are not mandatory but allow medical facilities to investigate when a medical examination exceeds the benchmark and determine whether it is possible to reduce exposure without adversely affecting image quality. To develop guidance for state and local governments' emergency response to deliberate or accidental radiological incidents, EPA and FDA used the recommendations of scientific advisory bodies, including the assumptions of the linear no-threshold model, to recommend radiation doses at which protective actions would provide a net benefit when compared with other factors, such as the cost of the actions taken. For example, according to its 2016 guidance on emergency response to radiological incidents, EPA compared the cost of evacuation under several scenarios with the number of cancer deaths avoided to recommend a radiation dose to the public at which evacuation should be considered. Using ICRP guidance, EPA assumed a linear relationship between radiation exposure and cancer risk—a principle of the linear no-threshold model—to calculate the number of potential cancer deaths. According to EPA's guidance, the radiation dose the agency identified fell within the risk level it considered acceptable, while also meeting EPA's criteria that the cost of the protective action be justified by the reduction of risk to public health. According to EPA's guidance, decisions on the radiation doses at which to take protective actions need to consider health risks other than radiation. For example, weather hazards may impede evacuation and favor sheltering in place instead. Similarly, EPA's guidance explains that decisions on relocation need to account for a variety of health problems that relocation itself can cause. In its response to frequently asked questions about radiation in Fukushima, Japan, the World Health Organization noted that these problems were evident in the aftermath of the March 11, 2011, earthquake and subsequent tsunami that caused significant damage to the Fukushima Daiichi Nuclear Power Station, releasing radioactive material into the environment. There were no known acute deaths or illnesses from radiation exposure, but the relocation of thousands of people caused an increase in disaster-related deaths, as well as mental health problems and problems with access to health care, according to the World Health Organization. FDA also relied on ICRP guidance and the linear no-threshold model to recommend radiation doses at which protective actions would provide a net benefit when compared with other factors. In particular, according to a 1998 Federal Register notice about recommendations for accidental radioactive contamination of human food and animal feed, the agency developed protective action guides for state and local agencies responding to these types of accidents. These guides provide a recommended radiation dose range in which countermeasures should be taken for the contaminated food and feed after an accident. This range is based on values set by ICRP on the basis of the linear no-threshold model. For fiscal years 2012 through 2016, seven federal agencies—CDC, DOD, DOE, EPA, NASA, NIH, and NRC—obligated about $210 million for research on the health effects of low-dose radiation, but annual funding decreased by 48 percent.
During the period we reviewed, the seven federal agencies that funded this research collaborated on particular projects, but they did not use a collaborative mechanism to address overall research priorities, such as research needs that advisory bodies identified regarding health effects of low-dose radiation. From fiscal year 2012 through fiscal year 2016, seven federal agencies obligated $209.6 million for research on the health effects of low-dose radiation. As shown in figure 2, DOE and NIH accounted for most of this funding, with DOE obligating $116.3 million and NIH obligating $88.6 million, or about 56 percent and 42 percent of the total, respectively. The five other agencies—NRC, NASA, DOD, EPA, and CDC—obligated the remaining $4.7 million. The research that the seven federal agencies funded included both epidemiological and radiobiological studies. Agency officials told us that both types of research are important to better understand the health effects of low-dose radiation and could inform future efforts to update dose limits and guidance levels for radiation exposure. Two of the largest epidemiological studies funded by federal agencies were the Epidemiologic Study of One Million U.S. Radiation Workers and Veterans (Million Person Study)—an ongoing study headed by NCRP—and the International Nuclear Workers Study, a multiyear study that includes over 300,000 workers from France and the United Kingdom as well as from the United States. The Million Person Study began in 2009 and includes plans to examine mortality statistics on multiple cohorts (populations) of over 1 million U.S. radiation workers, veterans, and other individuals. The purpose is to provide information about low-dose radiation health risks when the exposures are received gradually over time and not instantaneously, as was the case for the 1945 atomic-bomb exposures in Japan. Officials from two agencies that fund or use the results of research on the health effects of low-dose radiation—DOE and NRC—told us that NCRP's Million Person Study can help address gaps in research on the health effects of chronic, low-dose exposures. For example, according to NRC, the study is important because it allows the agency to examine the radiation risks to workers exposed to doses and dose rates found in actual exposure settings. DOE, EPA, NASA, and NRC have provided funding for the Million Person Study. DOE's Office of Science provided the initial funding of $500,000 for the pilot study in fiscal year 2009, as well as $869,000 for a subsequent larger study, as part of its Low Dose Radiation Research Program, but DOE stopped funding the study in fiscal year 2010 to fund other research priorities. Since DOE's initial $500,000 funding for the pilot study, NCRP has received a total of $4.2 million in additional funding from DOE, EPA, NASA, and NRC, according to DOE officials. In addition, an explanatory statement accompanying the fiscal year 2017 Consolidated Appropriations Act directed DOE to provide not less than $500,000 from funds for DOE's Office of Environment, Health, Safety and Security for this study. With the funding it has received, NCRP completed various feasibility studies and follow-up work on several of the different cohorts of individuals included in the overall study. For example, NCRP began work on a mortality study of nuclear power plant workers. NCRP has estimated that it would need $20 million to analyze and report on all of the cohorts included in the overall study.
In addition, NCRP’s president told us that continuous funding could help to retain the study’s original investigators, who might otherwise move onto other work. DOE also provided about $2.1 million for the International Nuclear Workers Study for fiscal years 2012 through 2016, and CDC provided $66,000. According to CDC officials, the workers in the study experienced a similar form of radiation, thereby simplifying the study’s analysis, and the results of the study have shown associations between radiation exposure and leukemia and solid cancers. Additional information on the types of low-dose radiation research funded by federal agencies and the results of this research is described below: DOE has two offices that have funded research on the health effects of low-dose radiation—the Office of Science and the Office of Environment, Health, Safety and Security—according to funding information DOE provided. The Office of Science established the Low Dose Radiation Research Program in 1998 and funded it through fiscal year 2016. A primary focus of this program was to fund radiobiological research, and over the course of the program, it provided an average of about $14 million per year for such research, which included funding for the Million Person Study. According to DOE’s website for the program, the program provided data and information about the low-dose range of exposure, producing 737 peer-reviewed publications as of March 2012. According to a 2016 report from DOE’s Biological and Environmental Research Advisory Committee, among the important discoveries under the program was a phenomenon known as the bystander effect, where cells may sustain radiation damage even though no radiation passes through them. Other areas of discovery included the role of DNA repair and the immune system, as well as the potential beneficial effects at the cellular level caused by low-dose radiation. The Office of Environment, Health, Safety and Security provided annual funding from fiscal year 2012 through fiscal year 2016 for epidemiological studies in two areas: (1) the Radiation Effects Research Foundation, which conducts studies involving Japanese atomic bomb survivors in Hiroshima and Nagasaki and is a source of data used by national and international standard-setting organizations and scientific advisory bodies to set regulations and (2) assessments of worker and public health risks from radiation exposure resulting from nuclear weapons production activities in the former Soviet Union, which provided DOE researchers with data from Russian workers who experienced chronic exposure to radiation. NIH has funded and conducted both epidemiological and radiobiological studies on low-dose radiation, according to NIH officials. The officials stated that the studies are conducted through the National Cancer Institute’s internal research program for radiation epidemiology, as well as through NIH’s research programs for external funding of investigator-initiated research. The aim of the internal research program for radiation epidemiology is to identify, understand, and quantify the risk of cancer in populations exposed to various types of radiation, and to advance understanding of cancer caused by radiation. Other institutes of NIH, including the National Institute of Environmental Health Sciences, also fund research related to the health effects of radiation exposure as part of NIH’s overall mission to fund medical research. 
Examples of research supported by NIH have included (1) a study conducted in partnership with the U.S. Department of Veterans Affairs on cancer mortality among military participants in U.S. nuclear weapons tests and (2) a tissue bank with samples from Chernobyl survivors. These samples are being used to understand the effects of radiation exposure from nuclear power plant accidents. NIH has also funded radiobiological research on high-dose radiation, and some of this research also applies to low-dose radiation. EPA helps fund research through an ongoing interagency agreement with DOE's Oak Ridge National Laboratory, according to EPA officials. The funding supports the development of models that provide information about doses to particular organs from ingestion or inhalation of a specific quantity of a radioactive element, such as cesium or plutonium. According to EPA instructions for calculating radiation dose and risk, EPA uses this information to estimate cancer risks of exposure to over 800 radionuclides. These estimates, according to EPA, can be used by federal and state agencies to develop and implement radiation protection regulations and standards. EPA also provided funding for the Million Person Study. According to EPA officials, the agency contributed to the study to be able to discuss and review the research in its early stages. NRC officials we interviewed said that NRC does not generally fund research on radiation's health effects but agreed to provide funding to the Million Person Study with the understanding that NRC would be a minority funding partner in the program. However, after DOE stopped funding the study, NRC became the largest contributor, providing a total of $2.1 million in fiscal years 2012 to 2016. NRC also funded an epidemiological study analyzing cancer risks in populations living near U.S. nuclear facilities, but it did not continue the study because of the study's limited usefulness for drawing conclusions about risk and its long duration and high cost, according to NRC officials. NASA officials told us that the agency mostly conducts research on space-based radiation, which differs from ground-based radiation in terms of its physical characteristics and its effects on health. In the past 5 years, NASA has provided over $100 million in funding for research on space-based radiation, including research on its health effects, such as on the risk of acute central nervous system effects. The agency also provided funding for low-dose radiation research at DOE, as well as for the Million Person Study. CDC has provided some funding for epidemiological studies such as those evaluating the long-term effects of occupational radiation exposures or analyzing mortality among nuclear workers, according to funding information provided by CDC. For example, according to this information, CDC partially funded the International Nuclear Workers Study. CDC officials told us that the program has published more than two dozen studies related to occupational exposures and cancer risks among workers across the DOE complex. CDC's National Institute for Occupational Safety and Health also provided funding for institute researchers to conduct studies on flight attendants exposed to cosmic radiation and on uranium miners exposed to radon. DOD has contributed a small amount of funding for radiation health effects research activities through the Armed Forces Radiobiology Research Institute, according to funding information provided by DOD.
Most of the work conducted through the institute is research on radiation countermeasures—treatments that could be used in the aftermath of an attack involving the release of radioactive material. In addition, according to DOD’s funding information, the institute provides some funding to researchers in order to better understand, for example, cancer risks due to low-dose radiation exposure. As shown in figure 3, in fiscal years 2012 through 2016, the seven agencies collectively decreased their annual funding obligations for research on health effects of low-dose radiation by 48 percent, from $57.9 million in fiscal year 2012 to $30.4 million in fiscal year 2016, and NIH and DOE decreased their annual funding obligations by 48 and 45 percent, respectively. DOE accounted for a large portion of this overall decrease in annual funding. Specifically, over this 5-year period, DOE reduced its annual funding obligations for this area of research by 45 percent—from $32.6 million in fiscal year 2012 to $18.0 million in fiscal year 2016. According to DOE, the decrease was primarily due to DOE’s reduction in funding for its Low Dose Radiation Research Program. DOE’s Office of Science established this program in 1998 to fund research on the effects of radiation on genomes, cells, and living organisms, with the aim of providing a scientific basis for developing radiation protection standards in line with the research results that demonstrate the response of complex biological systems to low doses of radiation. According to DOE officials, decreases in funding for the program reflected a shift toward bioenergy and environmental research within the department’s Office of Science. These officials said that the agency provided the final funding for the program in fiscal year 2016. In contrast, funding remained stable for research supported by DOE’s Office of Environment, Health, Safety and Security on epidemiological studies in Japan and Russia. Similarly, over the 5-year period, NIH’s funding for low-dose radiation research decreased by 48 percent—from $23.1 million in fiscal year 2012 to $12.0 million in fiscal year 2016. NIH officials commented that sequestration occurred during the time period in which radiation research funding decreased. In addition, NIH officials explained that funding levels for a particular disease or research area can fluctuate depending on several factors, including the number and quality of research proposals submitted and the outcome of NIH’s peer reviews of the proposals, as well as the overall research budget. Table 1 shows agencies’ annual obligations for research on health effects of low-dose radiation. The seven agencies that funded research on health effects of low-dose radiation for fiscal years 2012 through 2016 collaborated on particular research projects through the use of several mechanisms, including the following: Joint funding of individual research projects: For example, as previously mentioned, DOE’s Office of Science, EPA, NASA, and NRC jointly funded the Million Person Study, and CDC and DOE’s Office of Environment, Health, Safety and Security helped fund the International Nuclear Workers Study. Participation in interagency committees: For example, DOD, DOE, EPA, HHS and NRC are members of the Interagency Steering Committee on Radiation Standards, which has a goal of promoting consistency in federal radiation protection programs. 
Collaborating on research on low-dose radiation is not a committee focus, but the committee provides a forum for sharing information on research developments. Similarly, the head of DOE's Office of Environment, Health, Safety and Security co-chairs a bilateral U.S.-Russian Federation committee for coordinating research on the health effects of exposure to radiation in the Russian Federation from the production of nuclear weapons. CDC, DOD, EPA, NASA, and NRC are also U.S. members of the committee.

Participation in meetings and conferences: For example, in June 2017, DOE's Oak Ridge National Laboratory hosted a workshop on radiation-protection research needs. The workshop agenda included presentations by DOE, EPA, NRC, FDA, and NIH's National Cancer Institute. In addition, DOE officials told us they share research results informally with other agencies through their participation in conferences held by NCRP and other groups, and NIH officials also said that members of the radiation epidemiology scientific community have the opportunity to connect at specialized meetings.

However, the seven agencies that fund research on health effects of low-dose radiation did not use a collaborative mechanism to address overall research priorities, such as research needs that scientific advisory bodies have identified. The 2006 National Academies report to advise the U.S. government on the relationship between exposure to radiation and human health—which was funded in part by DOD, DOE, EPA, and NRC—identified 12 areas of research need. Many of these areas were related to uncertainties in the linear no-threshold model and, by extension, in the agencies' dose limits and guidance levels that are based in part on that model. In addition, as previously noted, the 2016 report of DOE's Biological and Environmental Research Advisory Committee also provided information about research needs in low-dose radiation and found that further research could decrease uncertainty in predicting cancer risk from low-dose radiation. The report recommended that, should DOE decide to continue research in this area, workshops be convened to formulate a specific research program. In addition, the report stated that other agencies—including NRC, NIH, EPA, DOD, and NASA—could benefit from the reduction in uncertainty that could be obtained by this research. Until recently, DOE's Low Dose Radiation Research Program provided a stable source of funding for such research, and, according to DOE's website, DOE took a leading role in advocating for greater communication and coordination between the fields of radiation biology and epidemiology. As previously mentioned, DOE is the federal agency that currently has primary responsibility under the Atomic Energy Act of 1954 for research related to the protection of health during activities that can result in exposure to radiation. DOE's decisions to reduce funding for the program in fiscal year 2012 and to stop funding it in fiscal year 2016 also reduced the role that DOE previously held as a leading source of federal funding for low-dose radiation research. DOE's reduced role has created a void in federal efforts to maintain a collaborative mechanism for low-dose radiation research, and no other agency has stepped forward to fill this void. Our previous work has shown that collaborative mechanisms can serve multiple purposes, such as leading interagency efforts to develop and coordinate sound science and technology policies across the federal government.
Although collaborative mechanisms differ in complexity and scope, they all benefit from certain key features, such as leadership, which raise issues to consider when implementing these mechanisms. Such issues include whether a lead agency or individual has been identified; if leadership is shared, whether the agencies have clearly defined roles and responsibilities; and how leadership will be sustained over the long term. For example, the Interagency Steering Committee on Radiation Standards includes a process for rotating the leadership role among member agencies. DOE is well positioned to lead an effort to ensure that federal agencies have a mechanism for interagency collaboration to address overall research priorities related to low-dose radiation health effects because of the agency's past experience as a leader in this area of research. Such a role is also consistent with DOE's research responsibility under the Atomic Energy Act of 1954. Such an effort could help DOE and the collaborating agencies determine roles and responsibilities, including leadership, when addressing shared research priorities. DOE and other federal agencies have invested millions of dollars in low-dose radiation research, and this research has led to a better understanding of the health effects of radiation exposure, thereby helping federal agencies develop and implement radiation protection requirements and guidance for workers and the public. DOE has provided more than half of all federal funding for this research over the past several years. Given the reduction in funding for low-dose radiation research, federal agencies can benefit from greater collaboration on addressing their research priorities in this area. Our previous work has shown that collaborative mechanisms can be used for coordinating federal science efforts and that agencies can enhance their collaborative efforts through key practices, such as agreeing on leadership roles and responsibilities. In the past, DOE took a leading role in both funding and evaluating low-dose radiation research, and the agency continues to fund a substantial portion of the research. However, more recently, DOE's funding has significantly decreased, resulting in a lack of leadership in this area. DOE, consistent with its past experience as a leader in this area of research and its research responsibility under the Atomic Energy Act of 1954, could assist agencies in developing an interagency collaborative mechanism for the future. We recommend that the Secretary of Energy lead the development of a mechanism for interagency collaboration to determine roles and responsibilities for addressing priorities related to research on the health effects of low-dose radiation. We provided a draft of this report to the Department of Commerce; DHS; DOD; DOE; Department of Labor; EPA; HHS's CDC, FDA, and NIH; NASA; and NRC for review and comment. DOE, the Department of Labor, EPA, HHS, and NRC provided technical comments, which we incorporated as appropriate. DOE also provided written comments, which are reproduced in appendix II. The other agencies did not provide any comments. DOE commented that, in general, the draft report reflects how federal agencies, including DOE, developed and applied radiation protection requirements and guidance for workers and the public. DOE did not concur with our recommendation that it lead the development of a mechanism for interagency collaboration on research on the health effects of low-dose radiation.
In particular, DOE stated that EPA and NRC also have legal mandates to research low-dose radiation exposure and that these agencies establish their research priorities in accordance with their respective budget authorities and recommendations from independent advisory bodies. DOE stated that, as a result, it would not be appropriate for DOE to lead the development of a mechanism for interagency collaboration. Instead, according to DOE, its experience indicates that the leadership of an organization with government-wide responsibilities would result in the most effective interagency collaboration. We believe that DOE's concerns stem from a misinterpretation of our recommendation, and we made several changes to our report and our recommendation to clarify DOE's role. In particular, we did not recommend that a mechanism for interagency collaboration serve as a replacement for agencies' legal mandates, budget authorities, and recommendations from independent advisory bodies. Instead, this mechanism would help agencies address shared research priorities, such as research needs that the National Academies, in advising the U.S. government, identified regarding health effects of low-dose radiation. According to officials we spoke with from DOE's Office of Environment, Health, Safety and Security, more collaboration among agencies on low-dose radiation research would be very helpful. In making our recommendation, we did not specify the coordinating mechanism that agencies should use and instead left it to DOE to lead the development of an appropriate mechanism. If the leadership of an organization with government-wide responsibilities would result in more effective interagency collaboration, as DOE suggested in its written comments, then DOE could implement our recommendation by working with such an organization to obtain its involvement in a coordination mechanism. We continue to believe that an interagency coordination mechanism for low-dose radiation research is needed and that DOE is in the best position to lead agencies in developing the most appropriate mechanism. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Commerce, the Secretary of Defense, the Secretary of Energy, the Secretary of Health and Human Services, the Secretary of Homeland Security, the Secretary of Labor, the Administrator of EPA, the Administrator of NASA, the Chairman of NRC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact John Neumann at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To prevent cancer and other harmful effects associated with exposure to radiation, federal agencies have established radiation protection measures that apply to a wide range of settings in which exposure can occur. These measures call for radiation exposure to workers and the public to be kept within regulatory limits (on either dose or increased health risk) or, in emergency situations, within non-binding guidance levels established to protect individuals.
Table 2 shows examples of federal agencies’ dose limits, guidance levels, and other radiation protection measures. John Neumann, (202) 512-3841 or neumannj@gao.gov. In addition to the individual above, Joseph Cook (Assistant Director), Allen Chan, Kendall Childers, Richard Frankel, Richard Johnson, David Messman, Cynthia Norris, Josie Ostrander, Todd Paulsen, Amber Sinclair, Sara Sullivan, and Jack Wang made key contributions to this report. | According to EPA, exposure to low doses of radiation does not cause immediate health effects but may increase a person's cancer risk. Federal agencies fund research on cancer risk, but uncertainties remain about risk assessments that federal agencies use to develop radiation protection regulations and guidance. GAO was asked to examine federal agencies' radiation protection requirements and guidance and related research. This report (1) describes how selected federal agencies have developed and applied radiation protection requirements and guidance and (2) examines the extent to which federal agencies have funded and collaborated on research on low-dose radiation's health effects for fiscal years 2012 to 2016. GAO selected four federal agencies, based on their development of requirements or guidance for settings in which radiation exposure to workers and the public can occur. GAO reviewed agency documentation and interviewed agency officials on the development of the requirements and guidance. GAO also collected and examined federal-funding data for low-dose radiation research from seven agencies that fund this research. The Department of Energy (DOE), Nuclear Regulatory Commission (NRC), Environmental Protection Agency (EPA), and Food and Drug Administration generally used the advice of scientific advisory bodies to develop and apply radiation protection requirements and guidance for workers and the public in the radiation exposure settings that GAO reviewed. These settings were: (1) the operation and decommissioning of nuclear power plants; (2) the cleanup of sites with radiological contamination; (3) the use of medical equipment that produces radiation; and (4) accidental or terrorism-related exposure to radiation. Specifically, the agencies relied on the advice of three scientific advisory bodies that supported the use of a model that assumes the risk of cancer increases with every incremental radiation exposure. Accordingly, the agencies have set regulatory dose limits and issued guidance to confine exposure to levels that reduce the risk of cancer, while recognizing that scientific uncertainties occur in estimating cancer risks from low-dose radiation. For example, NRC requires nuclear power plants to consider measures for limiting workers' exposure below NRC's regulatory dose limit, such as by using robots for maintenance work in radiation areas. GAO identified seven federal agencies that funded research on low-dose radiation's health effects. In fiscal years 2012 to 2016, DOE, NRC, EPA, and four other federal agencies obligated about $210 million for such research (see table). Although the agencies have collaborated on individual projects on radiation's health effects, they have not established a collaborative mechanism to set research priorities. GAO's previous work has shown that federal agencies can use such mechanisms to implement interagency collaboration to develop and coordinate sound science policies. 
In the past, DOE took a leading role in this area because DOE provided stable funding and advocated for greater coordination on research on low-dose radiation's health effects. However, since fiscal year 2012, DOE has phased out funding for one of its main research programs in this area. This has created a void in coordination efforts among federal agencies, and no other agency has stepped forward to fill this void. Because of DOE's prior experience as a leader in this area of research and its research responsibility under the Atomic Energy Act of 1954, it could play an important role in helping federal agencies establish a coordinating mechanism for low-dose radiation research. GAO recommends DOE lead development of a mechanism for interagency collaboration on research on low-dose radiation's health effects. DOE disagreed, stating that agencies set their own research priorities. GAO continues to believe that DOE is in the best position to lead such an effort, as discussed in the report.
Mortgage servicers are the entities that manage payment collections and other activities associated with home loans. Mortgage servicers can be large mortgage finance companies, commercial banks, or nondepository institutions. Servicing duties can involve sending borrowers monthly account statements, answering customer-service inquiries, collecting monthly mortgage payments, and maintaining escrow accounts for property taxes and insurance. In the event that a borrower becomes delinquent on loan payments, servicers also initiate and conduct foreclosures. Errors, misrepresentations, and deficiencies in foreclosure processing can result in a number of harms to borrowers, ranging from inappropriate fees to untimely or wrongful foreclosure. Several federal regulators share responsibility for regulating the banking industry in relation to the origination and servicing of mortgage loans. OCC has authority to oversee nationally chartered banks and federal savings associations (including mortgage banking activities). The Federal Reserve oversees insured state-chartered banks that are members of the Federal Reserve System, bank and thrift holding companies, and entities that may be owned by federally regulated depository institution holding companies but are not federally insured depository institutions. The Federal Deposit Insurance Corporation (FDIC) oversees insured state-chartered banks that are not members of the Federal Reserve System and state-chartered savings associations. Finally, CFPB has the authority to regulate mortgage servicers with respect to federal consumer financial law. CFPB has entered into a memorandum of understanding with prudential regulators—specifically the Federal Reserve, FDIC, OCC, and the National Credit Union Administration—that governs their responsibilities to share information and coordinate supervisory activities so as to effectively and efficiently carry out their responsibilities, decrease the risk of conflicting supervisory directives, and increase the potential for alignment of related supervisory activities. OCC designates each national bank as a large, mid-size, or community bank. The designation is based on the institution's asset size and whether other special factors affect its risk profile, such as the extent of asset management operations, international activities, or high-risk products and services. Large banks are the largest and most complex national banks and are designated by the Senior Deputy Comptroller for Large Bank Supervision. Mid-size banks may be designated as large banks at the discretion of the Deputy Comptroller for Midsize and Credit Card Banks. The Federal Reserve assigns supervision of each institution it oversees to the responsible Federal Reserve Bank, which in turn assigns a central point of contact to each servicer. The contact leads an examination team with responsibility for continually monitoring activities, conducting discovery examinations designed to improve understanding of a particular business activity or control process, and testing whether a control process is appropriately designed and achieving its objectives. In September 2010, allegations surfaced that several servicers' documents in support of judicial foreclosure may have been inappropriately signed or notarized.
In response to this and other servicing issues, federal banking regulators—OCC, the Federal Reserve, the Office of Thrift Supervision, and FDIC—conducted a coordinated on-site review of 14 mortgage servicers to evaluate the adequacy of servicers' controls over foreclosure processes and to assess servicers' policies and procedures for compliance with applicable federal and state laws. Through this coordinated review, regulators found critical weaknesses in servicers' foreclosure governance processes; foreclosure documentation preparation processes; and oversight and monitoring of third-party vendors, including foreclosure attorneys. On the basis of their findings from the coordinated review, OCC, the Office of Thrift Supervision, and the Federal Reserve issued formal consent orders in April 2011 against 14 servicers under their supervision (see fig. 1). Subsequently, the Federal Reserve issued similar consent orders against two additional servicers. These consent orders were intended to ensure safe and sound mortgage-servicing and foreclosure-processing activities and help address weaknesses with mortgage servicing identified during the reviews. To comply with the consent orders, each of the 16 servicers is required to, among other things, enhance its vendor management, training programs and processes, and compliance with all applicable federal and state laws, rules, regulations, court orders, and servicing guidelines. In addition, as a result of the consent orders, the Federal Reserve issued civil money penalties against some of the servicers and provided that the penalty amounts could be remitted through payments made and borrower assistance provided under the National Mortgage Settlement or through funding provided to housing counseling organizations. OCC also considered civil money penalties against the servicers it regulates, and for four servicers that were also party to the National Mortgage Settlement, OCC reached an agreement that civil money penalties would be assessed if the servicer did not satisfy the requirements of the formal consent orders or their respective obligations under the National Mortgage Settlement. The consent orders also required each servicer to retain an independent consultant to review certain foreclosure actions on primary residences from January 1, 2009, to December 31, 2010, to identify borrowers who suffered financial injury as a result of errors, misrepresentations, or other deficiencies in foreclosure actions, and to recommend remediation for borrowers, as appropriate. In general, the consent orders identified seven areas for consultants to review:
1. whether the servicer had proper documentation of ownership of the promissory note and mortgage;
2. whether the foreclosure was in accordance with applicable state and federal laws;
3. whether a foreclosure sale occurred while a loan modification was under consideration or in effect;
4. whether nonjudicial foreclosures followed the terms of the loan and state law requirements;
5. whether fees charged to the borrower were permissible, reasonable, and customary;
6. whether loss-mitigation activities were handled in accordance with program requirements and policies; and
7. whether any errors, misrepresentations, or other deficiencies resulted in financial injury to the borrower.
To review these areas, consultants generally segmented their file review activities to test for each area of potential error separately.
As a result, a borrower's loan file might have undergone multiple reviews for different potential errors before the results of each of the review segments were compiled and the file review was considered complete. Loans were identified for review through a process by which eligible borrowers could request a review of their particular circumstances (referred to as the request-for-review process) and through a review of categories of files considered at high risk for errors (referred to as the look-back review). Regulators required servicers to establish an outreach process for eligible borrowers who believed they might have been harmed due to errors in the foreclosure process to request a review of their particular circumstances. Consultants were expected to review all of the loans received through the request-for-review process. For the look-back review, regulators required consultants to review 100 percent of all files in three categories—borrowers in bankruptcy for whom a completed foreclosure took place, loans potentially subject to the protections provided by the Servicemembers Civil Relief Act (SCRA), and agency-referred foreclosure cases—that were identified as at high risk for servicing or foreclosure-related errors during the regulators' 2010 coordinated reviews. Consultants for Federal Reserve-regulated servicers were also required to review 100 percent of files in two other categories determined to be high risk—borrowers with pending modification requests and borrowers current on a trial or permanent modification. In addition, as each servicer had a unique borrower population and servicing systems, consultants, with examination teams' input, were expected to identify various high-risk loan categories appropriate to their servicer—such as loans in certain states or loans associated with certain foreclosure law firms—that could be associated with a higher likelihood of servicing or foreclosure-related errors and review a sample of those loans. Beginning in January 2013, OCC and the Federal Reserve announced that they had reached agreements with 15 of the 16 servicing companies to terminate the foreclosure reviews and replace the reviews with a payment agreement (as previously shown in fig. 1). Under these agreements, servicers agreed to provide compensation totaling approximately $10 billion, including $4 billion in cash payments to eligible borrowers and $6 billion in foreclosure prevention actions. These amounts were generally divided among the 15 participating servicers according to the number of borrowers who were eligible for the foreclosure review at the time the amended orders were negotiated, such that the total per-servicer amount ranged from $16 million to $2.9 billion (see table 1). For the majority of servicers, the amended consent orders ended an approximately 20-month file review process. Although consultants were at various stages of completing the reviews when the work was discontinued, the amended consent orders provided that regulators retained the right to obtain and access all material, records, or information generated by the servicer or the consultant in connection with the file review process. The amended consent orders did not affect the other aspects of the original consent orders—such as required improvements to borrower communication, operation of management information systems, and management of third-party vendors for foreclosure-related functions—and work to oversee servicer compliance with these other aspects continues.
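The pro-rata division described above—each servicer's share of the roughly $3.9 billion cash payment determined by its proportion of the borrowers eligible for the foreclosure review—can be illustrated with a short calculation. The sketch below is a simplified, hypothetical example: the servicer labels and borrower counts are invented, and the actual per-servicer amounts were negotiated and ranged from $16 million to $2.9 billion.

```python
# Simplified, hypothetical illustration of the pro-rata division described
# above: each servicer's share of the total cash payment is proportional to
# its share of the borrowers who were eligible for the foreclosure review.
# The servicer labels and borrower counts are invented for the example.

TOTAL_CASH_PAYMENT = 3_900_000_000  # approximately $3.9 billion

eligible_borrowers = {
    "Servicer A": 2_400_000,
    "Servicer B": 1_100_000,
    "Servicer C": 600_000,
    "Servicer D": 300_000,
}

total_borrowers = sum(eligible_borrowers.values())  # 4.4 million in this example

for servicer, count in eligible_borrowers.items():
    share = count / total_borrowers
    payment = share * TOTAL_CASH_PAYMENT
    print(f"{servicer}: {share:6.1%} of eligible borrowers -> ${payment:,.0f}")
```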
According to regulatory staff and documents, the estimated time it would take for borrowers to receive remediation and the mounting costs of completing the file reviews motivated the decision to amend the consent orders. As of December 2012, OCC staff estimated that remediation payments to borrowers would not start for many months and that completing the file review process could take, at a minimum, an additional 1 to 2 years, based on the number of files still to be reviewed and the extent of the work to be completed. The mounting costs of the file reviews also motivated the decision to terminate the file reviews for most servicers. As of August 2012, the collective costs for the consultants had reached $1.7 billion, according to OCC's decision memorandum. Based on the results of the reviews conducted by consultants through December 2012, regulators estimated that borrower remediation amounts would likely be small while the consultant costs to complete the reviews would be significant. As a result, OCC and Federal Reserve staff determined that completing the reviews to determine precisely which borrowers had experienced compensable harm would have resulted in long delays in providing remediation payments to harmed borrowers. With the adoption of the amended consent orders, regulators and servicers moved away from identifying the types and extent of harm an individual borrower may have experienced and focused instead on issuing payments to all eligible borrowers based on identifiable characteristics. To determine the cash payment amount to be provided to each borrower, the majority of participating servicers categorized borrowers according to specific criteria. Fourteen of the servicers that participated in the amended consent order process, covering approximately 95 percent of the population of 4.4 million borrowers that were eligible for the foreclosure review process under the original consent orders, adopted this approach (see table 2). To categorize borrowers, regulators provided each servicer with a cash payment framework that included 11 categories of potential harms—including violation of SCRA protections and foreclosure on borrowers in bankruptcy—and generally ordered the categories by severity of potential harm. For each of the 11 categories in the cash payment framework, regulators identified specific borrower and loan characteristics that servicers then used to place all eligible borrowers into categories, such that a borrower would be placed in the highest category for which he or she had the required characteristics. Regulators used the results of this categorization process as the basis for determining the payment amounts for each category. The payment amounts for all eligible borrowers for those 14 servicers ranged from several hundred dollars for a servicer that did not engage the borrower in a loan modification to $125,000, plus equity and interest, for a servicer that foreclosed on a borrower who was eligible for SCRA protection. One other servicer signed an amended consent order to terminate the file review process and provide cash payments to borrowers. In contrast to the other servicers that signed amended consent orders, this servicer had completed its initial file review activities, and OCC used the preliminary file review results as the basis for determining payments to all eligible borrowers.
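The placement rule described above—each eligible borrower falls into the highest (most severe) category in the payment framework for which the servicer's data show the required characteristics—can be sketched as a simple waterfall. The categories and criteria below are invented for illustration; the actual framework had 11 regulator-defined categories with specific loan and borrower characteristics. The sketch also illustrates one way to treat missing data conservatively—not excluding a borrower from a category that the available data cannot rule out—consistent with the approach regulators took when servicer data were unreliable, as discussed later in this report.

```python
# Simplified, hypothetical sketch of the waterfall categorization described
# above. Categories and criteria are invented; the actual framework had 11
# regulator-defined categories.
from typing import Optional

# Criteria return True or False when the servicer's data are available, or
# None when the relevant field is missing or unreliable.
def scra_eligible(loan: dict) -> Optional[bool]:
    return loan.get("scra_protected")

def not_in_default_at_foreclosure(loan: dict) -> Optional[bool]:
    return loan.get("current_at_foreclosure")

def modification_request_denied(loan: dict) -> Optional[bool]:
    return loan.get("modification_denied")

# Ordered from most to least severe potential harm.
CATEGORIES = [
    ("SCRA-eligible foreclosure", scra_eligible),
    ("Foreclosed while not in default", not_in_default_at_foreclosure),
    ("Denied loan modification", modification_request_denied),
]
DEFAULT_CATEGORY = "All other eligible borrowers"

def categorize(loan: dict) -> str:
    """Place a borrower in the highest category the data cannot rule out."""
    for name, criterion in CATEGORIES:
        result = criterion(loan)
        if result is True or result is None:
            return name
    return DEFAULT_CATEGORY

loans = [
    {"scra_protected": False, "current_at_foreclosure": True},
    {"scra_protected": False, "current_at_foreclosure": False,
     "modification_denied": True},
    {"scra_protected": False, "current_at_foreclosure": False,
     "modification_denied": False},
    {"scra_protected": False},  # default status missing from the system
]
for loan in loans:
    print(categorize(loan))
```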
The amended consent orders also required all 15 servicers to undertake a specified dollar amount of foreclosure prevention actions and submit those actions for credit based on criteria established by regulators. For 13 of the servicers, these actions are to occur between January 2013 and January 2015. The amended orders provided two methods for servicers to receive credit for foreclosure prevention actions. First, servicers could conduct loss-mitigation activities for individual borrowers, by providing loan modifications or short sales, among other actions. Regulators also specified that the actions taken under this method could not be used to satisfy other similar requirements, such as the foreclosure prevention requirement of the National Mortgage Settlement (discussed later). Second, servicers could satisfy their obligation by making cash payments to approved housing counseling agencies, among other actions. One servicer, OneWest Bank, did not elect to amend its consent order and terminate the file review process. The consultant for this servicer continues file review activities for a portion of the eligible population of 192,000 borrowers, as planned. According to OCC, in 2014, the servicer will provide remediation to borrowers based on findings of actual harm. In addition to the consent orders issued by OCC, the Office of Thrift Supervision, and the Federal Reserve, mortgage servicers have been subject to other actions designed to improve the provision of mortgage servicing by setting servicing standards. In February 2012, the Departments of Justice, Treasury, and Housing and Urban Development, along with 49 state Attorneys General, reached a settlement with the country's five largest mortgage servicers. Under the settlement, the servicers are to provide approximately $25 billion in relief to distressed borrowers, and they agreed to a set of mortgage servicing standards. This settlement, known as the National Mortgage Settlement, established nationwide servicing reforms for the participating servicers, including establishing a single point of contact for borrowers, standards for communication with borrowers, and expectations for fee amounts and the execution of foreclosure documentation. The settlement also established an independent monitor to oversee the servicers' execution of the agreement, including their adherence to the mortgage servicing standards. CFPB also established new mortgage servicing rules that took effect in January 2014. Among other things, these rules established requirements for servicers' crediting of mortgage payments, resolution of borrower complaints, and actions servicers are required to take when borrowers are late in their mortgage payments. In addition to the National Mortgage Settlement, other recent settlements have required servicers to provide foreclosure relief to borrowers as a component of the agreement. In November 2013, the Department of Justice, along with state Attorneys General for four states, announced a settlement with JPMorgan Chase to provide $4 billion in foreclosure relief, among other actions, to remediate harms allegedly resulting from unlawful conduct. The settlement identified specific actions for which JPMorgan Chase would receive credit toward its obligation, including certain types of loan modification actions, lending to low- to moderate-income borrowers and borrowers in disaster areas, and activities to support antiblight programs.
Similarly, in December 2013, CFPB and 49 state Attorneys General and the District of Columbia announced a settlement with Ocwen Financial Corporation to provide $2 billion in relief to homeowners at risk of foreclosure by reducing the principal on their loans. Both settlements also assign an independent monitor to oversee the execution of the settlements, and the settlement with Ocwen requires the servicer to comply with the standards for servicing loans established in the National Mortgage Settlement. Regulators considered factors such as projected costs and potential remediation amounts associated with the file reviews to negotiate the $3.9 billion total cash payment under the amended consent orders. However, because the reviews were incomplete, these data were limited. According to Federal Reserve staff, OCC led the data analysis to inform negotiations, and the Federal Reserve relied on aspects of this work. Despite the uncertainty regarding the remaining costs and actual financial harm experienced by borrowers, regulators did not test the major assumptions used to inform negotiations. According to our prior work, testing major assumptions provides decision makers a range of best- and worst-case scenarios to consider and provides information to assess whether an estimate is reasonable. We compared the final negotiated cash payment amount to estimates we obtained by varying the key assumptions used in regulators’ analysis. Our analysis found that the final negotiated amount was generally within the range of different results based on alternative assumptions. Regulators established goals related to timeliness, the cash payment amounts, and the consistency of the treatment of borrowers and the distribution of payments. Regulators met their timeliness and amount goals and took steps to promote a consistent process, including providing guidance to examination teams and servicers. The cash payment agreement obligations under the amended consent orders were achieved through negotiations between regulators and participating servicers. According to OCC, staff engaged with six servicers in November 2012 to discuss a cash payment agreement. As previously discussed, the estimated time it would take for borrowers to receive remediation and mounting costs of completing the reviews motivated the cash payment agreement under the amended consent orders. Following initial discussions with these six servicers, regulators engaged in similar discussions with an additional eight servicers subject to the foreclosure review requirement, according to regulatory staff. The total negotiated cash payment amount for all 15 servicers that ultimately participated in amended consent orders was approximately $3.9 billion. Generally, each servicer’s share of the cash payment amount was determined based on its proportional share of the 4.4 million borrowers who were eligible for the foreclosure review. Regulators considered factors such as projected costs to complete file reviews and potential remediation amounts associated with the file reviews to inform negotiations with servicers. According to Federal Reserve staff, OCC led negotiations with servicers and the initial analysis of estimates that informed these negotiations. According to Federal Reserve staff, they participated in negotiations and relied on certain elements of OCC’s analysis to inform the Federal Reserve’s decisions regarding a payment agreement for the institutions they oversee. 
To inform negotiations with servicers, OCC developed two estimates of servicers' costs: an estimate of the projected cost to complete the reviews and an estimate of the potential remediation payout to borrowers. Specifically, OCC staff said they used the cost estimate as a means of estimating what servicers might be willing to pay and the potential remediation payout as an early attempt to estimate potential harm and understand how funds would be distributed among borrowers. The final amount of $3.9 billion was negotiated between regulators and servicers and was higher than the estimates regulators used to inform negotiations. Projected cost to complete the reviews. According to regulatory staff and documents, OCC and the Federal Reserve relied on cost projections from consultants, which estimated that the remaining expected fees for consultants to complete the reviews would be at least $2 billion. In November 2012, consultants reported cost projections based on time frames ranging from as short as 4 months for one servicer to as long as 13 months for other servicers—that is, 4 to 13 months beyond November 2012—to complete reviews. Regulatory staff told us they also considered the amounts servicers had reserved to pay for potential remediation. Specifically, OCC included an estimate of the amount servicers had reserved to pay for potential remediation ($859 million), bringing the total estimated cost to complete the reviews had they not been terminated to approximately $2.9 billion ($2 billion to complete the reviews plus the $859 million in remediation reserves). According to regulatory staff and documents, the Federal Reserve relied on projected costs and remediation reserves provided by OCC to inform its decisions during negotiations. Potential remediation payout to borrowers. Using the aggregate financial harm error rate of 6.5 percent as of December 2012—that is, the financial harm error rate for all completed files among all servicers—OCC estimated the potential remediation payout to borrowers from the reviews would be $1.2 billion, according to OCC documents. In this analysis, regulators used amounts listed in the foreclosure review remediation framework and added an additional $1,000 per borrower for borrowers who submitted a request-for-review and were in the process of foreclosure. For borrowers who submitted a request-for-review and had a completed foreclosure, OCC added an additional $2,000 per borrower. In addition, OCC staff told us they estimated the distribution of borrowers among the payment categories by extrapolating the results of one servicer's initial categorization to all servicers. Specifically, they used one servicer's preliminary distribution of borrowers to estimate the proportion of borrowers in each category. According to OCC staff and documents, they then applied these proportions to the borrower populations for other servicers and applied the 6.5 percent financial harm error rate to each category. According to OCC staff, they used the distribution of one servicer's population because it provided retail servicing nationwide. OCC staff stated that they analyzed the distribution of borrowers for two additional servicers and reached similar results. Federal Reserve staff told us they did not rely on OCC's financial harm error rate analysis to inform their decisions during negotiations; rather, as stated previously, they relied on cost projections and remediation reserves to inform their decisions during negotiations.
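The estimation approach just described—extrapolating one servicer's preliminary category distribution to the full eligible population, applying the 6.5 percent aggregate financial harm error rate, valuing harmed borrowers at the remediation framework amounts, and adding $1,000 or $2,000 for request-for-review borrowers—can be sketched as follows. The category shares, remediation amounts, and request-for-review counts below are invented for illustration; they are not the actual framework values or OCC's inputs, so the result does not reproduce OCC's $1.2 billion estimate.

```python
# Simplified, hypothetical sketch of the estimation approach described above.
# Category shares, remediation amounts, and request-for-review counts are
# invented for illustration and do not reproduce OCC's actual inputs.

ELIGIBLE_POPULATION = 4_400_000        # borrowers eligible for the review
FINANCIAL_HARM_ERROR_RATE = 0.065      # aggregate error rate as of Dec. 2012

# (assumed share of the population, assumed remediation amount per harmed borrower)
categories = {
    "SCRA-eligible foreclosure": (0.001, 125_000),
    "Foreclosed while not in default": (0.004, 125_000),
    "Modification-related error": (0.300, 1_000),
    "Other potential servicing error": (0.695, 500),
}

estimated_payout = 0.0
for share, amount in categories.values():
    harmed_borrowers = ELIGIBLE_POPULATION * share * FINANCIAL_HARM_ERROR_RATE
    estimated_payout += harmed_borrowers * amount

# Add-ons for borrowers who submitted a request-for-review: $1,000 if the
# foreclosure was in process and $2,000 if it was completed. The counts are
# assumed, and the sketch applies the add-ons to all such borrowers.
requests_in_process = 250_000
requests_completed = 150_000
estimated_payout += requests_in_process * 1_000 + requests_completed * 2_000

print(f"Estimated remediation payout: ${estimated_payout:,.0f}")
```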
The data that were available to regulators to inform negotiations for the cash payment amount were limited. Because the reviews were incomplete in November 2012 when negotiations began, data were limited due to uncertainty about the (1) costs associated with completing the reviews and (2) error rate for the entire population of 4.4 million borrowers eligible for review. First, given the incomplete state of the reviews in November 2012 when negotiations began, regulators had limited information about costs associated with completing the reviews. For example, cost projections available to regulators prior to the negotiations did not account for additional requests-for-review submitted in December 2012. The period for eligible borrowers to submit requests-for-review did not expire until December 31, 2012—after negotiations between regulators and servicers began. Between November 29, 2012, and December 27, 2012, the number of requests-for-review increased by more than 135,000 requests (44 percent). In addition, for most consultants, the cost projections did not account for the planned second phase of reviews, known as deeper dives, in which consultants would have conducted additional reviews based on errors identified in the first phase of reviews. Among the servicers that participated in the payment agreements, all consultants we spoke with anticipated that they would conduct deeper dives. In its decision memorandum for the amended consent orders, OCC estimated an additional 1 to 2 years to complete the reviews. OCC staff stated that, based on the scope and complexity of the remaining reviews, they believed the reviews would have taken longer than consultants projected in November 2012. Second, the incomplete nature of the reviews in December 2012 limited the extent to which regulators could estimate the financial harm error rate and potential remediation. The remediation reserves established by some servicers were based on the reviews that consultants had conducted thus far. Similarly, the extent to which OCC could use the preliminary error rate of 6.5 percent for the completed reviews to reliably estimate the prevalence of harm in the population and potential remediation was limited. According to data provided to regulators, third-party consultants of servicers that had agreed to the payment agreement in January 2013 had completed final reviews for approximately 14 percent of the files slated for review, and none of the consultants had completed their sampled file reviews, making it difficult for OCC to reliably estimate the prevalence of harm or potential remediation payout for the entire 4.4 million borrowers eligible for the reviews. In addition, reports provided to regulators by consultants of the servicers who agreed to the payment agreement in January 2013 showed variation in progress and financial harm error rates across servicers (see table 3). For example, servicer "K" reported over 90 percent of the sampled file reviews complete for foreclosures in progress and foreclosures complete, with error rates of about 26.7 percent and 15.6 percent, respectively. In contrast, servicer "A" reported it had not completed any final reviews. Further, the segments and types of reviews that were completed varied among consultants. For example, one consultant told us they prioritized sampled files for review over requested file reviews, while another consultant told us they focused on completing requested reviews.
Another consultant stated they prioritized requested reviews and pending foreclosures. The final negotiated cash payment amount of $3.9 billion exceeded the two separate cost estimates of $2.9 billion and $1.2 billion that OCC generated to inform negotiations. However, OCC performed only limited analyses. For example, OCC did not vary key assumptions about costs and error rates used in its estimates, which would have been appropriate given the limitations of the available data. The Federal Reserve did not conduct any additional analyses to inform negotiations but relied, in part, on data and analysis provided by OCC pertaining to projected costs and remediation reserves to inform its decisions regarding the payment agreement. As part of our review, we conducted a sensitivity analysis to test changes to major assumptions associated with the data regulators used to inform negotiations. Specifically, we tested assumptions related to projected costs, error rate, and borrower categorization. Further, to assess the reasonableness of the final negotiated amount, we used the results of our sensitivity analysis to compare the final negotiated cash payment amount to the amounts calculated when we varied key assumptions. We found that the final negotiated amount of $3.9 billion was generally more than amounts suggested under various scenarios we analyzed. (See app. I for more detail on this analysis.) Projected costs. In its analysis using consultants' reported projected costs, OCC estimated that the cost to complete the reviews would have been $2.9 billion. However, as we noted earlier, cost projections were limited and did not take into account the additional requests-for-review submitted by borrowers in December 2012 or the time associated with anticipated deeper dives. We calculated monthly costs using consultants' reports that were available from September 2012 through December 2012 and estimated the projected total cost to complete reviews under several alternative scenarios. Our analysis showed that the total costs could have been either higher or lower than the estimates OCC used in its analysis, depending on how long the reviews would have taken if they had continued. For example, we estimated that if the reviews had taken an additional 13 months to complete (the longest projected time reported by consultants in November 2012), the cost would have been nearly $2.5 billion—about $460 million (23 percent) more than the regulators' estimate of $2 billion. Conversely, if the reviews had taken less time to complete than the consultants projected, regulators' analyses may have overestimated costs. We then added OCC's remediation reserve estimate of $859 million to our cost estimates. Including the remediation reserves, our estimate for projected costs based on 13 additional months of review was $3.3 billion (see fig. 2). Both our estimated amount at 13 months and OCC's estimate of $2.9 billion are less than the actual final negotiated amount of $3.9 billion. Because OCC stated that the reviews could take up to an additional 2 years, we included an additional 24 months in our analyses, which resulted in an estimate of $4.6 billion. OCC staff stated that, based on the experience of the servicer that continued with the reviews and had a relatively small number of borrowers eligible for review, an additional 2 years or more to complete the reviews was a likely scenario for other servicers had they not participated in the amended consent orders. Financial harm error rate.
As an alternative measure, OCC estimated remediation payouts based on a preliminary financial harm error rate of 6.5 percent for file reviews completed as of December 2012 across all servicers. On the basis of that analysis, OCC estimated that remediation payouts from the file reviews could be $1.2 billion. However, as discussed above, the progress and findings of errors and financial harm among servicers varied significantly. We analyzed the projected remediation payments using the lowest, median, and highest preliminary error rates for the 13 servicers that participated in the payment agreement in January 2013. Our analysis generated a range of estimated remediation payouts between 71 percent below and almost 206 percent above the amount generated by OCC’s analysis using the average error rate of 6.5 percent (see fig. 3). However, the final, negotiated cash payment of $3.9 billion was higher than the payment of $3.7 billion that we calculated at the highest reported servicer error rate. Borrower categorization. As stated previously, OCC estimated the distribution of borrowers among the payment categories in its error rate analysis by extrapolating the results of one servicer’s initial borrower categorization to all servicers. OCC and the Federal Reserve told us that each servicer’s borrower population was unique. As such, different servicers could have different borrower distributions among the payment categories. We analyzed the distribution of borrowers for the other five servicers involved in initial amended consent order negotiations based on preliminary data servicers provided to regulators. Our analysis showed that the final, negotiated cash payment of $3.9 billion was higher than the estimates that would have resulted from using any of the other five servicers’ borrower distributions (see fig. 4). Prior to agreeing on a final cash payment amount, both the Federal Reserve and OCC conducted additional analyses to corroborate that the negotiated cash payment amount was acceptable. For example, the Federal Reserve estimated payment amounts to borrowers by category under the tentative agreement to confirm that the negotiated amount would not result in trivial payments to borrowers. This analysis showed that a $3.8 billion total cash payment would provide payments to borrowers in each category ranging from several hundred dollars up to $125,000. Therefore, after considering these cost estimates as well as the timelines for project completion, the Federal Reserve determined that the negotiated amount was acceptable because it exceeded the combined expected fees and remediation reserve estimates of completing the reviews and would allow for nontrivial payment amounts to borrowers in each category. OCC staff stated they conducted similar, informal analyses of the tentative settlement agreement. Specifically, OCC staff stated they considered the error rate for proposed cash payment amounts during negotiation. For example, staff estimated that the actual error rate from completed reviews would have had to exceed nearly 26 percent before remediation payments under the reviews would exceed the negotiated cash payment amount. Therefore, according to this analysis, OCC determined that the negotiated amount was acceptable. Staff also stated they believed the negotiated amount would be more than sufficient to cover the total amount servicers would have paid to harmed borrowers under the foreclosure review. 
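The sensitivity calculations described in this section—recomputing projected costs at alternative review durations, scaling the estimated remediation payout at alternative error rates, and deriving a break-even error rate against the negotiated amount—can be illustrated with the simple sketch below. The monthly consultant run rate and the alternative error rates are assumptions chosen for illustration (the run rate is set so that 13 additional months yields roughly the $2.5 billion in consultant costs discussed above), and the break-even figure produced by this linear scaling is lower than OCC's reported estimate of nearly 26 percent, which reflected detail not captured in the simplification.

```python
# Illustrative sketch of the sensitivity calculations described above, using
# figures reported in this section plus clearly labeled assumptions.

NEGOTIATED_PAYMENT = 3.9e9        # final negotiated cash payment
REMEDIATION_RESERVES = 859e6      # OCC's estimate of servicers' reserves
ASSUMED_MONTHLY_RUN_RATE = 190e6  # assumed consultant cost per month

# Projected cost to complete the reviews at alternative durations
# (consultants' November 2012 projections ranged from 4 to 13 months).
for months in (4, 13):
    projected = months * ASSUMED_MONTHLY_RUN_RATE + REMEDIATION_RESERVES
    print(f"{months:2d} additional months -> about ${projected / 1e9:.1f} billion")

# Remediation payout at alternative financial harm error rates, scaling
# OCC's $1.2 billion estimate (based on a 6.5 percent aggregate rate)
# linearly with the assumed rate.
OCC_PAYOUT_ESTIMATE = 1.2e9
OCC_ERROR_RATE = 0.065
for rate in (0.019, 0.065, 0.200):  # assumed low, aggregate, and high rates
    payout = OCC_PAYOUT_ESTIMATE * rate / OCC_ERROR_RATE
    print(f"error rate {rate:.1%} -> payout about ${payout / 1e9:.2f} billion")

# Break-even error rate at which the scaled payout equals the negotiated
# payment; this simple linear scaling yields a lower figure than OCC's
# reported estimate of nearly 26 percent.
break_even = OCC_ERROR_RATE * NEGOTIATED_PAYMENT / OCC_PAYOUT_ESTIMATE
print(f"break-even error rate under linear scaling: about {break_even:.0%}")
```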
Regulators stated that both the limited nature of the information available during the negotiation and the process for determining the amounts paid by servicers under the amended consent orders were not typical. According to Federal Reserve staff, in a typical process, they would conduct investigations to determine actual harm and perform analyses to determine compensation amounts. For example, for a recent enforcement order against a subprime mortgage lender, which involved a much smaller population of potentially harmed borrowers than the foreclosure review, the Federal Reserve required the servicer to analyze individual files to determine the specific amount of harm. OCC staff stated that because the negotiated payment agreement involved the discontinuation of the reviews required by the original consent orders, they did not have data that would otherwise typically have been available. Both OCC and Federal Reserve staff told us there are no prior enforcement actions that are comparable to the payment agreement under the amended consent orders. OCC staff stated that the amended consent orders are atypical in terms of the number of borrowers eligible for reviews (over 4 million), the number of projected file reviews (over 739,000), and the extensive nature of each review. In addition, regulators stated that, given the limited progress of the file reviews, they did not believe extensive analysis was possible. While regulators did have more analytical methods available to them, we recognize that they had limited data available. Generally, regulators set three goals for the process of categorizing and distributing cash payments to borrowers: 1. provide compensation to a large number of borrowers before 2014, 2. provide cash payments to borrowers of between several hundred dollars and $125,000, and 3. reduce the possibility of inconsistent treatment of borrowers among servicers, when compared with the file review results. Regulators took steps to meet their goal for the timeliness of distribution of cash payments to a large number of borrowers. As of December 2013, checks had been distributed to approximately 4 million borrowers covered by the 13 servicers that were part of the January 2013 amended consent order announcements. As shown in figure 5, California and Florida were the states with the largest number of checks issued as well as the largest total amount paid to borrowers. Specifically, borrowers in California and Florida received about 32 percent of the total issued checks (1.3 million checks collectively worth approximately $1.2 billion). In addition, borrowers in seven states (Arizona, Georgia, Illinois, Michigan, Nevada, Ohio, and Texas) received checks worth a total of between $100 million and $200 million per state. Although the checks were sent to the mailing address of the borrower rather than the address of the affected property, according to our analysis of Mortgage Bankers Association data, these states correspond to some of the states with the highest foreclosure inventories in 2009 and 2010. In comparison, borrowers in five states and the District of Columbia received checks worth a total of less than $5 million per state (Alaska, District of Columbia, North Dakota, South Dakota, Vermont, and Wyoming).
To facilitate meeting the goal of a timely borrower categorization process, regulators defined specific loan and borrower characteristics—such as extent of delinquency, forbearance or repayment plan start date, foreclosure sale date, or bankruptcy filing date—for each cash payment category in advance. They expected servicers to use these characteristics to categorize borrowers based on the data in servicers' computer systems—review of files by hand to make a judgment about a borrower's category was generally not permitted. Regulators also expected servicers to conduct an internal review of their categorization results—for example, several servicers engaged their internal audit departments, which are separate from the servicers' mortgage servicing operations, to conduct a preliminary validation of the results to identify problems or weaknesses with categorization activities. According to several examination staff we spoke with, they met regularly with the staff responsible for internal reviews to discuss their approach and review their results. This step contributed to a more timely verification process by the examination teams, as they were already familiar with the servicer's internal review procedures and results. Finally, regulators asked servicers to select one third-party payment administrator to facilitate issuance of checks. According to OCC staff, regulators worked closely with this payment administrator concurrently with the categorization process to define the work processes for check distribution to help facilitate a timely distribution of checks to borrowers once the categorization process was complete. The cash payment categorization process was largely completed by April 2013 for the 13 servicers, and the payment administrator began issuing checks to each of the approximately 4.2 million eligible borrowers serviced by the 13 servicers that were part of the January 2013 amended consent order announcements. As figure 6 shows, the payment administrator issued approximately 89 percent of checks to borrowers in April 2013, with the majority of the remaining checks issued by July 2013. As of early January 2014, approximately 193 payments remained to be issued. The payment administrator had not issued these checks because of borrower-specific challenges, including problems with the borrower's taxpayer identification number or the need to issue multiple checks for the same loan. The payment administrator issued approximately 96,000 checks for amounts that were less than the borrower should have received. Supplementary checks worth about $45 million were issued to the affected borrowers in May 2013. As of the beginning of January 2014, approximately 81 percent of the issued checks had been cashed. According to OCC staff, to help promote check cashing, regulators instructed the payment administrator to conduct additional research on a borrower's address and reissue checks to borrowers whose initial checks had expired and had not been cashed, to try to increase the check-cashing rate. Under the cash payment process, borrowers generally received cash payments of between $300 and $125,000, in line with regulators' goal of providing those amounts to borrowers. In general, the amounts paid to borrowers in the same category varied depending on whether the borrower had submitted a request-for-review—those borrowers received a higher payment amount than other borrowers—and whether the foreclosure was in process, had been rescinded, or was complete as of December 31, 2011.
In addition, those borrowers serviced by two servicers that signed the original consent orders after April 2011 and therefore had not participated in the request-for-review process were generally paid at the same level or at a higher level—24 percent to 30 percent more—than a borrower who did not submit a request-for-review. As seen in figure 7, the largest number of borrowers (1.2 million borrowers, or 29 percent of the eligible population) were placed in the category for approved modification requests, which provided payments of between $300 and $500, depending on whether the borrower was considered to have submitted a request-for-review. About 1,200 borrowers were paid at the maximum rate of $125,000, including approximately 1,100 SCRA-eligible borrowers. Approximately 11 percent (439,000 borrowers) were paid an additional amount designated for borrowers who had submitted requests-for-review. Although regulators met their cash payment amount goal, they recognized that some borrowers might have received more or less than they would have through the foreclosure review process. According to regulators, as part of their process to determine the cash payment amounts to be paid to borrowers in each category, they considered the amount that borrowers would have been paid for errors in that category under the file review process, among other considerations. Under the cash payment framework, borrowers in the highest paid categories—SCRA-eligible borrowers and borrowers foreclosed upon who were not in default—received the same amounts as they would have under the file review process. For the other categories, the final cash payment amounts were generally less than the amounts that would have been paid for an error in that category under the file review process for borrowers who did not submit a request-for-review. According to regulators, they decided to pay higher amounts to borrowers who submitted requests-for-review—generally double the amounts paid to borrowers who did not submit requests-for-review—because they felt that those borrowers had an expectation of receiving a file review and should be compensated for that expectation. According to regulators, in adopting the cash payment process, they recognized that some borrowers would fare better or worse than they might have under the file review process. For example, some borrowers who might not have received remediation under the file review process, either because a file review did not identify harm or because the file was not reviewed, would receive a cash payment. However, regulators said the converse was also true; that is, borrowers who might have been found through the file review to have been harmed, and therefore eligible for remediation, could potentially receive a lower amount through the cash payment process. OCC and Federal Reserve staff also stated that under the amended consent orders, borrowers were not required to waive or release any rights or claims against the servicer to receive a cash payment. According to regulators, in recognition of challenges in achieving consistent results among servicers during the file review process, they took steps to promote a consistent approach to the cash payment categorization process—one of their goals—such that similarly situated borrowers would have similar results. For example, regulators held weekly meetings with OCC and Federal Reserve examination team staff as well as with servicers to discuss the categorization process.
In addition, they provided guidance to examination teams and servicers for the categorization process, including examination teams' oversight activities. According to examination teams, the guidance provided was timely, and given the limited time to complete the categorization process, they generally worked closely with the servicer to ensure any resulting changes were incorporated. OCC headquarters staff also conducted on-site visits to each servicer and examination team to review the categorization process and activities. According to OCC staff, these on-site visits allowed for a comparison of servicers' categorization processes and the oversight processes used by the examination teams to help ensure these activities were done according to the guidance and, as a result, would be largely consistent. Similarly, the Federal Reserve examination teams and Federal Reserve Board staff met in person to discuss the categorization process and oversight activities as part of their efforts to promote consistent results. Finally, according to a few servicers we spoke with, some servicers met early in the process, with regulators' input, to discuss the regulators' categorization guidance and mentored other servicers as they conducted their initial categorization activities, to help ensure a shared interpretation of the guidance among servicers.

However, there were some differences in the categorization results for borrowers among servicers as a result of flexibilities in the categorization process, as well as limitations with some servicers' data systems. For example, servicers were given the option of retaining the third-party consultant hired to work on the foreclosure reviews to complete file reviews for borrowers who were categorized into the first two categories—SCRA-eligible borrowers and borrowers not in default at the time of foreclosure—rather than relying on the loan and borrower characteristics regulators specified for those categories. Based on the file review results, servicers were required to provide remediation to borrowers whom the file reviews determined had been harmed and re-categorize the remaining borrowers into the next highest payment category for which they qualified according to other loan and borrower characteristics. Based on our review of regulators' documents, 12 of the 13 servicers used this option and directed consultants to complete file reviews for borrowers who were placed in some of these categories. According to OCC staff and one servicer we spoke with, some consultants had already completed or were near completion of the file reviews for SCRA-eligible borrowers. Similarly, missing or unreliable data in servicers' systems resulted in some servicers being unable to categorize borrowers according to the cash payment framework criteria and instead placing borrowers in the highest category for which they had data. According to our review of examination teams' conclusion memorandums and interviews with examination teams, at least 5 of the 13 servicers were unable to place some borrowers into the most appropriate category of the framework because servicers' systems did not have the data necessary to categorize borrowers according to the loan and borrower characteristics provided by regulators. For the majority of these servicers, the percentage of affected borrowers was relatively small.
For example, in one case data limitations affected roughly 4 percent of borrowers at that servicer, whereas in another case, they affected approximately 8 percent of that servicer's borrowers. However, for one servicer, data limitations were extensive enough that regulators required the servicer to stop the categorization process for approximately 74 percent of eligible borrowers and categorize those borrowers into higher categories than their characteristics might have indicated if data had been available in the servicer's system. According to regulators, they mitigated the impact of these limitations on individual borrowers by instructing servicers to place borrowers in the highest possible category from which they could not be excluded due to missing or unreliable data. Figure 8 illustrates an example of how the same borrower might have had different results depending on the servicer. Placing borrowers in higher categories when data were unavailable potentially had a distributional impact on other borrowers. Where there is a set sum of money, as in this case, placing more borrowers than anticipated in higher categories could result in either (1) lower payment amounts per borrower in those categories or (2) lower-than-anticipated amounts for borrowers in lower categories. According to Federal Reserve staff, the relatively small number of borrowers affected by these changes meant that the distributional impact was minimal.

Regulators did not establish specific objectives for the $6 billion obligation they negotiated with servicers to provide foreclosure prevention actions. However, they communicated the expectation that the actions be meaningful, and they set forth broad principles for servicers' entire portfolio of foreclosure prevention actions. In negotiating the amount and determining the design of the foreclosure prevention component of the amended orders, regulators did not follow their typical practices for informing supervisory actions, which include analysis of relevant information. For example, analysis of the volume of servicers' recent foreclosure prevention actions might have helped regulators assess the sufficiency and feasibility of the required obligation, among other things. According to most servicers we spoke with, they would be able to meet the required volume of activities using their existing foreclosure prevention activities. Regulators did collect data to inform oversight of servicers' financial obligations, and OCC and the Federal Reserve are requiring examination teams to oversee servicers' policies and monitoring controls related to the principles. However, according to Federal Reserve staff, most of the Federal Reserve examination teams have not conducted their oversight activities related to the foreclosure prevention principles, and regulators' guidance for oversight of the principles does not identify actions examination teams should take to evaluate or test implementation of these principles. According to regulators' supervisory guidance as well as federal internal control standards, establishing specific monitoring activities, including testing, is important to effective supervision. In the absence of such monitoring activities, regulators may not know if a key element of the amended consent orders is being realized.

The $6 billion foreclosure prevention action obligation amount was negotiated by regulators and servicers and was not framed by specific objectives or informed by any data or analysis.
According to OCC's and the Federal Reserve's supervisory manuals, enforcement actions, including consent orders, are used to address specific problems, concerns, violations of laws or agreements, and unsafe or unsound practices, among other things, that are identified through supervisory examinations. Further, federal internal control standards highlight the importance of establishing clear objectives for activities undertaken by agencies as a means of ensuring that agency outcomes are achieved. The foreclosure prevention component of the amended consent orders, however, was not intended to address specific problems, violations, or unsafe or unsound practices. According to the Federal Reserve, the $6 billion in required foreclosure prevention actions represents additional remediation, above and beyond the $3.9 billion cash payment required of servicers in lieu of finishing the reviews. OCC staff stated that the foreclosure prevention component of the amended consent orders mirrored the requirement that servicers provide loss mitigation options to harmed borrowers under the file review process. Although regulators negotiated the foreclosure prevention action obligations in the amendment that terminated the foreclosure review for most servicers, the foreclosure prevention obligations were not related to preliminary findings from the reviews. In addition, the actions were not specifically intended to assist only borrowers who were eligible for the reviews; servicers can count foreclosure prevention actions performed to assist any borrower in their portfolio toward their obligation under the amended consent order, provided the action meets the criteria in the orders. The amended consent orders, however, directed servicers to attempt to prioritize these borrowers for assistance to the extent practicable.

Regulators stated that they included the foreclosure prevention component in the amended consent orders because the National Mortgage Settlement had a similar component. In public remarks, the Comptroller of the Currency stated that the foreclosure prevention component in the amended consent orders was intended to convey to servicers the importance of foreclosure prevention activities (Thomas J. Curry, Comptroller of the Currency, remarks before Women in Housing and Finance, Washington, D.C., Feb. 13, 2013). The amended consent orders also set forth broad principles for servicers' foreclosure prevention actions, such as emphasizing actions that keep borrowers in their homes and ensuring that foreclosure prevention actions are nondiscriminatory, such that actions do not disfavor a specific geography, low- or middle-income borrowers, or a protected class. According to regulators, these principles were to be applied to servicers' broad portfolio of foreclosure prevention activities (not just those undertaken as part of the $6 billion obligation under the amended consent orders).

Although regulators stated they considered other similar settlements, they did not collect or analyze relevant data to inform the amount or structure of the foreclosure prevention component of the amended consent orders. According to regulators' supervisory manuals, regulators typically analyze information to inform enforcement actions. Despite the absence of identified problems and specific objectives to guide the analysis, a variety of data were available to regulators that could potentially have informed negotiations. In addition, while it is typical for regulators and their supervised institutions to negotiate consent orders, regulators stated that the negotiations for the amended consent orders did not follow the typical enforcement action process.
According to OCC staff, the decision to significantly amend the consent orders by replacing the foreclosure review with a cash payment agreement and a foreclosure prevention component was unprecedented. We recognize the atypical nature of the negotiations and regulators' desire to distribute timely payments to eligible borrowers. However, we believe some data collection and analysis would have been feasible and useful to inform the amount and structure of the foreclosure prevention component. Regulators, in particular OCC, had access to loan-level data about some servicers' foreclosure prevention actions through the data they collect from servicers for the quarterly OCC Mortgage Metrics reports and the data servicers report to Treasury's Making Home Affordable program, which includes Treasury's Home Affordable Modification Program (HAMP); regulators could have used these data to inform negotiations. Other useful data were available from servicers. The following are examples of types of analyses that could have been useful to inform such negotiations.

Analysis of the value of various types of foreclosure prevention actions undertaken by servicers. Analysis of the value of various foreclosure prevention actions undertaken by servicers may have provided information for regulators to consider in assessing the sufficiency of the negotiated amount to provide meaningful relief to borrowers. For example, data on servicers' recent volume of foreclosure prevention actions, measured by the unpaid principal balance of loans at the time these actions were taken, as well as an average or range of unpaid principal balances for various types of actions undertaken by servicers, may have provided a basis for gauging the number of borrowers who might be helped with various amounts of foreclosure prevention obligations under the amended consent orders. Our analysis of HAMP data shows that the average unpaid principal balance for loans that received a modification through HAMP was approximately $235,000. As such, in a hypothetical scenario in which a servicer was obligated to provide $100 million in foreclosure prevention actions and reached the obligation by providing only loan modifications, it could be estimated that about 425 borrowers would be assisted by the obligation, as measured by the unpaid principal balance of the loans.

Analysis of the volume of servicers' typical foreclosure prevention actions. Analysis of the volume of servicers' typical foreclosure prevention actions might have provided insight into the potential impact, if any, of the foreclosure prevention actions and informed the feasibility of the negotiated amounts—that is, the extent to which servicers could reach the required amounts within the 2-year period using their existing programs. Four of the seven servicers we interviewed that participated in amended consent orders indicated that they anticipated they would be able to meet the required volume of activity using their existing foreclosure prevention activities. Of these four servicers, two indicated they could achieve the required volume of foreclosure prevention actions within the first year, and one servicer indicated it would be easy to meet the requirement given that it regularly provides much larger amounts of foreclosure prevention assistance than its negotiated obligation. One servicer that we did not interview reported large volumes of activities using its existing programs and policies during the first 6 months of the eligible period.
Specifically, between January and June 2013, the servicer reported short sale activities that were approximately 87 percent of the required obligation. During this same period, the servicer reported it had also undertaken loan modification activities that were valued at about 7 times more than its total required foreclosure prevention obligation. In contrast, officials from one servicer we interviewed stated they opted to make payments to housing counseling agencies to fulfill the amended consent order requirement because they determined they would not be able to meet the obligation with their existing portfolio, since the loans in the portfolio were not highly delinquent.

Analysis of alternative crediting approaches. Analysis of the results of alternative crediting approaches may have provided insight into the sufficiency of the negotiated amount—that is, the extent to which the required obligations would reach an appropriate number of borrowers as determined by regulators. The amended consent orders provide credit based on the unpaid principal balance of the loan. On the basis of this methodology, a loan with an unpaid principal balance of approximately $235,000, for example, would result in a credit of approximately $235,000 toward the servicer's obligation, regardless of the action taken. However, alternative crediting structures exist. For example, the National Mortgage Settlement, which includes a similar foreclosure prevention component, uses an alternative approach that generally provides credit based on the amount of the principal forgiven or assistance provided. Using this methodology, for a loan modification with the same unpaid principal balance of approximately $235,000, where the principal forgiven was 29 percent of that balance (the average amount of principal forgiveness for first-lien HAMP loan modifications), a servicer would receive a credit toward its obligation of $68,855. Thus, in a hypothetical scenario in which a servicer was required to provide $100 million in foreclosure prevention actions and met the obligation by using only principal forgiveness, our analysis estimated 425 borrowers would receive assistance under the amended consent orders compared to about 1,452 borrowers under the National Mortgage Settlement.

Further, analysis of the mix of servicers' typical activities might have provided baseline information for regulators to consider in assessing whether creating incentives for certain actions by crediting them differently might be warranted to help achieve the stated expectation of keeping borrowers in their homes. Under the amended consent orders, the methodology for determining credit for foreclosure prevention actions is the same for all actions, regardless of the type of action or characteristics of the loan. However, some actions are designed to keep borrowers in their homes (loan modifications, for example), while other actions are designed to help avoid foreclosure but result in borrowers losing their homes (e.g., short sales or deeds-in-lieu). In contrast to the amended consent orders, the National Mortgage Settlement provides varying amounts of credit depending on the type of action and certain loan characteristics. Under the National Mortgage Settlement approach, a loan modification, for example, would be credited at a higher ratio than a short sale. Regulators stated they considered the National Mortgage Settlement structure in defining the types of creditable activities under the amended consent orders and the methodology for determining how the activities would be credited toward each servicer's obligation.
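The difference between the two crediting approaches described above can be expressed as a simple calculation. The following sketch is illustrative only; it uses the approximate figures cited in our analysis (an average unpaid principal balance of about $235,000 and a credit of $68,855 when only the principal forgiven is counted) and is not regulators' or servicers' actual methodology.

    # Illustrative comparison of crediting approaches using the approximate figures
    # cited in this report; not actual servicer data or regulators' methodology.

    obligation = 100_000_000      # hypothetical $100 million obligation
    avg_upb = 235_000             # approximate average unpaid principal balance (HAMP modifications)
    forgiveness_credit = 68_855   # credit per loan when only the principal forgiven counts
                                  # (roughly 29 percent of the unrounded average balance)

    # Amended consent orders: each action is credited at the loan's full unpaid principal balance.
    borrowers_consent_orders = int(obligation / avg_upb)           # about 425 borrowers

    # National Mortgage Settlement approach: credit is based on the principal forgiven.
    borrowers_settlement = int(obligation / forgiveness_credit)    # about 1,452 borrowers

    print(borrowers_consent_orders, borrowers_settlement)

Under the same dollar obligation, the crediting methodology alone changes the estimated number of borrowers reached by more than a factor of three, which is the point of the comparison above.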
Foreclosure prevention actions for which servicers can receive credit under the amended consent orders are generally the same as the actions for which servicers can receive credit under the National Mortgage Settlement. However, OCC staff said they adopted a different crediting approach for the amended consent orders because it is more transparent than the approach used for the National Mortgage Settlement.

Analysis of eligible borrowers still in their homes and in need of assistance. Analysis of the number of borrowers eligible for the foreclosure review who were still in their homes and in need of assistance might have informed the relevance of the method for allocating the negotiated amount. Regulators generally divided the $6 billion obligation among servicers based on their share of the 4.4 million borrowers eligible for the foreclosure review, with servicers responsible for amounts that ranged from about $10 million to $1.8 billion. In addition, in the amended consent orders, regulators directed servicers to prioritize these borrowers, even though the foreclosure prevention actions were not restricted to borrowers eligible for review. However, the number of borrowers who were eligible for the foreclosure review and might benefit from the foreclosure prevention action obligations is potentially limited. Specifically, according to information on regulators' websites covering 13 of the 15 servicers that participated in amended consent orders, 41 percent of the borrowers who were eligible for the foreclosure review had completed foreclosures as of December 31, 2011. Further, according to two servicers we interviewed, the number of borrowers who were eligible for the reviews and still able to receive foreclosure prevention actions was relatively small. For example, one servicer noted that approximately 50 percent of these borrowers were no longer being serviced by it, and added that of the remaining population, about 50 percent had already received at least one foreclosure prevention action. As such, many of the borrowers who were eligible for the foreclosure review because of a foreclosure action in 2009 and 2010 might not have been able to benefit from the foreclosure prevention actions required under the amended consent orders.

To oversee the foreclosure prevention component of the amended consent orders, regulators are considering both servicers' actions to meet the monetary obligations and the foreclosure prevention principles included in the amended orders. Regulators collected data from servicers and provided guidance to examination teams to facilitate oversight activities. OCC and the Federal Reserve established reporting requirements to collect information from servicers on the foreclosure prevention actions they were submitting for crediting to meet the monetary obligations specified in the amended consent orders. To meet those obligations, servicers could either provide foreclosure prevention actions to borrowers or make cash payments, either toward borrower counseling or education or into the cash payment funds used to pay borrowers based on categorization results. Eight of the servicers opted to meet their obligation by providing foreclosure prevention actions, and the remaining seven made cash payments.
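As a rough illustration of the allocation approach described above, in which the $6 billion obligation was divided in proportion to each servicer's share of the roughly 4.4 million eligible borrowers, consider the following sketch. The servicer names and borrower counts are hypothetical, chosen only so that the resulting amounts fall near the reported range of about $10 million to $1.8 billion.

    # Hypothetical illustration of dividing the $6 billion obligation among servicers
    # in proportion to their share of eligible borrowers. Servicer names and counts
    # are invented for illustration.

    total_obligation = 6_000_000_000
    total_eligible = 4_400_000        # approximate eligible population across all servicers

    eligible_by_servicer = {
        "Servicer A": 1_300_000,
        "Servicer B": 700_000,
        "Servicer C": 7_300,
    }

    for servicer, count in eligible_by_servicer.items():
        obligation = total_obligation * count / total_eligible
        print(f"{servicer}: ${obligation / 1e6:,.0f} million")

Under this proportional approach, a servicer's obligation is driven entirely by its share of borrowers who were eligible for the foreclosure review, regardless of how many of those borrowers remain in its portfolio or still need assistance.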
To facilitate verification of servicers' crediting requests for foreclosure prevention actions, regulators required servicers to submit periodic reports, which all of the servicers have done. Servicers were required to submit loan-level information, such as the loan number, foreclosure status, and unpaid principal balance before and after the action, on each loan submitted for crediting toward their obligation. In addition, servicers were required to state whether the borrower was part of the eligible population for the foreclosure review, in response to the expectation in the amended consent orders that, to the extent practicable, servicers prioritize eligible borrowers from the foreclosure review. According to regulators, they are in the process of hiring a third party to evaluate the servicers' reported data to validate that the reported actions meet the requirements of the amended consent orders and facilitate regulators' crediting approval decisions. Servicers have begun reporting on their foreclosure prevention actions, and according to OCC staff, early submissions from servicers meeting their obligation through provision of foreclosure prevention actions to borrowers suggest they will meet their foreclosure prevention requirements quickly. The actions submitted for crediting varied, with some servicers primarily submitting short sale activities for crediting and others reporting loans that received loan modification actions.

The reporting requirements also include information related to the principles established in the amended consent orders, although this information is not representative of servicers' complete portfolio of foreclosure prevention actions. For example, servicers are required to report information on the types of assistance provided, which provides information on the extent to which the actions servicers are reporting for crediting are helping borrowers keep their homes—such as by providing a loan modification as compared to a short sale, in which a borrower would still lose his or her home. According to servicers we spoke with, the information they are reporting to regulators on their foreclosure prevention activities for crediting is not representative of their full portfolio of foreclosure prevention activities and would not provide information on how well their overall programs are meeting the principles established for the assistance. For example, some servicers are submitting loans for crediting review that focus primarily on certain segments of their servicing population, such as only proprietary (in-house) loans. Another servicer had submitted all of its loss mitigation activities that may qualify for crediting according to the definitions in the amended consent orders, but this still does not represent all of its activities. Overall, the reporting requirements associated with the foreclosure prevention actions in the amended consent orders provide information to assess crediting but not to evaluate servicers' application of the foreclosure prevention principles to their broader portfolio of loans.

Regulators also issued guidance to examination teams for oversight of the foreclosure prevention principles. The guidance identifies procedures examination teams were expected to take to oversee a servicer's application of the foreclosure prevention principles to its broad portfolio of foreclosure prevention actions. Those procedures included steps related to each of the key elements in the principles.
However, the guidance does not identify actions examination teams should take to evaluate or test servicers' application or implementation of the steps. For example, the guidance requires examination teams to describe the policies and monitoring controls servicers have in place to help ensure that their foreclosure prevention activities are nondiscriminatory, but does not set an expectation that teams will evaluate how well servicers are applying those policies and controls to their mortgage servicing practices. Similarly, the guidance requires examination teams to identify the performance measures servicers use to assess the principle related to the sustainability of foreclosure prevention actions, but the guidance does not require examination teams to evaluate how well a servicer's programs are providing sustainable actions. Finally, to assess whether servicers' foreclosure prevention actions are meaningful—one of the principles—examination teams are to collect data on the servicers' foreclosure prevention actions, including the extent to which those actions resulted in higher or lower monthly payments, but the guidance does not require examination teams to evaluate the data to understand what they indicate about servicers' actions.

In contrast, other sections of the same guidance provided to examination teams for oversight of the other articles of the consent orders specify regulators' expectations that examination teams will evaluate and test certain policies, monitoring controls, and data. For example, OCC's guidance to oversee compliance—which is intended to assess whether servicers' mortgage practices comply with all applicable legal requirements and supervisory guidance—identifies specific areas where examination teams should test policies and controls as well as performance measures. For instance, examination teams are expected to evaluate the servicer's performance measures to determine the servicer's ability to complete timely foreclosure processing, to identify and evaluate controls for preventing improper charging of late fees, and to evaluate the servicer's staff model for certain criteria. Similarly, the Federal Reserve's guidance specifies testing procedures for most elements of the original consent orders, such as third-party management, the servicer's compliance program, and risk management. For instance, to ensure that documents filed in foreclosure-related proceedings are appropriately executed and notarized—one of the requirements in the original consent orders—the guidance states that examination teams should review servicers' policies, procedures, and controls to ensure that the documents are handled appropriately and then test a sample of documents to verify that notarization occurred according to the applicable requirements.

According to regulators' supervisory manuals, effective supervision requires defining examination activities, including determining clear objectives and describing the specific procedures to evaluate and test that policies and procedures are implemented. In addition, federal internal control standards require individuals responsible for reviewing management controls—such as servicers' policies and procedures for the foreclosure prevention principles—to assess whether the appropriate policies and procedures are in place, whether those policies and procedures are sufficient to address the issue, and the extent to which the policies and procedures are operating effectively.
Some examination teams are close to completing the oversight procedures related to the foreclosure prevention principles, but others have not begun, and the extent to which regulators plan to evaluate or test the information collected is unclear. According to OCC staff, examination teams completed their initial oversight of these principles in December 2013 as part of their other consent order validation activities. OCC staff told us they are reviewing the results of each of the examination teams' procedures and may identify the need for additional activity. OCC staff stated they also plan to conduct an additional review of each servicer's foreclosure prevention actions, which will include consideration of the principles in the amended consent orders, but they do not have specific procedures to evaluate or test servicers' implementation of those principles. According to Federal Reserve staff, most Federal Reserve examination teams have not yet conducted their oversight activities related to the foreclosure prevention principles. Federal Reserve staff told us that examination teams generally are conducting these reviews during the second quarter of 2014 and that the Federal Reserve would consider conducting additional follow-up activities related to the principles. According to federal internal control standards (see GAO/AIMD-00-21.3.1), management control activities should provide reasonable assurance that actions are being taken to meet requirements, such as the requirements related to the foreclosure prevention principles. For the Federal Reserve examination teams that have not yet completed their oversight activities for the foreclosure prevention principles, the extent to which this oversight will incorporate additional evaluation or testing of servicers' implementation of the principles is unclear. Without evaluation and testing, regulators may have difficulty determining whether servicers' policies and procedures are effective—an assessment OCC examination teams are required to make—or assessing how well the principles guide servicer behavior. For example, although servicers may have policies that explicitly forbid disfavoring low- or moderate-income borrowers during foreclosure prevention actions, without reviewing data, such as a sample of transactions from various programs, it is difficult to determine whether the policy is functioning as intended. Without these procedures, regulators may miss opportunities to determine how well servicers' foreclosure prevention actions provide meaningful relief and help borrowers retain their homes.

According to regulators we spoke with, the initial review of borrowers' 2009 and 2010 foreclosure-related files and the cash payment categorization process confirmed past servicing weaknesses—such as documentation weaknesses that led to errors in foreclosure processing—that they suspected or discovered through the 2010 coordinated review that was done in advance of the original consent orders. Regulators have taken steps to share these findings across examination teams. Continued supervision of servicers and information sharing about the experiences and challenges encountered help ensure that these weaknesses are being corrected. Recent changes to regulators' requirements for mortgage servicing also help to address some of the issues. Although consultants generally did not complete the review of 2009 and 2010 foreclosure-related files through the file review process, consultants, servicers, and regulators were able to describe some of the servicing weaknesses they identified based on the work that was completed.
According to OCC staff, these preliminary findings from consultants' review of 2009 and 2010 foreclosure-related files were consistent with issues discovered through the earlier coordinated review of foreclosure policies and practices conducted by examination teams in 2010 that led to the consent orders. As we noted previously, the file reviews were retrospective assessments and were designed to identify and remediate the harms suffered by borrowers due to 2009 and 2010 servicing practices. To collect information on what was learned about servicers' practices from these file reviews, regulators asked consultants to complete an exit questionnaire and held exit interviews with each consultant to discuss the file review process and preliminary observations and findings. In addition, while consultants did not prepare final reports with their findings, regulators we spoke with said they had shared some preliminary findings with examination teams through weekly updates as the file reviews progressed. Examples of weaknesses identified during the coordinated review and confirmed during the review of files from the same period included the following:

Failure to halt foreclosures during bankruptcy. The report from the regulators' 2010 coordinated review noted that servicers' quality controls were not adequate to ensure that foreclosures were halted during bankruptcy proceedings. These concerns were validated during the subsequent review of 2009 and 2010 foreclosure files, during which consultants found some instances of foreclosures taking place after borrowers had filed for bankruptcy.

Failure to halt foreclosures during loss mitigation procedures. The report from the 2010 coordinated review also expressed concern that servicers' quality control processes did not ensure that foreclosures were stopped during loss mitigation procedures, such as loan modifications. During the subsequent file reviews, one consultant found that in some cases, a servicer had foreclosed on borrowers who were in the midst of applying for loan modifications. In addition, the file reviews identified some borrowers who were wrongfully denied loan modifications, did not receive loan modification decisions in a timely manner, or were not solicited for HAMP modifications in accordance with HAMP guidelines.

Failure to apply SCRA protections. The coordinated review report also noted that a lack of proper controls could have affected servicers' determinations of the applicability of SCRA protections. Some consultants identified issues such as servicers failing to verify a person's military status prior to starting foreclosure proceedings and failing to consistently perform data checks to determine military status.

Failure to maintain sufficient documentation of ownership. Although the 2010 coordinated reviews found that servicers generally had sufficient documentation of their authority to foreclose, examiners noted instances where documentation in the foreclosure file may not have been sufficient to prove ownership of the mortgage note. Likewise, during the subsequent consent order file reviews, some consultants found cases of insufficient documentation to demonstrate ownership.

Weaknesses related to oversight of external vendors and documentation of borrower fees. The coordinated file review report noted weaknesses in servicers' oversight of third-party vendors, and OCC staff stated that the subsequent file review found errors related to fees charged to borrowers, many of which occurred when servicers relied on external parties.
Staff explained that servicers often did not have controls in place to ensure that services were performed as billed and that the fees charged to customers were reasonable and customary.

In addition, the process of categorizing borrowers for cash payments—which relied on servicers' data about those borrowers from 2009 and 2010—found issues that were consistent with weaknesses identified during the 2010 coordinated reviews, particularly in servicers' data systems. For example, one examination team noted that a servicer's data weaknesses related to servicemembers and others became more apparent during the cash payment categorization process. In addition, as noted earlier, at least 5 of the 13 servicers were unable to categorize some borrowers according to the framework criteria because of system limitations. Federal Reserve staff noted that problems with one servicer's data related to loan modifications led the servicer to place all affected borrowers in the highest category possible rather than rely on the system. Further, another examination team told us that while reviewing the categorization of borrowers for cash payments, the servicer's internal audit department found a high rate of borrowers incorrectly categorized in the loan modification categories due to weaknesses in the quality of the servicer's data. The examination team explained that after reviewing the servicer's initial categorization, regulators determined that the servicer did not have sufficiently reliable system data to categorize borrowers in the lowest categories, and therefore those borrowers were categorized in a higher category.

After terminating the reviews of 2009 and 2010 foreclosure-related files, regulators instructed examination teams to identify deficiencies and monitor servicers' actions to correct them. For example, OCC required examination teams to complete conclusion memorandums on deficiencies consultants identified. The conclusion memorandums were to include information on the deficiencies consultants identified in the servicer's policies, procedures, practices, data, systems, or reporting. The guidance for the memorandums also asked examination teams to discuss steps servicers took to correct these deficiencies. In one conclusion memorandum, the examination team noted that the servicer was in the process of addressing issues, such as technological impediments to efficient and accurate servicing and the accurate identification of borrowers eligible for SCRA protections and borrowers in bankruptcy, but that not all issues had yet been addressed. According to Federal Reserve staff, they are not planning to do a broad analysis of the results from the file reviews, but they have asked the examination teams to consider issues that emerged from them and whether additional corrective action is needed. OCC and Federal Reserve staff also told us that examination teams are continuing their oversight activities to determine whether servicers are addressing all aspects of the consent orders, which include the areas highlighted by the preliminary file reviews. OCC staff said that the examination work is intended to determine what issues have been addressed and what issues continue to exist. Some examination teams told us that they are leveraging the results of the reviews and the cash payment categorization process by following up on some of the issues identified for the servicers they oversee in their future oversight.
For example, one team said that it was following up on findings related to bankruptcy, fees, notices of loan modifications, and income calculations associated with loan modification applications. In particular, the team noted that it has done subsequent testing related to borrowers in bankruptcy and will continue to assess the servicer's efforts in this area. Another team stated that in light of challenges with an aspect of the cash payment categorization process, it identified weaknesses with the servicer's staffing, project management, and problem resolution processes. To try to prevent repetition of these mistakes, the examination team required the servicer to identify and implement changes to its mortgage servicing practices.

However, some examination teams said that little additional information was learned from the file review or cash payment activities that they could leverage in future oversight. For example, one examination team noted that because few files had gone through complete reviews, they could not determine how widespread the problems found were. They said that because the file reviews were terminated before the reviews were completed, they did not have sufficient information to interpret the initial findings. Another examination team told us that no new information was learned from the file reviews and that all of the issues raised during them were known issues. A third examination team told us that they would incorporate some aspects of the consultant's processes into their review process, but the reviews were not far enough along to draw conclusions about any additional substantive weaknesses with the servicer's practices. In addition, Federal Reserve staff noted that because the file reviews were terminated before many data points were collected, what could be learned from them is limited. Similarly, one examination team noted that while weaknesses were identified with the servicer's operations during both the file review and cash payment processes, they were specific to systems and activities from 2009 and 2010 that were no longer in place or operational. Additionally, OCC staff explained that because the files that were reviewed were from 2009 and 2010, the findings may no longer be applicable, particularly given changes in servicing operations since that time.

Because examination teams learned different information from their oversight of the file review and cash payment processes, sharing each other's experiences could be instructive for ongoing oversight of mortgage servicing. As we noted earlier, the completion rates for the file review process varied from no files with a completed review to 57 percent of the planned files reviewed. In addition, the areas that were reviewed varied among servicers. For example, several of the consultants reported completing at least initial reviews of the majority of files in the bankruptcy category. Another consultant stated that the only category of review completed was the SCRA category, and therefore it only had findings related to the retention of SCRA data. A third consultant had completed its review of a majority of the initial files planned for review and had found several different types of errors, including errors with fees charged, loan modification decisions, and documentation of ownership.
Although, as regulators have noted, each servicer has unique operations and data systems, servicing standards and other requirements defined by regulators are generally broadly applied, and insight from one servicer's approach to meeting these standards—or problems meeting these standards—can be instructive for another examination team responsible for overseeing these same standards. According to our analysis of examination teams' conclusion memorandums, some servicers encountered similar challenges in the cash payment process. In contrast to the file review process, the borrower categorization process was completed for 14 of the servicers, and servicers had to place borrowers into the same categories. Several examination teams and a servicer noted that merging data from multiple servicing systems posed particular challenges for completing the borrower categorization process. Other examination teams we spoke with described challenges servicers encountered with their data systems in recording information on bankruptcy and other foreclosure-related actions. Understanding what caused similar types of challenges and their prevalence among servicers may help regulators identify future areas for oversight activities.

According to regulators, they have taken steps to share information among examination teams about issues encountered during the file review and cash payment process, and OCC planned to take additional steps. For example, regulators told us that during the file review and cash payment categorization process, OCC and Federal Reserve examination teams held weekly phone meetings. According to several examination teams we spoke with, during these meetings they would highlight challenges they were encountering, such as issues related to missing data in a servicer's systems. In addition, Federal Reserve staff stated that Federal Reserve examination teams met during the cash payment categorization process to share information on their approach to the activities and discuss approaches different teams were taking to address challenges. To further facilitate information sharing among examination teams, Federal Reserve staff told us that examination teams posted to a shared website their conclusion memorandums for the cash payment activities, which included information on the approach servicers used to categorize borrowers. According to OCC staff, they are also writing a consolidated conclusion memorandum that will summarize examination teams' findings from the foreclosure review process, including information on specific challenges identified at individual institutions that may be instructive for other examination teams.

According to regulators, examination teams also have offered to share information with CFPB about issues encountered during the file review process. Banking regulators and CFPB have entered into a Memorandum of Understanding, which states that CFPB and the regulators will endeavor to inform each other of issues that may impact the supervisory interests of the other agencies. According to regulators we spoke with, there has been limited sharing of findings from the foreclosure review process with CFPB. According to OCC staff, in some cases they have shared information with CFPB about servicers' compliance with the original consent orders and, in other instances, they offered to provide CFPB information on the file review process, but CFPB had not requested follow-up information.
Federal Reserve staff said two of its examination teams have provided information to CFPB on the Federal Reserve's monitoring activities related to the original consent orders, including the file reviews, and the amended consent orders.

Recent servicing requirements, some of which apply to a broader group of mortgage servicers than those included in the file review process, may also address some of the weaknesses found during the 2010 coordinated review and confirmed during the review of foreclosure-related files from 2009 and 2010 and the borrower categorization process. Since the 2009 and 2010 period covered by the file reviews, regulators have issued several guidelines and standards related to mortgage servicing:

April 2011 Consent Orders. In addition to the requirement to conduct file reviews of borrowers who were in foreclosure or had completed foreclosure any time in 2009 or 2010, the original consent orders issued by OCC and the Federal Reserve to 16 servicers also included other requirements, such as submitting a plan for improving the operation of servicers' management information systems for foreclosure and loss mitigation activities. Regulators' examination teams will continue to monitor these requirements and ensure that the applicable aspects of the consent orders are met.

National Mortgage Settlement. Five servicers are covered by the National Mortgage Settlement, which includes requirements such as preforeclosure notices to borrowers, procedures to ensure the accuracy of borrower accounts, and quarterly reviews of foreclosure documents.

CFPB Mortgage Servicing Rules. These rules were issued in January 2013, became effective January 10, 2014, and apply to all servicers, with some exemptions for small servicers. The rules cover several major topics that address many aspects of mortgage servicing, including specific requirements related to communication with delinquent borrowers and loss mitigation procedures.

OCC and Federal Reserve Imminent Foreclosure Standards. In April 2013, OCC and the Federal Reserve issued checklists to the servicers they supervise to establish minimum standards for handling and prioritizing borrower files that are subject to imminent foreclosure sales. For example, both sets of standards require that once the date of foreclosure is established, the servicer must confirm that the loan's default status is accurate.

These requirements address issues identified during the file reviews and cash payment process. For example, to address issues related to borrowers being foreclosed upon while in the process of a loan modification application, OCC's and the Federal Reserve's Minimum Standards for Prioritization and Handling of Borrower Files Subject to Imminent Foreclosure Sales require servicers to take steps to verify a borrower's status once a foreclosure date has been established. Specifically, servicers must promptly (1) determine whether the borrower has requested consideration for, is being considered for, or is currently in an active loss mitigation program; and (2) determine whether the foreclosure activities should be postponed, suspended, or cancelled. As another example, to address issues related to communicating loan modification decisions to borrowers, CFPB's rules state that servicers must provide the borrower with a written decision, including an explanation of the reasons for denying the loan modification, on an application submitted within the required time frame. The guidelines also address issues related to servicers' data systems.
For example, CFPB's rules require that servicers be able to compile a complete servicing file in 5 days or less. CFPB officials noted that this requirement was specifically included to address weaknesses in servicers' data systems that might still exist. In addition, as previously noted, the OCC and Federal Reserve consent orders required servicers to submit a plan for the operation of their management information systems. The plan needed to include a description of any changes to monitor compliance with legal requirements; ensure the accuracy of documentation of ownership, fees, and outstanding balances; and ensure that loss mitigation, foreclosure, and modification staff have sufficient and timely access to information.

Regulators took steps to promote transparency through efforts to keep borrowers and the general public informed about the status and progress of amended consent order and continuing review activities and through posting information publicly on their websites. Regulators also plan to issue public final reports on the cash payment process and foreclosure prevention actions as well as the results of the one file review that continued. These actions, however, have included limited information on processes, such as specific information about the category in which borrowers were placed or how those determinations were made. In our March 2013 report, we found that transparency on how files were reviewed under the foreclosure review was generally lacking and that borrowers and the general public received limited information about the progress of reviews. We recommended that regulators develop and implement a communication strategy to regularly inform borrowers and the public about the processes, status, and results of the activities under the amended consent orders and continuing foreclosure reviews.

Since the announcement of the amended consent orders and our March 2013 report, regulators have taken steps to keep borrowers and the general public informed about the status of activities under the amended consent orders and continuing foreclosure reviews. For example, regulators directed that the payment administrator for 14 of the 15 servicers subject to amended consent orders send postcards to approximately 4.4 million borrowers informing them that they would receive a cash payment from their servicer. In addition, regulators directed the administrator to send communications to borrowers subject to the continuing file review to inform them that their reviews were ongoing. OCC staff noted that they anticipated requiring a final communication to borrowers when the review is completed. Regulators also kept the general public informed about the status of activities. For example, regulators conducted two webinars to provide details on the amended consent order activities and published answers to frequently asked questions on their websites. Regulators also used mass media such as press releases and public service announcements to communicate the status of activities. In addition, regulators updated their websites with information on the number and amount of checks issued and cashed under the amended consent orders, and in May 2013, regulators reported this information by state. Finally, regulators also made the cash payment frameworks and borrower categorization results publicly available on their websites.
The frameworks list the payment categories and amounts and also include the overall results of the cash payment process, including the number of borrowers in each payment category.

Regulators plan to publicly issue final reports on the direct payment process and foreclosure prevention actions as well as information from the reviews that were terminated and the results of the review that continued. We noted the importance of public reporting to enhancing transparency in our March 2013 report. At that time, regulators planned to release reports on the foreclosure review and cash payment process, but the content of the reports had not been determined. Since our report, regulators have taken additional steps toward making reporting decisions. However, they are still considering the content and timing of these reports. Federal Reserve staff stated that they have worked with OCC to reach out to community groups to get their input on the information to include in public reports, and they are reviewing the types of information on foreclosure prevention actions reported for the National Mortgage Settlement and HAMP. Federal Reserve staff also stated that they anticipate the final report would include information on the terminated reviews. OCC staff said they are conducting examinations to assess the extent to which servicers addressed all aspects of the consent orders, including weaknesses highlighted by the preliminary file reviews, and they anticipate reporting on conclusions of the foreclosure reviews, including the reviews that were terminated. OCC staff stated they are waiting for the results of the continuing review and reports on servicers' foreclosure prevention actions before making final reporting decisions.

Although regulators have taken steps to promote transparency, these actions included limited information on the data regulators considered in negotiating the cash payment obligations and the processes for determining cash payment amounts. Our March 2013 recommendation to implement a communication strategy included not only keeping borrowers informed about the status and results of amended consent order and continuing review activities, but also keeping borrowers and the public informed about the processes used to determine those results. In our March 2013 report, we found that more publicly disclosed information about processes could have increased transparency and thereby public confidence in the reviews, given that one of the goals regulators articulated for the foreclosure review was to restore public confidence in the mortgage market. Federal internal control standards state the importance of relevant, reliable, and timely communications within an organization as well as with external stakeholders. In addition, our prior work on organizational transformation suggests that policymakers and stakeholders demand transparency in the public sector, where stakeholders are concerned not only with what results are to be achieved, but also with which processes are to be used to achieve those results.

Regulators released limited information on the process used to determine cash payment amounts. Regulators' joint press release announcing the payment agreement stated that the amounts of borrowers' payments depended on the type of possible servicer error, and regulators' websites and webinars provided information on the roles of regulators, servicers, and the payment administrator.
However, regulators did not publicly release information on the criteria for borrower placement in each category, such as the specific loan and borrower characteristics associated with each category. In addition, information about the process for determining cash payment amounts for each category was not communicated to individual borrowers. Borrowers subject to the amended consent orders received postcards informing them they would receive a cash payment. The postcards, however, did not include information about the process by which their payment amounts would be determined. Moreover, the letter accompanying the cash payment did not include information about the category in which a borrower was placed. Consumer groups we interviewed maintained that borrowers should have been given information about the category into which they were placed and an explanation of how they were categorized.

Regulators said that borrowers could obtain additional information from other sources. Federal Reserve staff explained that the letter to borrowers does not include information on the borrower's cash payment category, but they said that a borrower may be able to figure out this information using the publicly issued cash payment framework, which includes cash payment amounts for each category. Regulators also told us that borrowers could call the payment administrator with questions or complaints related to the cash payment process under the amended consent orders. However, according to the payment administrator's protocol, staff were instructed to provide general information on the cash payment process but did not have specific information about the category in which borrowers were placed or how those determinations were made. Federal Reserve staff stated that borrowers who have complaints about their servicer could also write to their servicer's regulator directly, but consumer groups said that very few borrowers would file a formal complaint with the regulators because they never received an explanation of what category they were placed in and regulators did not establish an appeals process. Further, letters sent to borrowers stated that the payments were final and there was no appeals process. Regulators told us they did not establish an appeals process because borrowers did not waive their rights to take legal action by accepting the payment. Federal Reserve staff stated that although there was not a process for borrowers to appeal their payments, borrowers who are not satisfied with the payment amounts can pursue any legal claims they may have.

With additional information on processes, regulators have opportunities to enhance transparency and public confidence in the amended consent order activities. The majority of cash payments have been deposited. As a result, regulators have missed key opportunities to provide information that would have enhanced transparency of the cash payment process for individual borrowers. Further, since borrowers cannot obtain further information by formally appealing the results of the direct payment process, the lack of information about the criteria for placement in the various categories may hinder public confidence in the process. The final reports that regulators plan to issue represent an important opportunity to provide additional information on processes to clarify for borrowers and the general public how payment decisions were made.
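The limits of the inference Federal Reserve staff described, in which a borrower works backward from a check amount to a category using the published framework, can be illustrated with a simple sketch. The category names and amounts below are hypothetical (only the $300 and $500 figures echo the approved modification category discussed in this report); the point is that a single payment amount may not map to a unique category.

    # Hypothetical sketch of inferring a payment category from a check amount using a
    # published framework. Category names and amounts are illustrative only.

    framework = {
        "Approved modification, request-for-review submitted": 500,
        "Approved modification, no request-for-review": 300,
        "Denied modification, no request-for-review": 300,  # same amount as another category
    }

    def possible_categories(payment_amount):
        """Return every category whose published payment matches the check amount."""
        return [category for category, amount in framework.items() if amount == payment_amount]

    # A $300 check matches more than one category, so the borrower still cannot tell
    # which determination the servicer actually made.
    print(possible_categories(300))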
The amended consent order process—with the distribution of cash payments to 4.4 million borrowers and requirements that servicers provide $6 billion in foreclosure prevention actions—terminated the review of 2009 and 2010 foreclosure-related files for 15 servicers prior to completion. This process addressed some of the challenges identified by regulators with the file review process—for example, it provided cash payments to borrowers more quickly than might have occurred had the file reviews continued. In addition, through the foreclosure prevention component of the amended orders, regulators were able to convey their commitment to specific principles to guide loss mitigation actions— including that servicers’ foreclosure prevention activities provide meaningful relief to borrowers and not disadvantage a specific group. While views varied on the usefulness of the file review process, regulators are taking steps to use what was learned to inform future supervisory activities. While regulators used the amended consent orders to establish principles for foreclosure prevention activities, they did not require examination teams to evaluate or test servicers’ activities related to these principles. In particular, they did not require evaluation or testing of servicers’ policies, monitoring controls, and performance measures, to determine the extent to which servicers are implementing these principles to provide meaningful relief to borrowers. In contrast, other parts of the guidance provided to examination teams for oversight of the consent orders do require evaluation and testing, and the requirements in regulators’ supervisory manuals and federal internal control standards also include such requirements. For OCC examination teams, which have completed reviews of servicers’ activities related to the foreclosure prevention principles, additional planned supervisory activities, such as a review of servicers’ foreclosure prevention actions, may help identify concerns with servicers’ implementation of aspects of the foreclosure prevention principles. However, the specific procedures to conduct these additional planned activities have not been established. In comparison, for Federal Reserve examination teams that have not yet completed the reviews, there is an opportunity to implement a more robust oversight process that includes evaluation and testing, but the extent to which the Federal Reserve will take these steps is unclear. In the absence of specific expectations for evaluating and testing servicers’ actions to meet the foreclosure prevention principles, regulators risk not having enough information to determine whether servicers are implementing the principles and protecting borrowers. Finally, although regulators communicated information about the status and results of the cash payment component of the amended consent orders, they missed opportunities to communicate additional information to borrowers and the public about key amended consent order processes. One of the goals that motivated the original file review process was a desire to restore public confidence in the mortgage market. In addition, federal internal control standards and our prior work highlight the importance of providing relevant, reliable, and timely communications, including providing information about the processes used to realize results, to increase the transparency of activities to stakeholders—in this case, borrowers and the public. 
Without making information about the processes used to categorize borrowers available to the public, such as through forthcoming public reports, regulators may miss a final opportunity to address questions and concerns about the categorization process and increase confidence in the results. We are making the following three recommendations:

1. To help ensure that foreclosure prevention principles are being incorporated into servicers' practices, we recommend that the Comptroller of the Currency direct examination teams to take additional steps to evaluate and test servicers' implementation of the foreclosure prevention principles.

2. To help ensure that foreclosure prevention principles are being incorporated into servicers' practices, we recommend that the Chairman of the Board of Governors of the Federal Reserve System ensure that the planned activities to oversee the foreclosure prevention principles include evaluation and testing of servicers' implementation of the principles.

3. To better ensure transparency and public confidence in the amended consent order processes and results, we recommend that the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System include in their forthcoming reports or other public documents information on the processes used to determine cash payment amounts, such as the criteria servicers use to place borrowers in various payment categories.

We provided a draft of this report to OCC, the Federal Reserve, and CFPB for comment. We received written comments from OCC and the Federal Reserve; these are presented in appendixes III and IV. CFPB did not provide written comments. We also received technical comments from OCC, the Federal Reserve, and CFPB and incorporated these as appropriate. In its comments on this report, the Federal Reserve agreed with our recommendations; OCC did not explicitly agree or disagree. However, OCC and the Federal Reserve identified actions they will take or consider in relation to the recommendations. For the two recommendations on assessing servicer implementation of foreclosure prevention principles, OCC stated that it included this requirement in its examination plans. OCC added that foreclosure prevention principles will be used as considerations when assessing the effectiveness of servicer actions. We continue to believe that identifying specific procedures for testing and evaluating servicers' application of the foreclosure prevention principles to their mortgage servicing practices will help regulators determine how effectively servicers' policies and procedures are protecting borrowers and providing meaningful relief. The Federal Reserve noted that examination teams plan to use testing during their servicer assessments. The Federal Reserve plans to conduct the assessments in 2014, as we noted in the report. For the recommendation on improving the transparency of the consent order processes, OCC stated that it will consider including additional detail about the categorization of borrowers in its public reports. The Federal Reserve said it will consider the recommendation as it finalizes reporting and other communication strategies. Both regulators also noted that they had made information about the foreclosure review and amended consent order processes available on their public websites.
As we discussed in our report, regulators have taken steps to communicate information about the status of activities and results of the amended consent orders, and communicating information on the processes for determining borrowers' cash payment amounts provides an additional opportunity for regulators to realize their goal of increasing public confidence in these processes. We are sending copies of this report to interested congressional committees, the Board of Governors of the Federal Reserve System, the Consumer Financial Protection Bureau, and the Office of the Comptroller of the Currency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The objectives of this report were to assess (1) the factors regulators considered in negotiating servicers' cash payment obligations under the amended consent orders and the extent to which regulators achieved their stated goals for the cash payments; (2) the objectives of the foreclosure prevention actions in the amended consent orders and how well regulators designed and oversaw the actions to achieve those objectives; (3) the extent to which regulators are sharing information from the file review and amended consent order processes; and (4) the extent to which regulators have promoted transparency of the amended consent orders and remaining review. The scope of our work covered the 16 servicers that were issued consent orders in 2011 and 2012 requiring them to conduct file reviews. To address the factors the Office of the Comptroller of the Currency (OCC) and the Board of Governors of the Federal Reserve System (Federal Reserve) considered in negotiating servicers' cash payment obligations, we interviewed regulatory staff about the factors they considered and analyses they conducted to inform the negotiations. We also asked staff about the extent to which the factors and analyses differed from typical enforcement action negotiations. We reviewed the analyses regulators used to inform the negotiations and other documentation on the decision to replace the foreclosure review with a cash payment agreement, such as OCC's decision memorandum. We also reviewed data consultants provided to regulators on incurred and remaining costs, progress of reviews, and findings of error. In addition, we conducted a sensitivity analysis to test the impact of changes to major assumptions and a reasonableness review of the final negotiated cash payment amount. According to Office of Management and Budget guidance, a sensitivity analysis examines the effects of changing assumptions and ground rules on estimates. Further, our Cost Estimating and Assessment Guide states that a sensitivity analysis provides a range of results that spans a best- and worst-case spread and also helps identify factors that could cause an estimate to vary. To conduct our sensitivity analysis, we followed three key steps outlined in our Guide: (1) identify the key drivers and assumptions to test, (2) estimate the high and low uncertainty ranges for significant input variables, and (3) conduct this assessment independently for each input variable. We identified and tested major assumptions related to projected costs, error rates, and borrower categorizations.
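The three steps above amount to a one-at-a-time sensitivity analysis: hold every input at a baseline value, move each key driver through its low and high estimates, and record the resulting spread. The short Python sketch below illustrates that structure; the variable names, baseline values, and ranges are hypothetical placeholders chosen for illustration, not figures from the regulators' analyses or from our own.

```python
"""Illustrative one-at-a-time sensitivity analysis following the three steps
described above. All input values are hypothetical placeholders."""

BASELINE = {
    "months_remaining": 9,          # projected additional months of file review
    "monthly_cost_millions": 50,    # consultants' monthly cost across servicers
    "error_rate": 0.065,            # assumed financial harm error rate
    "eligible_borrowers": 4_400_000,
    "avg_harm_payment": 1_000,      # average remediation payment per harmed borrower
}

# Steps 1 and 2: key drivers to test, with estimated low and high uncertainty ranges.
RANGES = {
    "months_remaining": (5, 14),
    "monthly_cost_millions": (30, 80),
    "error_rate": (0.03, 0.12),
    "avg_harm_payment": (500, 2_000),
}

def projected_total_millions(inputs):
    """Projected cost to complete the reviews plus projected remediation, in millions."""
    review_cost = inputs["months_remaining"] * inputs["monthly_cost_millions"]
    remediation = (inputs["eligible_borrowers"] * inputs["error_rate"]
                   * inputs["avg_harm_payment"]) / 1e6
    return review_cost + remediation

def one_at_a_time(baseline, ranges):
    """Step 3: vary each driver independently while holding the others at baseline."""
    base = projected_total_millions(baseline)
    spreads = {}
    for name, (low, high) in ranges.items():
        outcomes = [projected_total_millions(dict(baseline, **{name: value}))
                    for value in (low, high)]
        spreads[name] = (min(outcomes), max(outcomes))
    return base, spreads

if __name__ == "__main__":
    base, spreads = one_at_a_time(BASELINE, RANGES)
    print(f"Baseline projection: ${base:,.0f} million")
    for name, (low, high) in spreads.items():
        print(f"  {name}: ${low:,.0f} million to ${high:,.0f} million")
```

Comparing the spread of results with the final negotiated cash payment amount is, in essence, the reasonableness review described below.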
We also used the results of our analysis to test the reasonableness of the final negotiated cash payment amount. Our Cost Estimating and Assessment Guide describes a reasonableness review as a process to independently test whether estimates are reasonable with regard to the validity of major assumptions.

Projected costs. To test assumptions related to the projected remaining costs to complete the reviews as reported by consultants, we calculated monthly costs for each consultant using consultants' cost reports that were available from September 2012 through December 2012. We then selected the shortest, median, and longest projected additional months of review across servicers to calculate the projected costs under these scenarios (see table 4). We compared our calculated costs in these scenarios to regulators' cost analyses and the final negotiated cash payment amount.

Error rate. To test assumptions related to the error rate, we reviewed error rates in status reports consultants provided to regulators for the 13 servicers that agreed to the payment agreement in January 2013. The amended consent orders implementing the payment agreement required the consultants of the participating servicers to submit data on the progress of the file reviews as of December 31, 2012. We used these data, which the consultants submitted to regulators in the months following the payment agreement, to select the lowest, median, aggregate, and highest error rates reported by consultants and calculated the potential remediation payments under these scenarios (see table 5). We compared our calculated remediation payments under these scenarios to the payment calculated in regulators' analyses and the final negotiated cash payment amount.

Borrower categorization. To test assumptions related to the categorization of borrowers across the payment categories used in OCC's error rate analysis, we analyzed borrower distributions for the other five servicers involved in the initial amended consent order negotiations. We used categorizations servicers provided to the regulators during the negotiation process in December 2012. We then calculated the potential remediation, using the 6.5 percent financial harm error rate used in regulators' analysis, under each scenario (see table 6). We compared our calculated remediation payments under these scenarios to the payment calculated in regulators' analyses and the final negotiated cash payment amount.

We verified the accuracy of regulators' analyses by performing logic tests and recreating the tables and formulas they used for their calculations. To assess the reliability of data on the status and preliminary financial harm error rates we used in our analyses, we collected information from exam team staff for all servicers that participated in the amended consent order payment agreement. Because exam team staff were responsible for the day-to-day oversight of consultants' work, we collected information on the steps they took to determine whether the data were reasonably complete and accurate for the intended purposes. All exam team staff stated they conducted data reliability activities such as observing data entry procedures and controls, participating in or observing training for the systems used to generate status reports, conducting logic tests, or reviewing status reports. Exam team staff did not note any limitations related to the results of the final reviews completed by consultants as of December 2012 that would affect our use of these data.
As such, we determined the data to be sufficiently reliable for the purposes of this report. We were unable to assess the reliability of data on consultants' incurred costs or servicers' initial borrower categorization results used in our analyses. Because most consultants had terminated their work on the foreclosure review during our data collection, we had limited access to the underlying cost data reported by consultants to regulators, and regulatory staff told us they did not assess these data. In addition, the initial borrower categorizations performed by servicers during negotiations represented preliminary results that were intended to provide regulators with information about how the cash payment amount might be distributed. These data were described as preliminary by servicers, and neither servicers nor regulatory staff validated the accuracy of the information used during negotiations. Given that limited information was available from the sources and users of these data, we were not able to assess their reliability. As such, we determined that the data related to consultants' costs and servicers' initial borrower categorizations are of undetermined reliability. However, because our use of these data is consistent with regulators' intended use to inform negotiations, we determined that the risk of using data of undetermined reliability was low, and we concluded that the data were appropriate for our purposes in this report. To determine the stated goals for the cash payments and assess the extent to which regulators took steps to ensure servicers achieved them, we reviewed the amended consent orders, OCC's and the Federal Reserve's decision memorandums, and statements made by regulators about the amended consent orders, including press releases and speeches or testimony. We then assessed achievement of these goals using data we collected and analyzed and information from interviews we conducted with regulators. Specifically, we reviewed regulators' instructions to servicers and examination teams for the categorization process and subsequent oversight activities and interviewed OCC headquarters and Federal Reserve Board staff about implementation of these activities and their oversight actions. In addition, we analyzed regulators' reports on the results of the servicers' categorization process, in particular information on the number of borrowers placed into each category by servicer and any subsequent changes to categorization results. We also reviewed examination teams' conclusion memorandums describing their oversight activities to verify and validate servicers' cash payment categorization activities, and 10 of the 11 examination teams we interviewed or received written responses from provided information about their specific approach. We also interviewed three consultants responsible for categorizing borrowers into some categories—for example, borrowers eligible for protections under the Servicemembers Civil Relief Act (SCRA), Pub. L. No. 108-189, 117 Stat. 2835 (2003) (codified at 50 U.S.C. app. §§ 501-597b)—about their methodology and regulators' oversight, and of the eight servicers we interviewed, seven provided information about their process to categorize borrowers for cash payments and regulators' role in this process.
To identify the examination teams and servicers to interview, we selected examination teams and servicers that were overseen by each regulator and also considered a range of sizes of eligible populations for the file reviews, including some of the largest servicers. To identify the consultants to interview, we considered those consultants that supplemented information gathered from consultants in prior work on the file review process. Finally, we assessed the reliability of these data by reviewing related documentation and interviewing payment administrator officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report. To assess the objectives for the foreclosure prevention actions and how well regulators designed the actions to realize those objectives, we reviewed the amended consent orders to understand the parameters and requirements for foreclosure prevention actions, reviewed regulators’ decision memorandums, and reviewed regulators’ statements about the foreclosure prevention actions in press releases and speeches or testimony. We also interviewed regulators about their intentions for the actions and the analysis they conducted to support the negotiations of the design and amounts. We compared this process with regulators’ typical processes for issuance of enforcement actions, as described in their supervisory manuals and in interviews with regulators’ staff. We also interviewed three experts familiar with negotiations and the design of settlements, including staff from the National Mortgage Settlement, to understand elements typically considered in the design of settlements. We selected these experts based on their familiarity with similar mortgage servicing settlements or their recognized expertise in the field of settlements involving potential financial harm or where cash payments were to be made to victims. In addition, we interviewed staff from one regulatory agency, the Bureau of Consumer Financial Protection (commonly known as the Consumer Financial Protection Bureau, or CFPB), about their policies and procedures for negotiating enforcement actions, in particular related to mortgage servicing. Finally, we reviewed two settlements that included foreclosure prevention components—the National Mortgage Settlement and the separate California Agreement in the National Mortgage Settlement—to help identify various factors to consider in the design of foreclosure prevention actions in enforcement orders or settlements. Further, to address how regulators oversaw achievement of the objectives of the foreclosure prevention component in the amended consent orders, we considered both regulators’ activities to oversee servicers’ financial obligations and actions to oversee the foreclosure prevention principles in the amended consent orders. To facilitate this process, we reviewed regulators’ instructions to servicers for reporting on their foreclosure prevention obligations and servicers’ reporting submissions for May, July, September, and December 2013. We also reviewed OCC’s and the Federal Reserve’s instructions to its examination teams for oversight of the foreclosure prevention principles. 
To further understand regulators' oversight of the financial obligations and foreclosure prevention principles, we interviewed OCC and Federal Reserve staff, including headquarters and Federal Reserve Board staff and staff from 10 of the 11 examination teams—representing both OCC and the Federal Reserve and a mix of larger and smaller servicers (determined by the number of eligible borrowers from the foreclosure review)—about their oversight activities. We compared these instructions and their implementation with the supervisory expectations in regulators' supervisory manuals, the supervisory instructions for the other articles of the original consent orders, and federal internal control standards. To supplement our understanding of the foreclosure prevention reporting and oversight activities, we interviewed representatives from six of the eight mortgage servicers we spoke with (representing servicers of various sizes, based on the size of the eligible population from the foreclosure review, overseen by both OCC and the Federal Reserve) about their activities to comply with the foreclosure prevention requirement and regulators' oversight activities. We also interviewed staff from the National Mortgage Settlement, which requires five mortgage servicers to provide foreclosure prevention actions, to understand their experience and approach. To assess the extent to which regulators are leveraging and sharing information from the file review process, we analyzed consultants' preliminary findings from the file review process, in particular information they reported to regulators in exit surveys and during exit interviews with regulators. We also reviewed OCC's examination teams' conclusion memorandums from their oversight of the file review process. We compared these with publicly available information on regulators' findings from the 2010 coordinated file review conducted by OCC, the Federal Reserve, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision to identify the extent to which the findings were similar. We also interviewed staff from OCC headquarters and the Federal Reserve Board, staff from 10 of the 11 examination teams, and representatives from 8 mortgage servicers about what they learned about mortgage servicing from the preliminary file reviews and cash payment categorization processes and changes in mortgage servicing practices since the 2009 and 2010 period covered by the file review process. In addition, we asked regulator staff, including the examination teams, about steps they had taken or were planning to take to share this information among examination teams or with other regulators, such as CFPB, or to use this information for future oversight. We also interviewed CFPB staff about information they had requested or received about the preliminary file review results. We compared regulators' plans to share and leverage information with federal internal control standards for recording and communicating information to help management and others conduct their responsibilities. To assess regulators' efforts to promote transparency of the amended consent orders and remaining review, we reviewed press releases and documents from regulators related to the amended consent orders and the remaining review.
In particular, we reviewed the documents available on the regulators' websites about the amended consent orders and the remaining review, such as frequently asked questions, webinars, press releases, and status updates related to check issuance, and analyzed the content of these materials. We also reviewed the payment administrator's telephone instructions to respond to questions about the amended consent order process. In addition, we reviewed examples of the postcards and letters sent to borrowers to communicate about the amended consent order payments and to provide cash payments. We also interviewed regulator staff about the steps they took to promote transparency and their plans for future reporting. We compared this documentation to federal internal control standards on communications and our work on organizational transformation to identify any similarities or differences. Further, we considered our prior recommendation about applying lessons learned on transparency from the foreclosure review to the amended consent order process. Finally, we conducted interviews with representatives of consumer groups. We conducted this performance audit from May 2013 through April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We have issued two prior reports on the foreclosure review process. In our first report on the outreach component of the foreclosure review, we found that OCC, the Federal Reserve, and servicers had gradually improved the communication materials for borrowers, but that regulators could make further enhancements to the outreach efforts. In our second report, we identified lessons learned from the file review process that could be used to enhance the activities under the amended consent orders and the continuing reviews. Below we list the recommendations made in each report and the actions taken by regulators in response. In addition to the contact named above, Jill Naamane (Assistant Director), Bethany M. Benitez, Maksim Glikman, DuEwa Kamara, John Karikari, Charlene J. Lindsay, Patricia MacWilliams, Marc Molino, Jennifer Schwartz, Andrew Stavisky, Winnie Tsen, and James Vitarello made key contributions to this report.

In 2011 and 2012, OCC and the Federal Reserve signed consent orders with 16 mortgage servicers that required the servicers to hire consultants to review foreclosure files for errors and remediate harm to borrowers. In 2013, regulators amended the consent orders for all but one servicer, ending the file reviews and requiring servicers to provide $3.9 billion in cash payments to about 4.4 million borrowers and $6 billion in foreclosure prevention actions, such as loan modifications. One servicer continued file review activities. GAO was asked to examine the amended consent order process.
This report addresses (1) factors considered during cash payment negotiations between regulators and servicers and regulators' goals for the payments, (2) the objectives of foreclosure prevention actions and how well regulators designed and are overseeing those actions to achieve those objectives, and (3) regulators' actions to share information from the file review and amended consent order processes and the transparency of the processes. GAO analyzed regulators' negotiation documents, oversight memorandums, and information provided to borrowers and the public about the file review and amended consent orders. GAO also interviewed representatives of regulators, servicers, and consultants. To negotiate the $3.9 billion cash payment amount in servicers' amended consent orders, the Office of the Comptroller of the Currency (OCC) and the Board of Governors of the Federal Reserve System (Federal Reserve) considered information from the incomplete foreclosure review, including factors such as projected costs for completing the file reviews and remediation amounts that would have been paid to borrowers. To evaluate the final cash payment amount, GAO tested regulators' major assumptions and found that the final negotiated amount generally fell within a reasonable range. Regulators generally met their goals for timeliness and amount of the cash payments. By December 2013, cash payments of between $300 and $125,000 had been distributed to most eligible borrowers. Rather than defining specific objectives for the $6 billion in foreclosure prevention actions regulators negotiated with servicers, regulators identified broad principles, including that actions be meaningful and that borrowers be kept in their homes. In designing the actions, regulators did not analyze available data, such as servicers' recent volume of foreclosure prevention actions, and did not analyze various approaches by which servicers' actions could be credited toward the total of $6 billion. Most servicers GAO spoke with said they anticipated they would be able to meet their obligation using their existing level of foreclosure prevention activity. In their oversight of the principles, OCC and the Federal Reserve are verifying servicers' foreclosure prevention policies but are not testing policy implementation. Most Federal Reserve examination teams have not begun their verification activities, and the extent to which these activities will incorporate additional evaluation or testing of servicers' implementation of the principles is unclear. Regulators' manuals and federal internal control standards note that policy verification includes targeted testing. Without specific procedures, regulators cannot assess implementation of the principles and may miss opportunities to protect borrowers. Regulators are sharing findings from the file reviews and amended consent order activities among supervisory staff and plan to issue public reports on results, but they have not determined the content of those reports. The file reviews generally confirmed servicing weaknesses identified by regulators in 2010. Regulators are sharing information among examination teams that oversee servicers, and some regulator staff GAO spoke with are taking steps to address weaknesses identified. Regulators also have promoted transparency by publicly releasing information on the status of cash payments. However, these efforts provided limited information on the processes used, such as how decisions about borrower payments were made.
Federal internal control standards and GAO's prior work (GAO-03-102 and GAO-03-669) highlight the importance of providing relevant information on the processes used to obtain results. According to regulators, borrowers could obtain information from other sources, such as the payment administrator, but information on how decisions were made is not available from these sources. In the absence of information on the processes, regulators face risks to public confidence in the mortgage market, the restoration of which was one of the goals of the file review process. OCC and the Federal Reserve should define testing activities to oversee foreclosure prevention principles and include information on processes in public documents. In their comment letters, the regulators agreed to consider the recommendations.
DOD is historically the federal government’s largest purchaser of services. Between 2001 and 2002, DOD’s reported spending for services contracting jumped almost 18 percent to about $93 billion. In addition to the sizeable sum of dollars involved, DOD contracts for a wide and complex range of services, such as professional, administrative, and management support; construction, repair, and maintenance; information technology services; research and development; medical services; operation of government- owned facilities; and transportation, travel, and relocation. In each of the past five years, DOD has spent more on services than it has on supply and equipment goods (that includes contracting for ships, aircraft, and other military items) (see figure 1). Despite this huge investment in buying services, our work—and the work of the DOD Inspector General—has found that DOD’s spending on services is inefficient and not being managed effectively. In fact, we have identified overall DOD contract management as a high-risk area, most recently in our Performance and Accountability Series issued this past January. Responsibility for acquiring services is spread among individual military commands, weapon system program offices, or functional units in various defense organizations, with limited visibility or control at the DOD or military-department level. Too often, requirements are not clearly defined; competition is not adequately pursued; rigorous price analyses are not performed; and contractors’ performance is not sufficiently overseen. Information systems that provide reliable data and are capable of being used as management tools are lacking, and DOD has established few enterprisewide contracting-related performance metrics. Further, DOD lacks a strategic plan to identify and prioritize future service contracting-related efforts for better management. Seeking longer-term remedies to bring about sorely needed reform, the Congress has passed legislation to direct DOD to adopt best practices used by leading companies and to achieve significant savings through improved management approaches for services contracts. The National Defense Authorization Act for Fiscal Year 2002 directs DOD to improve its management structure and oversight process for acquisition of services. One of the law’s aims is to prompt DOD to undertake a comprehensive spend analysis of its services contracts. This analysis is intended to provide DOD the basis for expanding its use of cross-functional commodity teams to leverage its buying power, improve the performance of its services contractors, organize its supplier base, and ensure that its dollars are well spent. Moreover, expecting that DOD could achieve significant savings without any reduction in services, the legislation also establishes savings goals that DOD should achieve by employing commercial best practices and effective management. In addition, Congress reduced the amounts appropriated to DOD in fiscal years 2002 and 2003 by a total of $2.5 billion to reflect savings from business process reforms in the procurement of services. Increasingly, private sector companies have been purchasing a wide range of services from outside suppliers at a cost rising at an average of 3.5 percent a year. 
The leading companies we interviewed—IBM, ChevronTexaco, Bausch & Lomb, Delta Air Lines, and Dell—reported between $92 billion and $94 billion in combined annual procurement spending for goods and services in 2001, and they use a large part of their purchasing dollars to buy services (see table 1). As service acquisition costs have increased, companies have sought to reduce them by taking a strategic approach, starting with the use of spend analysis processes to provide the necessary data. A strategic approach pulls together participants from a variety of places within an organization who recommend changes to a company’s personnel, processes, structure, and culture that can constrain rising acquisition costs. These changes (often referred to as “strategic sourcing”) can include adjustments to procurement and other processes such as instituting enterprisewide purchasing of specific services; reshaping a decentralized process to follow a more center-led, strategic approach; and increasing the involvement of the corporate procurement organization, including working across business units to help identify service needs, select providers, and manage contractor performance. A critical component of an effective strategic approach is a comprehensive spend analysis program. An initial spend analysis permits company executives to review the total dollars spent by a company each year to see how much is spent, what was bought, from whom it was bought, and who is purchasing it. This analysis thus identifies where numerous suppliers are providing similar services—and at varying prices—and where purchasing costs can be reduced and performance improved by better leveraging buying power with the right number of suppliers to meet the company’s needs. Overall, spend analysis permits companies to define the magnitude and characteristics of their spending, track emerging market spending, understand their internal clients and supply chain, and monitor spending with diverse suppliers for socioeconomic business goals. Spend analysis is an important driver of strategic planning and execution, and it allows for the creation of lower-cost consolidated contracts at the local, regional, or global level. At the same time, as part of a strategic sourcing effort, spend analysis allows companies to monitor trends in small and minority-owned business supplier participation in order to address the proper balance with equally important corporate supplier diversity goals. Studies have reported significant cost savings in the private sector, with some companies achieving reported savings of 10 percent to 20 percent of their total procurement costs through the use of a strategic approach to buying goods and services. A recent Purchasing Magazine poll finds that companies employing procurement best practices—including employing effective spend analysis processes—are routinely delivering a 3 percent to 7 percent savings from their procurement costs. Research by A.T. Kearney, Inc., suggests that, if all companies using procurement best practices to some extent matched the savings rates of the leading companies, total savings could reach as much as 41 percent more than the $13.5 billion achieved in 2000. The leading commercial companies we studied report achieving and expecting to achieve billions of dollars in savings by developing companywide spend analysis programs and services contracting strategies, as shown in table 2. 
Although the financial and other results of spend analysis clearly are worth the effort, initially setting up these programs can be challenging, according to research organizations and our interviews with company executives. Companies have experienced problems accumulating sufficient data from internal financial systems that do not capture everything a company buys or that are used by different parts of the company but are not connected to one another. Because simplified data may not exist or be available, companies have frequently been unsure who their buyers are and have had to contend with databases that include listings of items and suppliers that in reality are identical to each other but are stored under different names. Companies also found that existing databases have not captured anywhere near enough details on the services for which vendors are being paid. Despite these challenges, companies that developed formal, centralized spend analysis programs found that they have been able to resolve their problems over time and go on to engage in effective spend analysis on a continuous basis through the use of five key processes, according to our review of research organizations' findings and interviews with company executives. The processes involve automating, extracting, supplementing, organizing, and analyzing data. Building the foundation for a thorough spend analysis involves creating an automated information system for compiling spending data. The system routinely extracts vendor payment and related procurement data from financial and other information systems within the company. The data are then automatically compiled into a central data warehouse or a spreadsheet application, which is continually updated. Most of the automated spend analysis systems currently in use were developed in house, although some companies have hired third-party companies for expertise and technology. The data are primarily extracted from accounts payable financial systems and reviewed for completeness. Accounts payable data can be voluminous and very detailed. Companies process large numbers of vendor invoices for payment each year, and each of those must be examined by their spend analysis systems. When necessary, the accounts payable data are supplemented with other sources, such as more detailed purchase card data obtained from external bank-card vendors' systems or other information, such as suppliers' financial status and performance information. Companies must obtain as much information as possible from both internal and external sources to gain a complete understanding of their spending for services contracts. Data files must be accurate, complete, and consistent. The data are subjected to an extensive review for accuracy and consistency, and steps are then taken to standardize the data in a common format, which involves the creation of uniform purchasing codes. The data are typically organized into comprehensive categories of suppliers and commodities that cover all of the organization's purchases. Simultaneously, commodity managers, councils, or teams are established to access and analyze the information on an ongoing basis, using standard reporting and analytical tools. Each group is responsible for one or more commodities, which may also include responsibility for a number of sub-categories. Once the spending data have been organized and reviewed, companies use the data as the foundation for a variety of ongoing strategic efforts.
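As a concrete illustration of the extract, organize, and analyze steps just described, the short Python sketch below reads accounts payable records, normalizes supplier names, maps uniform purchasing codes to commodity categories, and totals spending by category and supplier. The file layout, field names, purchasing codes, and category labels are hypothetical examples for illustration, not any particular company's data or system.

```python
"""Minimal spend analysis sketch: standardize accounts payable records, map
them to commodity categories, and aggregate spending by category and supplier.
All field names, codes, and categories are hypothetical."""

import csv
from collections import defaultdict

# Hypothetical mapping from uniform purchasing codes to commodity categories.
COMMODITY_CATEGORIES = {
    "7210": "Temporary technical services",
    "8110": "Legal services",
    "8420": "Facilities maintenance",
}

def normalize_supplier(name: str) -> str:
    """Collapse common name variants so one supplier is not counted several times."""
    cleaned = name.upper().strip().rstrip(".")
    for suffix in (" INCORPORATED", " INC", " CORPORATION", " CORP", " LLC"):
        if cleaned.endswith(suffix):
            cleaned = cleaned[: -len(suffix)].strip()
    return cleaned

def aggregate_spend(path: str):
    """Read rows with supplier, purchasing_code, and amount columns and total
    spending by commodity category and normalized supplier name."""
    spend = defaultdict(lambda: defaultdict(float))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            category = COMMODITY_CATEGORIES.get(row["purchasing_code"], "Uncategorized")
            supplier = normalize_supplier(row["supplier"])
            spend[category][supplier] += float(row["amount"])
    return spend

def report(spend):
    """Print each category's total and its suppliers ranked by spending."""
    for category, suppliers in spend.items():
        total = sum(suppliers.values())
        print(f"{category}: ${total:,.0f} across {len(suppliers)} suppliers")
        for supplier, amount in sorted(suppliers.items(), key=lambda kv: -kv[1]):
            print(f"  {supplier}: ${amount:,.0f}")

if __name__ == "__main__":
    report(aggregate_spend("accounts_payable.csv"))
```

In practice, the name-cleansing rules and the purchasing-code mapping are the difficult part; once the data are standardized, rolling spending up by category and supplier is straightforward arithmetic.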
The following company profiles illustrate significant aspects of the spend analysis and strategic-sourcing processes. Each profile begins with a description of the savings targets the company has set, achieved, and expects to achieve in the future. This is followed by a discussion of the difficulties the company experienced before implementing spend analysis; the components of its spend analysis system—including how it extracts, supplements, organizes, and analyzes its data; an example drawn from company practice of a successful application of spend analysis; and how the company expects to keep improving its system over time. Despite the uniformity of this framework, these companies are not identical in the manner that they implement spend analysis or strategic sourcing. Some have more mature systems than do others, while some have strengths or creativity demonstrated in specific aspects of the process. Each, however, has been cited by procurement and industry specialists as a role model for procurement and spend analysis, and our interviews and subsequent analysis have borne that out. Year after year, IBM’s global procurement organization focuses on delivering a sustained competitive advantage across its entire portfolio of purchases, which totaled $42 billion in 2001. IBM’s procurement transformation began in 1994 and continues to evolve. As a result, IBM reports having achieved significant efficiencies and globally leveraged its spending through strategic sourcing to reduce the number of suppliers and save hundreds of millions of dollars. In the beginning of its transformation, IBM lacked sufficient knowledge on what it was spending across the enterprise. Company buyers were calling the same items and suppliers by different names and being charged different prices for the same product or service. The company had disparate accounts payable systems, and the procurement organization was unable to gather easily a consolidated view of spending with IBM suppliers. Aggregated data were unavailable, and the linkage between procurement and accounts payable was inadequate for leveraging the company’s buying power. To launch a comprehensive spend analysis, IBM had to address four major challenges: (1) linking its disjointed legacy systems, (2) investing in a single-enterprise resource-planning system, (3) establishing uniform naming conventions for suppliers, goods, and services, and (4) creating a single procurement management system to support a global process. To address these challenges, IBM developed an extensive “end-to-end” procurement system, which includes a paperless process for requisitions and purchase orders, electronic linkages to suppliers, a worldwide accounts payable system that receives and processes all suppliers’ invoices, and a centralized spend analysis program built around an automated business data warehouse for efficiently extracting accounts payable and other enterprise spending data in a common format. Initially, IBM’s data management system did not support aggregating all of the accounts payable and other data to support management decision making. Recognizing this situation, IBM quickly responded by implementing a centralized global business data warehouse to facilitate decision making based on accounts payable and other data covering the entirety of IBM’s purchases. IBM’s global procurement organization has used spend analysis to establish a substantial level of control by the company’s 31 “commodity councils”. 
The councils analyze the spending data in order to meet the needs of IBM groups worldwide and to enter into deals with suppliers by leveraging IBM's total buying power to gain proper volume discounts. Before 1995, IBM's decentralized buyers controlled only 45 percent of the company's purchasing; centralized councils now control almost 100 percent. Although IBM business units initially found it difficult to give up decentralized control over buying to the global procurement organization, that organization used spend analysis presentations to demonstrate the savings that were possible and to achieve buy-in to the new purchasing process while being responsive to business units' needs. IBM's spend analysis approach also supplements information from internal accounts payable with business intelligence data on suppliers' businesses and market status from an outside party. This information is part of the spend analysis process used to create up-to-date profiles on IBM's top suppliers. IBM spend analysis also integrates external information on average prices paid in the market in order to measure the company's strategic-sourcing performance in achieving a competitive advantage through its procurement processes. IBM works with third-party consultants to obtain credible market intelligence in order to determine the “best in class” price for a given commodity and whether or not IBM is obtaining the lowest market prices from its suppliers. IBM's global procurement organization created uniform purchasing codes and upgraded data entry processes for accounts payable in order to organize the spend analysis into categories of products and services commodities that could be leveraged for strategic-sourcing purposes. For example, IBM's procurement data, which include related accounts payable data, are organized under 31 broad categories that correspond with the commodity councils. Each category encompasses a number of subcommodities that cover the company's production-related services and general procurement. For example, one high-level services procurement grouping is temporary technical services—a multi-billion dollar annual spending category for IBM—which includes eight subcommodities, such as temporary programming, systems engineering, technical writing, and systems help-desk support services. Currently, the councils use spend analysis to support their negotiations with suppliers and to work with internal business units in order to bring the best value to bear. For example, the technical services commodity council relied on spend analysis to carry out a strategic-sourcing effort. The council's analysis revealed that the company was spending billions annually for temporary technical services, that its hiring process was taking 10 days on average, and that multiple suppliers were sending in candidate resumes. As a result of the council's effort, a centralized Web-based hiring system was developed internally for sourcing external technical services. Requesters can go online and select candidates from a database, conduct interviews, and submit requisitions, reducing the hiring process to less than 3 days. Costs were reduced by a reported $40 million in 2001 as a result of the commodity council's prenegotiating various skill payment rates with two-thirds fewer suppliers. In summary, IBM has implemented a number of strategic enhancements to its global purchasing approach.
Ongoing enhancements, including corporate spend analysis capabilities, will focus on deeper integration of the procurement process into the company’s supply chain management aimed towards a new level of global buying effectiveness. IBM is making changes to exploit greater electronic procurement capabilities and to consolidate purchase order processing and procurement support services in centralized locations around the world. Such changes are intended to remove administrative workload from the commodity councils, allowing them to focus on management of suppliers, internal customers, and IBM costs. ChevronTexaco’s phased approach to the strategic sourcing of its entire procurement spending is expected to result in savings of at least $300 million a year by 2003 and $1.3 billion a year after 2005. The company’s annual spending on procurement is currently between $16 billion and $18 billion. The company’s procurement savings goals, established after the two historically decentralized companies merged in 2001, are based on spend analysis. Before the merger, each separate company had difficulty understanding its own spending practices. Chevron had a limited number of personnel working on the task—its purchasing unit had only a few analysts who laboriously collected, reviewed, and organized all the accounts payable data after issuing data calls to various business units. The information collected was consolidated in large spreadsheet binders, but these did not capture all company spending or details on suppliers’ diversity of interest to corporate leaders. Chevron lacked the data to negotiate effectively with suppliers, who knew more about what was being spent and what business they had with Chevron. Texaco also had difficulty understanding its supplier base and what it was buying because its accounts payable data were stored in 14 systems, suppliers’ names were not standardized in those systems, and not enough details were captured on the goods or services for which vendors were being paid. Once the companies merged, ChevronTexaco adopted, as its global procurement focus, the development of accurate, detailed information on spending. ChevronTexaco’s spend analysis system now automatically extracts accounts payable data on most purchased goods and services from these systems. For greater precision, ChevronTexaco supplements the accounts payable data with external information and internal expertise to obtain more detailed insight into the products and services being bought and the vendors that supply them. The data are organized into three dozen broad categories, including 250 products and services, which cover most of the company’s annual spending. ChevronTexaco’s global procurement leadership and several decision support staff (who work with a few dozen cross-functional commodity teams) analyze the spending data. These teams link the procurement organization, strategic-sourcing processes, and business units by collaboratively using the spending data to identify, plan, and recommend sourcing projects for goods and services, including capital projects. For example, three consulting and professional services commodity teams are responsible for analyzing data related to spending for temporary accounting staff, financial and information technology management, and legal and technical services. An initial commodity team analysis of the consulting and professional services’ spending data showed close to $600 million spent on consulting services and many subcategories that needed to be identified. 
Further spend analysis showed that the company was using 1,600 suppliers, that buying was highly fragmented with little standardization, and that consultant contracting was not sufficiently competitive. The spend analysis identified five consulting services supply markets for separate consideration—financial, information technology, general management, legal, and technical. The team discovered that most of the five were ripe for competition, that some were reducing staff and seeking larger client bases, and that some were laying off employees and going through a slump. After taking into account internal business unit readiness for supplier consolidation, the team finally recommended separate strategic-sourcing projects in information technology, legal, and general management consulting. ChevronTexaco estimates net savings to be between 8 percent and 10 percent of the company's total spending on those three consulting and professional services subcategories. ChevronTexaco uses spend analysis to document and report direct savings that result from negotiated price reductions, volume discounts, and leveraged discounts. Spend analysis supports ChevronTexaco's active supplier diversity program by permitting strategic-sourcing teams to track the company's spending with small and diversely-owned businesses and identify opportunities to attract competitive offers from such suppliers. Analysis of the spending data has also been used to meet a wide range of the company's strategic goals, including identifying the right stakeholders for participation in a global procurement organization coordinating key business areas. To win support, procurement executives used spend analysis to promote internally the need for procurement reengineering to help business units reduce costs without sacrificing operations, safety, and services. Spend analysis also underpins the development of performance measures used throughout the company's standardized procurement processes. ChevronTexaco plans further improvements to its spend analysis system. The company is investing in a third party's suite of electronic procurement applications. One of the applications is an automated spend analysis tool that will more quickly extract even more detailed data from the company's financial system. Bausch & Lomb's strategic sourcing effort saved the company a reported $20 million a year from 1998 through 2001, and is anticipated to save an additional $11 million each year through 2005. These savings were generated through a one-third reduction in the number of Bausch & Lomb's suppliers from 20,000 to 13,500 and negotiation of discounts on the volume of business with the remaining suppliers. In 1997, Bausch & Lomb was having difficulty coordinating information from multiple internal information systems as it attempted to understand what it was spending. To overcome this problem, the company contracted with a consultant during the first 2 years of its effort to create and automate master vendor files through a central database and directly provide spend analysis support. Bausch & Lomb's spend analysis—which focused on developing a comprehensive database and targeting categories with the most suppliers and the most spending—became the foundation of its strategic sourcing effort.
To perform its spend analysis, Bausch & Lomb extracted accounts payable data from more than 50 internal systems and sent the data to the consultant to review and correct the records to eliminate duplication and identify “families” of suppliers connected through corporate ownership that could be used to negotiate better terms. The consultant also used its technology tool to compile and automate the analysis of Bausch & Lomb’s spending data. Spending data were standardized by using two publicly available classification systems, allowing for comparisons to be made between vendor identifiers and the affiliated commodity codes. These internally available data were supplemented with other information from the consultant’s business intelligence database that addressed suppliers’ risk and status as minority or women-owned businesses and with purchase card expenditure data. Bausch & Lomb then organized the data into 50 broad categories of products and services, each of which was subdivided into 4 to 12 commodities. Responsibility for the categories was divided among several headquarters commodity managers—including those specializing in information technology, pharmaceuticals, and business processes. The commodity managers analyzed the spending data and sought input from business units to develop strategic sourcing strategies and business plans for each of the commodities to combine the company’s total buying power and rationalize the supplier base. The commodity managers now oversee the corporate procurement of specific goods and services across all the business units. For example, when the business process commodity manager applied spend analysis to Bausch & Lomb’s use of temporary personnel services, the outcomes included the opportunity to reduce the number of suppliers, lower costs, and achieve other streamlining benefits. Business units had been using purchase orders to obtain temporary services, and the spend analysis revealed that although 60 suppliers were being used, one national company was the top temporary services provider. This knowledge enabled Bausch & Lomb to negotiate a 17 percent reduction with that company for temporary services by consolidating the supplier base from 60 companies to 1. The remaining temporary services company agreed to this reduced rate because it was guaranteed a greater volume of individual purchase orders and because Bausch & Lomb’s business units were required to use that preferred company unless they had a need that it could not meet. Bausch & Lomb’s ongoing spend analysis of this $13 million commodity also enables it to monitor business unit compliance with the contract to use the preferred company and achievement of savings targets. Bausch & Lomb’s procurement organization now performs and regularly updates the spend analysis with support from the consultant. Each year, Bausch & Lomb refreshes its spend analysis data with new supplier information obtained from the consultant. The annual spend analysis examines how much its divisions are spending on specific commodities to determine its potential bargaining power with its suppliers and to review the risks of existing suppliers. Its commodity managers identify which strategic sourcing projects to tackle based on the dollar amount spent, the number of suppliers, the potential cost savings, and opportunity to consolidate suppliers. The company’s annual updating of the spending data gives enough information to focus strategic efforts in the right direction. 
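The temporary personnel services example above reduces to simple arithmetic: total the commodity's spending across suppliers, identify the dominant incumbent, and apply the negotiated rate reduction to the consolidated volume. The brief Python sketch below works through that calculation; the supplier names and spending split are invented, and the roughly $13 million commodity size and 17 percent rate reduction are used only to echo the figures cited above.

```python
"""Illustrative consolidation-savings estimate for a single commodity.
The supplier breakdown is hypothetical; the totals loosely echo the
temporary services figures discussed above."""

# Hypothetical annual spending by supplier within the temporary services commodity.
supplier_spend = {
    "National Staffing Co": 5_200_000,
    "Regional Temps LLC": 1_100_000,
    "All other suppliers (rolled up)": 6_700_000,
}

NEGOTIATED_RATE_REDUCTION = 0.17  # discount negotiated with the preferred supplier

total_spend = sum(supplier_spend.values())
preferred = max(supplier_spend, key=supplier_spend.get)

# Assume all volume is consolidated to the preferred supplier at the reduced rates.
estimated_savings = total_spend * NEGOTIATED_RATE_REDUCTION

print(f"Commodity spend: ${total_spend:,.0f}")
print(f"Preferred supplier (largest incumbent): {preferred}")
print(f"Estimated annual savings at a {NEGOTIATED_RATE_REDUCTION:.0%} rate reduction: "
      f"${estimated_savings:,.0f}")
```

Ongoing spend analysis then serves the monitoring role described above, tracking whether business units actually route their purchase orders to the preferred supplier and whether the savings target is being met.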
To enhance its spend analysis, Bausch & Lomb is also working with its consultant to start extracting more detailed data from its general ledger systems. Spend analysis has been a key element in Delta's transformation of its more than $7 billion procurement operation and its adoption of a strategic sourcing process. Since 2000, the company's reported payback has been rapid—more than $200 million saved through strategic sourcing projects and other supply chain management transformation efforts. Almost 3 years ago, Delta's supply chain management organization faced challenges in its ability to aggregate purchasing data due to the presence of multiple legacy systems and a lack of data integrity. In July 2000, those legacy systems were replaced with a new core financial system, which was also useful when the supply chain management organization decided to launch its current spend analysis program. Delta's spend analysis program is based on the automated extraction of accounts payable records from its core financial system. The extracted data are placed in a data warehouse and then compiled in an integrated, off-the-shelf software tool (accessible through the company's intranet) that is used to develop spend analysis reports. All company managers and supply chain management staff can access the company's spend analysis reporting tool. The internal financial data are supplemented with purchase card spending data, totaling about $75 million per year, from the company's bank card vendor. In addition, Delta worked with a third party to validate the information received from small, minority, and woman-owned businesses to ensure that supplier diversity information was accurately coded in its core financial system. Delta organized its spending data to correspond with its six broad purchasing areas: fuel and airport services, corporate operations (such as finance and human resources), technical operations (such as aircraft maintenance), marketing and in-flight services, corporate real estate, and fleet planning and acquisitions. Those six purchasing areas are responsible for purchasing goods and services in more than 270 commodities, such as consulting, legal, and temporary services. Delta's supply chain management organization worked with a team to create the commodity codes following a review of the goods and services the company buys. These codes have made it possible to organize accounts payable and other data by commodity to support the company's initial spend analysis, a key part of the first two steps in its strategic-sourcing process. Beginning in September 2000, Delta's supply chain management organization took steps to realize the value that a transformation could bring. Key elements of this transformation included the implementation of a strategic sourcing process; establishment of cross-functional teams; and expansion of the supply chain management organization's scope of involvement in company spending. Commodity teams began analyzing the spending data to obtain an upfront understanding of the supplier base, the company's buying power, and the estimated savings from consolidated buying. In mid-2002, commodity teams across Delta's purchasing areas were actively managing 58 cost-saving projects developed through spend analysis and reported operating savings of $82.2 million from projects already completed that year.
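The prioritization step that both Bausch & Lomb's commodity managers and Delta's commodity teams describe, ranking commodities by dollars spent and supplier fragmentation, can be sketched as follows. This is an illustrative example only; the commodity names, spending figures, and supplier counts are hypothetical and do not reproduce either company's actual data.

```python
# Hypothetical commodity-level totals after accounts payable records have
# been coded to commodities; all figures are illustrative only.
commodities = [
    {"commodity": "temporary services",   "spend": 13_000_000, "suppliers": 60},
    {"commodity": "IT contract services", "spend": 16_000_000, "suppliers": 62},
    {"commodity": "office supplies",      "spend":  2_500_000, "suppliers": 35},
    {"commodity": "legal services",       "spend":  9_000_000, "suppliers": 18},
]

# Rank candidates by annual spend and supplier fragmentation, the same two
# signals the commodity managers describe using to pick sourcing projects.
def sourcing_priority(c):
    return (c["spend"], c["suppliers"])

for c in sorted(commodities, key=sourcing_priority, reverse=True):
    avg_per_supplier = c["spend"] / c["suppliers"]
    print(f'{c["commodity"]:22s} spend=${c["spend"]:>12,} '
          f'suppliers={c["suppliers"]:>3d} avg/supplier=${avg_per_supplier:>10,.0f}')
```

A team would then weigh these rankings against potential cost savings and business unit readiness before committing to a sourcing project.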
Delta’s supply chain management organization also uses spend analysis to track and report the company’s spending with small business and minority- and women-owned businesses in order to measure the outcome of the teams’ strategic-sourcing projects in terms of the company’s supplier diversity goals. An example of Delta’s successful outcomes with spend analysis is its information technology commodity team’s strategic sourcing effort in 2001. The team’s analysis revealed the company was using more than 60 different information technology contract services suppliers and purchasing approximately $16 million in external services. The requisition processes varied within each of the business units; limited formal metrics were in place for managing supplier performance; and the existing contracts’ pricing structures did not facilitate cost reduction efforts. An external industry analysis indicated that Delta could benefit by bidding information technology contract services given that the supplier market was hard hit by the downturn in the economy and that a surplus of high quality information technology service suppliers existed. Using this knowledge, the commodity team, which included representatives from the company’s human resources and technology business units, developed a new consolidated-proposal request for external services and used an on- line reverse auction to complete the sourcing effort. The new contracts resulted in reported annual savings of $3 million and reduced the number of suppliers from 60 to 6 companies—3 of which qualified as diverse-owned businesses. Despite Delta’s accomplishments in spend analysis, challenges remain in obtaining reliable and complete data, and its supply chain management organization is working to improve financial system data integrity and automated reporting to provide the information needed for real-time business decisions. Last year a team was formed to improve the quality of information on suppliers, commodity codes, and buyers. Recommendations on process improvements will be made in 2003, followed by an effort to clean up Delta’s purchase order and contract files. A related team is working to improve the availability of automated reporting from Delta’s off-the-shelf spend analysis reporting tool. The company expects increased accuracy in its spending information will provide greater visibility into buying patterns and enhance strategic sourcing decision making and results. Dell’s earlier success in using spend analysis and strategic sourcing in its manufacturing procurement operations prompted the company to establish a new procurement savings goal of 20 percent from the $3 billion to $4 billion it spends in purchasing of nonmanufacturing services and products. Before 2000, Dell’s spend analysis and strategic-sourcing focused only on production procurement to support its manufacturing operations. The company had no spend analysis program to track general procurement of goods and services needed to support the company’s nonmanufacturing operations. However, once the company decided that general procurement merited the same strategic approach as production procurement, the procurement organization quickly developed a second spend analysis program. Since 2000, Dell’s procurement and finance organizations have worked together on its internally developed spend analysis system, which provides automated on-line reporting and cost analysis of the company’s general procurement purchasing. 
Every month, the system extracts accounts payable records from one of the company's two financial systems for consolidation into the data warehouse used for spend analysis. The consolidated spend analysis reports are supplemented with supplier diversity, business intelligence, and purchase card information obtained from external sources. For example, Dell obtains business intelligence information from an outside party about its suppliers' financial health and uses that independent information to determine what percentage of each supplier's revenue comes from sales to Dell. The company also obtains detailed vendor data for purchases made under the corporate purchase card program. However, the supplemental business intelligence and purchase card information must be separately analyzed vendor by vendor, item by item, and compared with the consolidated reports from the accounts payable information. The need to organize the accounts payable and purchase card data for spend-analysis and strategic-sourcing purposes required the procurement organization to identify 15 high-level categories, each encompassing many products and services commodities. This involved research with business units familiar with Dell's vendors in order to "tag" each vendor according to the commodity being supplied. Consulting is one example of a high-level category, and it encompasses consultant services such as information technology, electronic commerce, financial, legal, and Dell technology. New suppliers are similarly tagged to keep the spend analysis system updated. One current limitation to Dell's tagging methodology is that some vendors do not fit neatly under a single commodity. Dell's system organizes purchase data for those vendors under a miscellaneous category, and the staff regularly analyze the data to later sort spending with those suppliers into the appropriate commodity. Dell's procurement organization has four senior managers who are responsible for several commodity teams in the areas of marketing and communications, corporate services, and operations. In these teams, commodity managers partner with the primary business owners to manage strategic sourcing and other procurement activities in specific spending areas. Each commodity team uses spend analysis to identify, prioritize, and leverage the company's combined buying power with suppliers in order to reduce costs and improve supplier performance. As an example of a successful outcome using spend analysis, one of the senior managers worked with the customer services team on a strategic-sourcing project to staff support call centers and provide certified technicians and related on-site services for Dell computer hardware repair. The spend analysis revealed that Dell's business units were spending more than $200 million annually on an ad-hoc basis with eight suppliers for the same services. The team discovered that it was difficult to manage eight suppliers and expensive to have each provide the entire scope of services on a worldwide basis. The new sourcing strategy cut the number to four suppliers and provided a volume price discount, efficiencies in supplier management, and capacity to support Dell's growing sales in the U.S. and overseas. Dell required two of those suppliers to provide a global array of services and two to work only in the U.S. In taking this action, Dell also successfully met its supplier diversity objectives by awarding two of the new contracts to diversely-owned companies.
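The tagging approach described above can be illustrated with a short sketch: each vendor carries a commodity tag, untagged vendors fall into a miscellaneous bucket, and spending is totaled by category so that the miscellaneous bucket can be worked down over time. The vendor names, tags, and amounts below are hypothetical and are not Dell's actual categories or data.

```python
# Hypothetical vendor-to-commodity tags; vendors not yet researched carry
# no tag and are routed to a miscellaneous bucket for later review.
vendor_tags = {
    "Initech Legal LLP": "consulting/legal",
    "Hooli Managed Services": "consulting/information technology",
    "Vandelay Industries": None,  # not yet tagged
}

payments = [
    ("Initech Legal LLP", 75_000),
    ("Hooli Managed Services", 240_000),
    ("Vandelay Industries", 12_500),
]

MISC = "miscellaneous"
by_category = {}
for vendor, amount in payments:
    category = vendor_tags.get(vendor) or MISC
    by_category[category] = by_category.get(category, 0) + amount

print(by_category)

# Staff would then work the miscellaneous bucket vendor by vendor,
# re-tagging each one to the appropriate commodity in a later refresh.
untagged = [v for v, tag in vendor_tags.items() if tag is None]
print("vendors needing research:", untagged)
```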
Dell procurement officials plan continued improvements to the spend analysis program, such as automating the production of analytic reports and generating reports that focus on detecting corporate relationships among suppliers. Enhanced analysis and reporting of relationships can be used to leverage Dell's buying power for additional savings with related suppliers. DOD is in the very early stages of setting up a spend analysis program. The agency's leaders have made a commitment to improve how DOD acquires services and to adopt best commercial practices. Although these are the right first steps, the agency has yet to emulate the best practices of spend analysis to the same extent as the private sector. DOD also has not yet pursued more strategic approaches like reorganizing its procurement processes under a more centrally led management structure. DOD's initial actions include issuing new policy in May 2002—in response to our work and the 2002 national defense authorization legislation—to elevate major purchases of services to the same level of importance as the purchase of major weapon systems. In February 2003, the Deputy Secretary of Defense tasked a new team to complete, by September 2003, a pilot spend analysis of services acquisition data across DOD and to determine if larger scale efficiencies and savings could be achieved compared with its current decentralized procurement environment. DOD requested proposals from interested vendors with commercial spend analysis experience to provide contract support to the DOD team. Pilot projects associated with the spend analysis will be completed by September 2004. Information we obtained during preproposal discussions with prospective vendors suggests that the DOD pilot project may not engage the full range of spend analysis best practices as have the private sector companies we interviewed. (See table 3.) Although DOD does seek to include basic elements of the key private sector spend analysis best practices in the prospective pilot, its efforts fall short of the private sector standard. Its efforts at automation involve only a one-time requirement, not the repeatable process found in private companies. Efforts to extract data are restricted to those taken from two centrally available databases on services contract actions (excluding research and development) in excess of $25,000, a limitation due to the agency's self-imposed 90-day time frame for completing the spend analysis. Although superior data—obtained by the vendor from other internal and external sources with DOD's help—may be used to supplement what has been extracted, DOD cannot guarantee that it will be able to provide what the vendor may request. The scope of the pilot is also relatively limited, compared to the more expansive private sector programs. Ten service category business cases are being considered, and procurement savings strategies will be tested for at least five categories. If time permits, DOD's pilot manager told us that more than five categories could be tested. While DOD expects to learn from this pilot spend analysis, only a small number of procurement actions will result from it. As DOD moves forward to adopt commercial best practices for service acquisitions on the basis of its pilot, the scope of its strategic approach may be limited to smaller organizational units rather than extending to a major, more centralized reorganization of DOD's procurement processes.
To justify its "wait and see" approach with a pilot, DOD cites several factors that set it apart from commercial companies. These include its much larger and more complex services supplier base; its decentralized acquisition environment, with many procurement offices spread across the military services and defense agencies; and the lack of a single financial data system for procurements. According to DOD, it must also fulfill numerous socioeconomic goals for contracting with small and diversely-owned suppliers and has more regulatory and budgetary constraints on the acquisition process. In citing these factors in advance of the pilot, DOD is being cautious about viewing procurement as a strategic (i.e., DOD-wide) process that simplifies acquisitions, saves money, and increases the quality of purchased services, compared to its current tactical process of numerous individual contract actions. Once the pilot spend analysis is complete, DOD faces the challenge of making the best use of the results. It needs to decide what long-term changes are required to bolster the current organizational structure and processes to foster a more strategic approach to acquiring services. The extent to which DOD makes these changes will determine its success in meeting congressional expectations for major management reform of—and substantial savings from—the procurement of services. As we reported last year, DOD's size and complex service needs may lead it to pursue different approaches within the defense agencies, military departments, and individual commands. In this regard, private sector experience suggests that DOD must start with spend analysis to identify and prioritize specific contracted services and then follow through with organizational and process changes, such as the establishment of full-time dedicated cross-functional teams or commodity managers, to improve the coordination and management of key services. As DOD attempts to reengineer its approach to purchasing services, it faces challenges similar to those faced by private sector organizations. For example, DOD is subject to statutory and regulatory goals for contracting with small businesses and other socioeconomic categories, such as woman-owned small businesses and small disadvantaged businesses, that may constrain it from consolidating numerous smaller contracts into larger ones. Consolidating contracts in this way is an approach often taken by the companies we studied. Those constraints must be considered in the business cases to be developed by the spend analysis vendor. The experience of private sector companies—which also are keenly aware of the importance of small and diversely-owned business participation as suppliers—may offer DOD valuable insights into addressing this challenge. Companies we studied use spend analysis to carefully and successfully balance supplier consolidation and cost-savings strategies with corporate supplier diversity goals of equally high priority. Companies' commodity teams often include supplier diversity specialists, who propose concrete steps for considering small, minority-, and woman-owned businesses throughout the strategic-sourcing process. Like the companies, DOD can use spend analysis to understand its current level of supplier diversity on a commodity-by-commodity basis and to balance cost-saving strategies and socioeconomic goals.
Spend analysis can also support DOD's efforts to comply with small business requirements to review potential bundling of procurement requirements in order to determine if the bundling is necessary and justified. DOD cites its lack of a single financial data system for procurements as another challenge. Because of the pilot's 90-day time frame for completing the initial spend analysis, DOD acknowledges that the data it will use may be less complete than what is used by business, and it cannot guarantee that it will be able to provide data from other sources that its vendor may request to perform the first DOD-wide spend analysis. DOD is instead asking the vendor to make a recommendation on the feasibility of using other DOD financial systems—such as systems used to process invoices and pay commercial vendors for goods and services bought by DOD organizations—that might be considered for use in the future. Although DOD will need to consider how existing problems in its financial management systems could affect spend analysis and services-contracting initiatives, we believe a more businesslike approach is possible. The companies we interviewed faced similar challenges in accumulating accounts payable and other internal data that were highly fragmented across multiple financial and management systems and not easily accessible. However, the companies automated the extraction of accounts payable and other internal data and made the spend analysis process repeatable and more efficient. To see if DOD could engage in similar actions, we discussed this matter with DOD sources and others knowledgeable about DOD and commercial vendor payment systems. Based on these discussions, DOD's systems could provide the type of accounts payable data that companies use and thus could be a data-rich source for DOD spend analysis. In fact, vendor payment data from multiple processing locations are already centrally collected by the Defense Manpower Data Center for auditing and other financial management purposes. Use of these data could reduce DOD's need to extract and organize data for spend analysis efforts by providing a "one-stop shop." DOD is also likely to face resistance to giving up decentralized buying authority, cultural barriers, and other impediments to implementing broad-based management reforms. The companies we studied found several ingredients critical to overcoming such challenges. First, senior management must provide continued support for common services acquisitions processes beyond the initial impetus, since the companies are engaging in long-term efforts. Second, communication has to be seen as vital in educating and keeping staff on board with changes. To achieve buy-in, companies used spend analysis to make a compelling case to business units that reengineering would enhance service delivery and reduce costs. Companies also involved the business units in a new center-led approach by making extensive use of cross-functional commodity teams to make sure they had the right mix of knowledge, technical expertise, and credibility. To cut across traditional organizational boundaries that contributed to the fragmented approach to acquiring services, companies restructured their procurement organizations, assigning them greater responsibility and authority for strategic planning and oversight of the companies' service spending.
Also, companies extensively used metrics—based on spend analysis—to measure total savings and other financial and non-financial benefits, to set realistic goals for improvement, and to document results over time. DOD recently developed new management structures in response to the 2002 national defense authorization requirements to improve practices for the acquisition of services, but the changes are not as far-reaching as those adopted by companies we studied. For example, although the Under Secretary of Defense (Acquisition, Technology, and Logistics) and each of the military departments now have processes for reviewing particular large-dollar or sensitive acquisitions for adherence to competition and other contracting requirements, the reviews are piecemeal and focused on approving individual acquisitions rather than achieving a coordinated approach for managing services contracts. DOD could use spend analysis as a basis for determining how the new management structures can adopt the types of organizational tools and metrics employed in the private sector to foster an enterprisewide strategic approach that would meet DOD's unique requirements. To implement best practices and manage services effectively, DOD must have the right skills and capabilities in its acquisition workforce. This is a challenge given decreased staffing levels, increased workloads, and the need for new skill sets. DOD is engaging in a long-term strategic planning effort to identify the competencies needed for its future workforce. Private sector experience indicates that taking a strategic, integrated, enterprisewide approach can also help DOD address its acquisition workforce challenges. In our study, companies' efforts to reengineer their procurement operations have often been accompanied by acquisition-staffing reductions. The experience has been that using spend analysis and coordinated sourcing processes allows for more efficient use of procurement personnel resources by reducing the number of contracting tasks. Reducing duplication and fragmentation in contracting activities also helps free up limited acquisition workforce resources to perform more strategic business functions, such as acquiring and using knowledge of market conditions and industry trends to better manage fewer suppliers and contracts. While seemingly daunting, each of the challenges to be faced by DOD has been faced and overcome by the private sector companies. Careful observation and analysis of their practices will help the agency to adapt variations and even to create new approaches through which it will be able to reach its savings and strategic targets. Without effective spend analysis, organizations are limited in their ability to understand buying patterns; maximize purchasing power; make informed acquisition and contracting decisions; measure the impact of changes in purchasing costs and supplier diversity; and carry out other planning and management functions for the acquisition of services.
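As a simple illustration of the savings and supplier diversity metrics that companies derive from spend analysis, the sketch below computes savings against a pre-sourcing baseline rate and the share of spending going to diversely-owned suppliers. All rates, hours, supplier names, and diversity flags are hypothetical values chosen for illustration, not figures from any company we studied.

```python
# Illustrative metrics a commodity team might compute from spend data:
# realized savings against a pre-sourcing baseline price and the share of
# spending going to small or diversely-owned suppliers.
baseline_rate = 52.00      # average hourly rate before consolidation
negotiated_rate = 43.16    # rate under the consolidated contract
hours_billed = 250_000

savings = (baseline_rate - negotiated_rate) * hours_billed
savings_pct = (baseline_rate - negotiated_rate) / baseline_rate * 100

# Spend keyed by (supplier, is_diversely_owned).
spend_by_supplier = {
    ("Preferred Staffing Co.", False): 10_790_000,
    ("Smallco Technical Services", True): 1_450_000,
}
total = sum(spend_by_supplier.values())
diverse = sum(v for (name, is_diverse), v in spend_by_supplier.items() if is_diverse)

print(f"savings: ${savings:,.0f} ({savings_pct:.1f}% below baseline)")
print(f"diverse-supplier share of spend: {diverse / total:.1%}")
```

Metrics like these, refreshed with each spend analysis update, are what allow goals to be set and results documented over time.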
Given that DOD's spending on services contracts is approaching $100 billion annually, the potential benefits of overcoming the challenges and using best practices to establish an effective spend analysis program are significant. An effective program can (1) achieve a total-spending perspective across DOD; (2) make the business case for collaboration in joint purchasing rather than fragmented buying; (3) organize an effective management structure to assign accountability and exercise oversight; (4) identify potentially billions of dollars in procurement savings opportunities by leveraging buying power; and (5) identify opportunities to achieve other procurement efficiencies, such as reducing duplication in purchasing, supporting supplier diversity, and improving supplier performance. With the federal government's short- and long-term budget challenges, it is more important than ever that DOD effectively transform its business processes to ensure that it gets the most from every dollar spent. At the same time, DOD's management challenges related to contracting for services will not be resolved overnight. Two common elements that pervade discussions of ways to address DOD's challenges are the need for (1) sustained executive leadership and (2) a strategic, integrated, and enterprisewide approach. In addition, ensuring that these efforts achieve the intended results will require the Congress's continued involvement and support. Such support has already been demonstrated through the 2002 national defense authorization legislation requiring that DOD establish a management structure to enhance the acquisition of services and to collect data on the purchase of services. DOD could use this legislation—and its first spend analysis effort—as the means for taking a more strategic approach to contracting for services and for identifying and achieving substantial savings in the future. To achieve significant improvements across the range of services DOD purchases, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to work with the military departments and other DOD organizations involved in the spend analysis pilot to adopt the effective processes employed by leading companies. Key elements of DOD's approach should address (1) using technology to centrally automate the spend analysis process to make it repeatable; (2) using accounts payable and other internal financial and procurement data to gain a comprehensive and reliable view of spending; (3) supplementing internal data with external information, such as purchase card expenditures and business intelligence, to gain a more complete picture of DOD spending and to refine analysis; (4) reviewing purchase data for accuracy and consistency; (5) organizing the data by commodity and supplier categories in order to identify opportunities to leverage buying power; (6) promoting enterprise collaboration aimed at gaining the best value, including the establishment of cross-functional teams to continue developing strategic-sourcing projects; and (7) presenting relevant spending reports to appropriate decision makers to establish strategic savings and performance goals, assign accountability, and measure results.
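The kind of centrally automated, repeatable extraction of accounts payable and vendor payment data described in the first two elements above might look conceptually like the following minimal sketch: each cycle's payment records are appended to one consolidated table, and the same summary query is rerun after every refresh. The table layout, office names, vendor names, and figures are assumptions made for illustration; they do not describe any actual DOD, DFAS, or Defense Manpower Data Center system.

```python
import sqlite3

# Illustrative only: a repeatable load of centrally collected vendor
# payment records into one consolidated table used for spend analysis,
# rather than a one-time pull.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS spend (
        period TEXT, paying_office TEXT, vendor TEXT,
        commodity TEXT, amount REAL
    )
""")

def load_month(period, records):
    """Append one month of payment records; rerunnable every cycle."""
    conn.executemany(
        "INSERT INTO spend VALUES (?, ?, ?, ?, ?)",
        [(period, *r) for r in records],
    )
    conn.commit()

load_month("2003-06", [
    ("Paying Office A", "Example Support Services", "facility support", 1_250_000.0),
    ("Paying Office B", "Example Support Services", "facility support", 480_000.0),
])

# The same query runs after every refresh to give decision makers an
# updated view of spending by vendor across paying offices.
for row in conn.execute(
    "SELECT vendor, SUM(amount) FROM spend GROUP BY vendor ORDER BY 2 DESC"
):
    print(row)
```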
To ensure that DOD moves forward in a timely manner on its commitment to take a more strategic approach to the acquisition of services, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to develop a plan and a schedule for accomplishing changes in management structure and business processes for contracting for services. The plan and schedule should be based on the results of the spend analysis pilot and should be submitted to the congressional defense committees for consultation and approval as part of the fiscal year 2006 budget submission and justification process. In commenting on a draft of this report, DOD agreed with our findings and conclusions that the commercial best practice of spend analysis is important to the design of a strategic approach to acquisitions and can be used by DOD to achieve substantial savings comparable to those in the private sector. Moreover, DOD concurred with the recommendation to adopt the effective spend analysis processes employed by leading companies—and now intends to automate the process of data collection and analysis to make it repeatable, rather than a one-time effort. However, DOD did not concur with the recommendation to develop a plan as part of its 2005 budget submission process (i.e., early in 2004) to institute changes in management structure and business processes for contracting for services. Rather, DOD contends that ongoing initiatives—including follow-on sourcing projects it anticipates developing after the current spend analysis—may make such changes unnecessary. In addition, DOD stated that developing a plan and schedule for making changes in management structure and business processes before completing the current spend analysis pilot (expected by September 2004) would be premature. As we have recognized since our first report on this matter, DOD's size and complex service needs may lead it to pursue different approaches within the defense agencies, military departments, and individual commands. However, private sector experience suggests that DOD must follow through on its initial spend analysis pilot with organizational and process changes such as the establishment of full-time, dedicated cross-functional teams or commodity managers to improve the coordination and management of key services. The extent to which DOD makes these changes will determine its success in meeting congressional expectations for major management reform of—and substantial savings from—the procurement of services. Moreover, for DOD to change management structure and business processes for services contracting will require sustained leadership at DOD as well as the involvement and support of Congress. Thus, for purposes of accountability and transparency in support of such involvement and leadership, DOD needs to develop a plan for timely changes necessary to implement a more strategic approach to contracting. In response to DOD's concern, we modified the recommendation to allow time for DOD to complete its current spend analysis pilot and use the results to develop a plan. Although we are encouraged by DOD's commitment to undertake the pilot, we firmly believe that once the pilot is complete, DOD needs to make long-term changes to bolster the current organizational structure and processes to foster a more strategic approach to acquiring services. The DOD comments can be found in appendix I.
The Chairman and the Ranking Minority Member, Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, requested that we develop a body of work that examines the practices of leading companies and identifies best practices that could yield benefits to DOD in the acquisition of services. This engagement focused on (1) the best practices of leading companies as they relate to conducting and using spend analysis, and (2) the extent to which DOD can pursue similar practices. To conduct our best practices work, we performed literature searches, reviewed studies related to spend analysis and best practices for services contracting prepared by research and consulting organizations, attended private sector seminars and conferences, and contacted experts in purchasing practices. On the basis of these discussions and analyses, we selected five leading companies that were recognized for their strategic approach to managing services acquisitions. We provided a standard agenda to each company prior to our interviews, and conducted interviews to determine the companies' motivation for undertaking a procurement transformation; corporate strategic goals; the organization and role of the purchasing function; the key processes used for collecting, analyzing, and using spending data—including the use of technology—to be strategic in planning and managing services acquisitions; and performance metrics and accountability. We also asked each company to discuss in more detail a specific service buy that best exemplified the use of spend analysis for making strategic acquisition decisions. In addition, we discussed potential challenges and barriers to employing a spend analysis and subsequent strategic sourcing efforts. After our visits, we provided each company a summary of the information obtained to ensure that we had accurately recorded and understood the information it provided. We provided each company a copy of our draft report for review and comment. The companies we visited were Bausch & Lomb, Rochester, New York; ChevronTexaco Corporation, San Ramon, California; Dell Computer Corporation, Round Rock, Texas; Delta Air Lines, Atlanta, Georgia; and International Business Machines Corporation, Somers, New York. To assess current efforts underway by DOD to improve its enterprisewide knowledge of spending on services contracts, and how DOD can better emulate the best practices learned from these leading companies, we interviewed procurement policy and management officials in the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) and the military departments. To assess the feasibility of using internal accounts payable data similar to the data used in leading companies' spend analysis programs, we interviewed Defense Finance and Accounting Service officials knowledgeable about DOD systems used to process invoices and pay commercial vendors for goods and services supplied to military and other DOD organizations. We also reviewed policy memorandums, guidance, and other documents pertaining to ongoing and planned initiatives that affected service contracting. We discussed with these officials our assessment of the leading companies' approaches and obtained their views on their approaches' similarities and differences. In addition, we discussed potential challenges and barriers to employing the best practices approaches we identified.
Our report summarizes the key elements the companies employed to conduct spend analysis as one part of their strategic sourcing initiatives—in particular as they relate to services acquisitions. We did not verify the accuracy of the procurement costs and benefits the companies reported receiving from their strategic approaches and spend analysis outcomes. Our report is not intended to suggest that we evaluated or endorse all business practices of the companies. Nor is this report intended to suggest that all companies have followed exactly the same approach in achieving similar results. Also, we were limited in our ability to obtain and present some relevant data that companies considered proprietary in nature. We conducted our review from March 2002 to May 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Deputy Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Under Secretaries of Defense (Acquisition, Technology, and Logistics) and (Comptroller); the Director, Office of Management and Budget; and the Administrator, Office of Federal Procurement Policy. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please call me at (202) 512-4841, or David Cooper at (202) 512-4125. Major contributors to this report were Lily Chin, Ralph Dawn, Carolyn Kirby, Nicole Shivers, Shannon Simpson, Cordell Smith, Bob Swierczek, Ralph White, and Dorothy Yee.
To adjudicate asylum claims, USCIS asylum officers and EOIR immigration judges determine an applicant’s eligibility for asylum by assessing whether the applicant has credibly established that he or she is a refugee within the meaning of section 101(a)(42)(A) of the Immigration and Nationality Act (INA), as amended. An applicant is eligible for asylum if he or she (1) applies from within the United States; (2) suffered past persecution, or has a well-founded fear of future persecution, based on race, religion, nationality, membership in a particular social group, or political opinion; and (3) is not statutorily barred from applying for or being granted asylum. Among other things, the REAL ID Act of 2005 was a legislative effort to provide consistent standards for adjudicating asylum applications and to limit fraud. Consistent with the REAL ID Act, the burden is on the applicant to establish past persecution or a well-founded fear of persecution, and asylum officers and immigration judges have the discretion to require documentary support for asylum claims. To determine whether an applicant is credible, the act requires that asylum officers and immigration judges consider the totality of the applicant’s circumstances and all relevant factors and states that a determination of the applicant’s credibility may be based on any relevant factor. Such factors could include, among others, the applicant’s demeanor, candor, or responsiveness in the asylum interview or immigration court hearing, or any inaccuracies or falsehoods discovered in the applicant’s written or oral statements, whether or not an inconsistency, inaccuracy, or falsehood goes to the heart of the applicant’s claim. However, an asylum officer or immigration judge may determine that an applicant is credible, considering the totality of the circumstances, even if there are inaccuracies, contradictions, or evidence of potential fraud. For example, an applicant may have lied to a U.S. consular officer in order to obtain a visa to travel to the United States when fleeing his or her home country, and still have a credible asylum claim. To apply for affirmative asylum, an applicant submits a Form I-589, Application for Asylum and for Withholding of Removal, to USCIS. An applicant may include his or her spouse and unmarried children under the age of 21 who are physically present in the United States as dependent asylum applicants. The applicant mails paper copies of the application and supporting documentation to a USCIS Service Center, which verifies that the application is complete, creates a hard-copy file, and enters information about the applicant, including biographic information as well as attorney and preparer information submitted with the application, into RAPS. Subsequently, using the applicant’s biographic data, RAPS initiates automated checks against other U.S. government databases containing criminal history information, immigration violation records, and address information, among other things. RAPS also schedules an appointment to fingerprint and photograph the applicant. The Service Center sends the applicant file to one of USCIS’s eight asylum offices based on the applicant’s residential address and the asylum office then schedules the applicant’s interview with an asylum officer. 
In adjudicating asylum applications, USCIS policy requires asylum officers to review the applicant’s hard-copy file; research country of origin information; verify that an applicant has completed fingerprinting requirements; and document the results of background, identity, and security checks, some of which are repeated in the asylum office to identify any relevant information that may have changed after the initial automated checks. Asylum officers are to use the information obtained through this process to (1) determine who is included in the application; (2) confirm the applicant’s immigration status, asylum filing date, and date, place, and manner of entry into the United States; (3) become familiar with the asylum claim and the applicant’s background and supporting documentation; (4) identify issues that could affect eligibility, such as criminal history, national security concerns, participation in human rights abuses, or adverse credibility or fraud indicators; and (5) identify issues that must be discussed in an interview with the applicant to determine asylum eligibility. During the interview, which is to be conducted in a nonadversarial manner, the asylum officer asks questions to assess the applicant’s eligibility for asylum and determine whether his or her claim is credible. If the asylum officer identifies inaccuracies, inconsistencies, or fraud in the asylum application, the applicant must be given an opportunity to explain such issues during the interview, according to the USCIS Affirmative Asylum Procedures Manual. An independent interpreter monitor listens to each affirmative asylum interview to ensure that the applicant’s interpreter is correctly interpreting and to notify the interviewing officer of any discrepancies in interpretation. After the interview, the asylum officer considers the totality of the circumstances surrounding the applicant’s claim and prepares a written decision. The decision is reviewed by a supervisor, who is to check for quality, accuracy, and legal sufficiency. After a supervisor has concurred with the decision, the decision notice is delivered in hard copy to the applicant. If USCIS grants asylum to the applicant, the asylee is eligible to apply for adjustment to lawful permanent resident (LPR) status after 1 year. If USCIS does not grant asylum and the applicant is present in the United States lawfully through other means, USCIS is to issue a Notice of Intent to Deny stating the reason(s) for asylum ineligibility and provide an opportunity for the applicant to respond. Whether or not asylum is granted, the applicant can continue living in the United States under his or her otherwise valid status. If USCIS does not grant asylum and the applicant is present in the United States unlawfully, USCIS is to refer the application to EOIR, together with a Notice to Appear, which requires that the applicant appear before an EOIR immigration judge for adjudication of the asylum claim in removal proceedings. Figure 1 provides an overview of the USCIS affirmative asylum process. EOIR follows the same procedures for defensive asylum applications and affirmative asylum referrals from USCIS. For affirmative asylum referrals, the immigration judge reviews the case de novo, meaning that the judge evaluates the applicant’s affirmative asylum application anew and is not bound by an asylum officer’s previous determination. 
EOIR asylum hearings are adversarial proceedings in which asylum applicants appear in removal proceedings for adjudication of the asylum claim, and may apply for other forms of relief or protection as a defense against removal from the United States. First, the judge conducts an initial hearing (referred to as a master calendar hearing) to, among other things, ensure that the applicant understands the court proceedings and schedule a hearing to specifically address the asylum application (referred to as a merits hearing). Second, during the merits hearing, the judge hears testimony from the applicant and any other witnesses, oversees cross-examinations, and reviews evidence. ICE trial attorneys represent DHS in these proceedings. An asylum applicant may self-represent or may be represented by an attorney at no cost to the U.S. government. The judge may question the applicant or other witnesses. Judges render oral and, in some cases, written decisions after the immigration court proceedings end. If the judge determines that the applicant is eligible for asylum, the asylee can remain in the United States indefinitely unless asylum status is subsequently terminated. A grant of asylum from an immigration judge confers the same benefits as a grant of asylum from a USCIS asylum officer. If the judge determines that the applicant is ineligible for asylum, and is removable, the judge may order the applicant to be removed from the United States, unless the applicant seeks (and receives) another form of relief from removal. Judges' decisions are final unless appealed to the Board of Immigration Appeals (BIA). Figure 2 provides an overview of the DOJ affirmative and defensive asylum process. Asylees, or individuals who have been granted asylum, are considered qualified aliens for the purpose of eligibility for federal, and state or local, public benefits. Subject to certain statutory criteria, asylees may be eligible for a number of federal means-tested public benefits including Supplemental Security Income, Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families, and Medicaid. In addition, asylees may also be eligible for federal student financial aid, among other benefits. Asylees are authorized for employment in the United States as a result of their asylum status and can receive an Employment Authorization Document (EAD) issued by USCIS. In addition, asylum applicants can receive an EAD after their applications have been pending, including in both the USCIS and EOIR adjudicative process, for 180 days, not including any delays requested or caused by the applicant such as requesting to reschedule or failing to appear at the asylum interview or, where applicable, the time between issuance of a request for evidence and receipt of the applicant's response. Within 2 years of receiving asylum status, asylees can request derivative asylum status for their spouses and unmarried children under age 21, a provision that allows family members to join the asylee in the United States. Immigration benefit fraud involves the willful misrepresentation of material fact for the purpose of obtaining an immigration benefit, such as asylum status, without lawful entitlement. Immigration benefit fraud is often facilitated by document fraud and identity fraud.
Document fraud includes forging, counterfeiting, altering, or falsely making any document, or using, possessing, obtaining, accepting, or receiving such falsified documents in order to satisfy any requirement of, or to obtain a benefit under, the INA. Identity fraud refers to the fraudulent use of others’ valid documents. Fraud can occur in the affirmative and defensive asylum processes in a number of ways. For example, an applicant may file fraudulent supporting documents with his or her affirmative asylum application in an attempt to bolster the facts of a claim. Or, an applicant may submit a fraudulent address in order to file for asylum within the jurisdiction of an asylum office or immigration court perceived to be more likely to grant asylum than another office or court. Further, an attorney, preparer, or interpreter can, in exchange for fees from the applicant, prepare and file fraudulent documents, written statements, or supporting details about an applicant’s asylum claim, with or without the applicant’s knowledge or involvement. For the purposes of this report, we define asylum fraud as the willful misrepresentation of material fact(s), such as making false statements, submitting forged or falsified documents, or conspiring to do so, in support of an asylum claim. It is possible to terminate an individual’s asylum status under certain circumstances, including where there is a showing of fraud in the application such that the individual was not eligible for asylum at the time it was granted. By regulation, USCIS may only terminate asylum granted by USCIS; however, EOIR may terminate asylum granted by either USCIS or EOIR. For cases granted by USCIS, except in the Ninth Circuit, USCIS issues the asylee a Notice of Intent to Terminate and conducts an interview in which the individual may present evidence of his or her asylum eligibility. If termination is warranted, USCIS then provides written notice to the individual of termination of his or her asylum status and is to initiate removal proceedings for the individual in immigration court, as appropriate. While in removal proceedings, the individual may reapply for asylum before an immigration judge. The judge is not required to accept the determination of fraud made by USCIS and determines the respondent’s eligibility for asylum anew. For cases granted by an immigration judge, the BIA, or by USCIS in the Ninth Circuit, ICE OPLA may petition the immigration court to re-open a case in which an individual has been granted asylum and request the termination of the individual’s asylum status because of fraud. In such a case, ICE OPLA must prove, by a preponderance of evidence, that there was fraud in the asylum application that would have rendered the asylee ineligible for asylum at the time it was granted. The immigration judge has jurisdiction to conduct an asylum termination hearing as part of the removal proceeding, and if asylum status is terminated, the individual may be subject to removal from the United States. Our Fraud Framework is a comprehensive set of leading practices that serves as a guide for program managers to use when developing efforts to combat fraud in a strategic, risk-based manner. 
The framework describes leading practices for establishing an organizational structure and culture that are conducive to fraud risk management, designing and implementing controls to prevent and detect potential fraud, and monitoring and evaluating to provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. Managers may perceive a conflict between fulfilling the program's mission, such as efficiently disbursing funds or providing services to beneficiaries, and taking actions to safeguard taxpayer dollars from improper use. However, the purpose of proactively managing fraud risks is to facilitate, not hinder, the program's mission and strategic goals by ensuring that taxpayer dollars and government services serve their intended purposes. Figure 3 illustrates our Fraud Framework. The Fraud Framework includes control activities that help agencies prevent, detect, and respond to fraud risks as well as structures and environmental factors that influence or help managers achieve their objectives to mitigate fraud risks. The framework consists of four components for effectively managing fraud risks: commit, assess, design and implement, and evaluate and adapt. Leading practices for each of these components include the following: Commit: create an organizational culture to combat fraud at all levels of the agency, and designate an entity within the program office to lead fraud risk management activities; Assess: assess the likelihood and impact of fraud risks, determine risk tolerance, examine the suitability of existing controls, and prioritize residual risks; Design and implement: develop, document, and communicate an antifraud strategy, focusing on preventive control activities; and Evaluate and adapt: collect and analyze data from reporting mechanisms and instances of detected fraud for real-time monitoring of fraud trends, and use the results of monitoring, evaluations, and investigations to improve fraud prevention, detection, and response. The total number of asylum applications (principal applicants and their eligible dependents), including affirmative and defensive applications, increased from 47,118 in fiscal year 2010 to 108,152 in fiscal year 2014, an increase of 130 percent. During this time, affirmative asylum applications filed directly with USCIS increased by a total of 131 percent. Defensive asylum applications filed with EOIR increased 125 percent. Table 1 shows the number of affirmative and defensive asylum applications filed each year for fiscal years 2010 through 2014. The number of principal affirmative applications and their eligible dependents has increased each year from fiscal years 2010 through 2014. The number of principal affirmative applications filed has increased from 28,108 in fiscal year 2010 to 56,959 in fiscal year 2014, a 103 percent increase. The portion of affirmative asylum applicants noted as dependents increased from 6,266 in fiscal year 2010 to 22,526 in fiscal year 2014, a 259 percent increase. Table 2 shows the number of principal and dependent affirmative asylum applications filed each year for fiscal years 2010 through 2014. Asylum applications (including principal applicants and their eligible dependents) filed with EOIR—affirmative applications referred from USCIS and defensive applications—increased from 32,830 in fiscal year 2010 to 41,920 in fiscal year 2014, an increase of 28 percent.
Table 3 shows the number of affirmative and defensive asylum cases EOIR received from fiscal years 2010 through 2014. The number of affirmative applications USCIS referred to EOIR increased from 20,086 in fiscal year 2010 to 25,907 in fiscal year 2012, and decreased from fiscal year 2012 to fiscal year 2014. Asylum Division officials attribute the decrease in affirmative asylum cases referred to EOIR to the increased number of credible fear and reasonable fear cases USCIS has received, which has caused USCIS to divert resources away from affirmative asylum cases and adjudicate fewer affirmative asylum cases overall. The number of credible fear and reasonable fear cases increased from 11,019 in fiscal year 2010 to 60,085 in fiscal year 2014, an increase of 445 percent. From fiscal year 2010 through fiscal year 2014, China accounted for the largest number of affirmative asylum applicants (26 percent), followed by Mexico (13 percent) and Egypt (6 percent). Figure 4 shows the top 10 countries for affirmative asylum applications filed with USCIS. From fiscal year 2010 through fiscal year 2014, China accounted for the largest number of asylum applicants filing with EOIR (20 percent), followed by Mexico (20 percent) and El Salvador (9 percent). Figure 5 shows the top 10 countries for asylum applicants filing with EOIR. USCIS has eight asylum offices across the United States and, as of April 2015, 353 asylum officers who are responsible for adjudicating affirmative asylum claims. The number of affirmative asylum applications filed per USCIS office varied widely. From fiscal years 2010 through 2014, the New York and Los Angeles asylum offices accounted for 45 percent of all affirmative asylum applications filed. The number of affirmative asylum applications filed in Newark and Los Angeles has grown more than in any other asylum office during this time, with total increases of 8,352 and 9,070 applications, respectively. Figure 6 shows affirmative asylum applications received by each USCIS asylum office from fiscal year 2010 through fiscal year 2014. Final administrative adjudication of an asylum application, not including administrative appeals, is to be completed within 180 days after filing, absent exceptional circumstances and not including any delays requested or caused by the applicant, or, where applicable, the amount of time between issuance of a request for evidence and the receipt of the applicant's response. USCIS's backlog of principal affirmative asylum applications as of September 2015 was 106,121. Of those pending cases, 64,254 (61 percent) have exceeded the 180-day requirement. In addition, the number of affirmative asylum cases that were adjudicated in more than 180 days has increased from fiscal years 2010 through 2014. Figure 7 shows the number of affirmative asylum applications adjudicated from fiscal years 2010 through 2014 where USCIS's adjudication exceeded 180 days. According to Asylum Division officials, several factors have affected USCIS's ability to adjudicate affirmative asylum applications in a timely manner. For example, officials stated that they have diverted resources to address the growth in credible fear and reasonable fear cases, which increased by over 400 percent from fiscal year 2010 through fiscal year 2014. In addition, these officials stated that they had prioritized applications from unaccompanied alien children based on the time sensitivity of such cases.
Asylum Division officials said that this diversion of resources and prioritization of these claims contributed to the increasing backlog of affirmative asylum applications. Asylum Division officials stated that the increasing number of affirmative applications in recent years has also had significant implications for the workload of USCIS's asylum offices, and that USCIS plans to hire additional staff to help address the current level of applications and the increasing backlog. Both DHS and DOJ have established dedicated antifraud entities, a leading practice for managing fraud risks. Our Fraud Framework states that a leading practice for managing fraud risks is to establish a dedicated entity to design and oversee fraud risk management activities. Within DHS, USCIS created FDNS in 2004 to help ensure immigration benefits are not granted to individuals who pose a threat to national security or public safety or who seek to defraud the immigration system. As of fiscal year 2015, USCIS had deployed 35 FDNS immigration officers and 4 supervisory immigration officers across all eight asylum offices. FDNS immigration officers working in asylum offices are tasked with resolving national security "hits" and fraud concerns that arise when asylum officers conduct required background checks of asylum applicants; addressing fraud-related leads provided by asylum officers and other sources; and liaising with law enforcement entities, such as HSI, to provide logistical support in law enforcement and national security matters. In September 2007, DOJ established an EOIR antifraud officer through regulation. The regulation states that the antifraud officer is to (1) serve as a point of contact relating to concerns about fraud, particularly with respect to fraudulent applications or documents affecting multiple removal proceedings, applications for relief from removal, appeals, or other proceedings before EOIR; (2) coordinate with DHS and DOJ investigative authorities with respect to the identification of and response to fraud; and (3) notify EOIR's Disciplinary Counsel and other appropriate authorities as to instances of fraud, misrepresentation, or abuse related to an attorney or accredited representative. The activities of the antifraud officer (also known as the Fraud Prevention Counsel) and supporting staff collectively are referred to as the Fraud and Abuse Prevention Program. According to EOIR's Fraud Prevention Program fact sheet, the goal of the program is to protect the integrity of EOIR and other immigration proceedings by promoting efforts to deter fraud and provide a systematic response to identifying and referring instances of suspected fraud and abuse. In practice, according to the Fraud Prevention Counsel, program staff collect data and review records of proceedings in response to reports of suspected fraud. In addition, through the program, EOIR coordinates with law enforcement agencies to refer appropriate matters for investigation and assist in fraud investigations and prosecutions. Further, the program provides training for EOIR staff, including immigration judges, and distributes a monthly newsletter about fraud-related activity. Table 4 shows the total number of complaints received, the number of case files opened, and the number of asylum-related case files opened from fiscal year 2010 through fiscal year 2014.
EOIR’s Fraud and Abuse Prevention Program tracks the number of complaints it receives about potential fraud, but does not create a formal case file if the complaint or request for assistance can be closed quickly with minimal investment of staff time. As a result, not every complaint has a corresponding file. USCIS has not assessed fraud risks across the affirmative asylum application process. The Fraud Framework states that it is a leading practice for agencies to create an organizational culture to combat fraud at all levels and designate an entity to lead fraud risk management activities, such as planning regular fraud risk assessments to determine a fraud risk profile for their program. There is no universally accepted approach for conducting fraud risk assessments, since circumstances among programs vary; however, assessing fraud risks generally involves five actions: identifying inherent fraud risks affecting the program, assessing the likelihood and impact of those fraud risks, determining fraud risk tolerance, examining the suitability of existing fraud controls and prioritizing residual fraud risks, and documenting the program’s fraud risk profile. Depending on the nature of the program, the frequency with which antifraud entities update the assessment can range from 1 to 5 years. USCIS officials stated that USCIS has not conducted an enterprise-wide fraud risk assessment, as the agency has implemented individual activities that demonstrate that it is conducting risk assessments. According to USCIS officials, such activities include the prescreening of asylum applications by FDNS immigration officers in advance of asylum interviews, security and background checks of applicants, information sharing agreements between the United States and other countries to access records related to persons of interest, fraud training for asylum officers, and mechanisms for the referral of cases to FDNS and to other investigative entities. Investigations of fraud are usually conducted after fraud has occurred and asylum may or may not have been granted. While these efforts can help USCIS detect and investigate potential fraud in individual asylum applications, they do not position USCIS to assess fraud risks across the affirmative asylum application process. The mentioned mechanisms are all tools with which to support a fraud risk assessment; however, an enterprise-wide fraud risk assessment would provide further information on the inherent risks across all applications. For example, asylum officers face fraud risks because they must make decisions, at times, with little or no documentation to support or refute an applicant’s claim. As noted in the Fraud Framework, fraud risk management activities such as a fraud risk assessment may be incorporated into or aligned with internal activities and strategic objectives already in place, and information on fraud trends and lessons learned can be used to improve the design and implementation of fraud risk management activities. Further, regular fraud risk assessments will help identify fraud vulnerabilities before any actual fraud occurs, and allow management to take steps to strengthen controls for fraud. Various cases of asylum fraud demonstrate ways in which applicants and preparers have sought to exploit the asylum system and help illustrate fraud risks in the affirmative asylum application process, especially risks associated with attorney and preparer fraud. For example, As of March 2014, a joint fraud investigation led by the U.S. 
Attorney's Office for the Southern District of New York, the Federal Bureau of Investigation (FBI), the New York City Police Department, and USCIS, known as Operation Fiction Writer, resulted in charges against 30 defendants, including 8 attorneys, for their alleged participation in immigration fraud schemes in New York City. According to discussions with USCIS officials and an FBI press release, allegations regarding these defendants generally involved the preparation of fraudulent asylum applications that often followed one of three fact patterns: (1) forced abortions performed pursuant to China's family planning policy; (2) persecution based on the applicant's belief in Christianity; or (3) political or ideological persecution, typically for membership in China's Democratic Party or for following Falun Gong. Attorneys and preparers charged in Operation Fiction Writer filed 5,773 affirmative asylum applications with USCIS, and USCIS granted asylum to 829 of those affirmative asylum applicants. According to EOIR data, 3,709 individuals who were connected to attorneys and preparers convicted in Operation Fiction Writer were granted asylum in immigration court; this includes both affirmative asylum claims referred from USCIS as well as defensive asylum claims. An asylum fraud investigation initiated in 2009 and led by the Los Angeles asylum office resulted in the indictment and subsequent conviction of two immigration consultants. The indictment alleged that the two consultants charged approximately $6,500 to prepare and file applications on behalf of Chinese nationals seeking asylum in the United States. These applications falsely claimed that the applicants had fled China because of persecution for their Christian beliefs. HSI investigators have linked the consultants to more than 800 asylum applications filed since 2000. In 2002, we reported that the legacy Immigration and Naturalization Service (INS) did not know the extent of immigration benefit fraud. In response, INS initiated the Benefit Fraud Assessment program in 2002 to measure the integrity of specific nonimmigrant and immigrant applications by conducting administrative inquiries on randomly selected cases, but later discontinued the effort because of competing priorities after the terrorist attacks of September 11, 2001. USCIS reinitiated the Benefit Fraud Assessment program through FDNS in 2005 and, in November 2009, FDNS drafted a Benefit Fraud and Compliance Assessment (BFCA) on asylum for internal USCIS discussion. The assessment was intended to study the scope and types of fraud associated with the Form I-589, determine the relative utility of a number of fraud detection methods, and assess the extent to which asylum officers were using the fraud detection measures that were part of the adjudication process at the time. However, FDNS did not release the report to external parties because of questions about the validity and soundness of the methodology used in the BFCA. In 2010, USCIS's Office of Policy and Strategy assumed responsibility for future BFCAs. USCIS contracted for a review of the BFCA on asylum, and in September 2012, the contractor reported that USCIS should not release the BFCA and made recommendations to improve future studies. For example, the contractor reported that the assessment process was not well planned and had methodological problems and issues with clarity.
As of September 2015, officials from the Office of Policy and Strategy stated that USCIS is renaming the BFCA as the Immigration Benefit Fraud Assessment (IBFA). USCIS officials stated that under the new IBFA program, they plan to design rigorous research methods to provide fraud rates for selected benefit types. Office of Policy and Strategy officials did not provide a time frame for completing future IBFA studies and stated that USCIS has no plans to conduct an IBFA on asylum because they are still working to develop a framework for selecting which immigration benefits to study in the future. Office of Policy and Strategy officials said that the IBFA is not a fraud risk assessment and that their efforts will not be used to assess the risk of fraud in benefit types but will, instead, estimate the fraud rate of a given benefit. These officials added that asylum is more difficult to study than other immigration benefits because asylum claims are generally based on testimonial evidence, which makes fraud more difficult to prove than in other claims, and because such claims involve confidentiality restrictions. Standards for Internal Control in the Federal Government states that entities should comprehensively identify risks at both the entity-wide and activity levels. A risk assessment will help to determine how risks should be managed through the identification and analysis of relevant risks associated with achieving agency objectives. Because USCIS must balance its mission to protect those with genuine asylum claims with the need to prevent ineligible individuals from fraudulently obtaining asylum, USCIS could benefit from assessing fraud risks across its asylum adjudication process, particularly to assess the fraud risk tolerance of the asylum system—a leading practice for assessing fraud risks. The Fraud Framework states that managers who effectively assess fraud risks attempt to fully consider the specific fraud risks the agency or program faces, analyze the potential likelihood and impact of fraud schemes, and document prioritized fraud risks. The aforementioned examples of fraud investigations further illustrate the need for measures to prevent and detect fraud within the asylum program. In addition, risk tolerance reflects management's willingness to accept a higher level of fraud risk based on the circumstances and objectives of the program. For example, to protect genuine asylum applicants who may be unable to provide documents supporting their applications, asylum law states that testimonial information alone can be sufficient for asylum applicants to meet the burden of proof for establishing asylum eligibility. According to USCIS training materials for new asylum officers, asylum officers are to interview applicants in a nonadversarial manner and assume a cooperative approach as the applicant seeks to establish his or her eligibility. USCIS instructs asylum officers, when assessing whether an applicant has provided sufficient detail about his or her claim, to account for the amount of time that has elapsed since the events occurred; the possible effects of trauma; the applicant's background, education, and culture; and any other factors that might impair the applicant's memory.
The Asylum Division Branch Chief said that while this cooperative approach aims to protect genuine asylees, it can also create favorable circumstances for ineligible individuals who seek to file fraudulent claims, and asylum officers in seven of the eight asylum offices we spoke with told us that they have granted asylum in cases in which they suspected fraud. For example, officers in three asylum offices said that it was difficult to prove that fraud existed in an asylum application. Although there are individual efforts in place to detect fraud, an enterprise-wide assessment of fraud risk could better inform asylum officers when adjudicating cases and influence training materials regarding such subjects as country conditions. Without regularly assessing fraud risks and determining the fraud risk tolerance of the USCIS asylum adjudication process, USCIS does not have complete information on the inherent fraud risks that may affect the integrity of the affirmative asylum application process and therefore does not have reasonable assurance that it has implemented controls to mitigate those risks. Moreover, given the growth in affirmative asylum applications in recent years, and the USCIS pending caseload of over 100,000 affirmative asylum cases to adjudicate, assessing program-wide fraud risks could help USCIS target its fraud prevention efforts to those areas that are of highest risk in accordance with its fraud risk tolerance. EOIR has not assessed the fraud risks associated with asylum applications across immigration courts. EOIR's immigration judges serve as the sole adjudicators for all defensive asylum claims made in the immigration courts and affirmative asylum applications referred by USCIS's asylum officers. Asylum fraud-related cases discussed below have demonstrated that EOIR faces fraud risks in these claims. The Fraud Framework states that it is a leading practice for agencies to create an organizational culture to combat fraud at all levels and designate an entity to lead fraud risk management activities, such as planning regular fraud risk assessments to determine a fraud risk profile for their program. EOIR officials told us that the Fraud and Abuse Prevention Program has not assessed fraud risks across asylum applications in the immigration courts because it lacks financial and human resources. EOIR's Fraud and Abuse Prevention Program is composed of one full-time fraud prevention counsel, who serves as the antifraud officer pursuant to EOIR's regulations, one part-time attorney, and several student interns. Therefore, according to EOIR's antifraud officer, the Fraud and Abuse Prevention Program has primarily served as an in-house referral system for EOIR employees. EOIR officials also stated that it would be difficult to conduct a fraud risk assessment across immigration courts because fraud is difficult to measure. EOIR has efforts in place to assess fraud identified and referred to the Fraud and Abuse Prevention Program, such as reviewing fraud referrals once received, reviewing records of proceedings, and making referrals to law enforcement entities for investigation. However, recent asylum fraud cases identified in the program's case files illustrate the presence of fraud risks across asylum applications in immigration courts. For example, according to EOIR data, immigration judges granted asylum to 3,709 individuals who were connected to attorneys and preparers convicted in Operation Fiction Writer.
In addition, almost 20 percent (30 of 153) of EOIR's Fraud and Abuse Prevention Program case files opened in fiscal year 2010 through fiscal year 2014 were related to asylum fraud. Further, 17 of the 30 case files we reviewed contained multiple types of immigration fraud, including document fraud and benefit fraud, as well as potential fraud in connection with the unauthorized practice of law. As discussed above and in appendix II, the Fraud Framework states that it is a leading practice for agencies to plan regular fraud risk assessments and determine a fraud risk profile for their programs. Managers who effectively assess fraud risks attempt to fully consider the specific fraud risks the agency or program faces, analyze the potential likelihood and impact of fraud schemes, and document prioritized fraud risks. The Fraud Framework also states that it is a leading practice for an agency to designate an antifraud entity as a repository of knowledge for fraud risk and to tailor its fraud risk assessment process to the program in question. Factors such as size, resources, maturity of the program, and experience in managing fraud risks can influence how an agency plans its fraud risk assessment. Although quantitative techniques are generally more precise than qualitative methods, when resource constraints, expertise, or other circumstances prohibit the use of statistical analysis for assessing fraud risks, other quantitative or qualitative techniques can still be informative. For example, the Fraud Framework discusses the use of risk scoring to quantify the likelihood and effect of particular fraud risks. Our analysis of the Fraud and Abuse Prevention Program case files indicates that there are multiple types of fraud that could be assessed through a fraud risk assessment, such as benefit fraud, marriage fraud, and fraud in connection with the unauthorized practice of law. We recognize that it can be difficult to measure or assess fraud risks and that EOIR has limited resources for assessing and addressing such risks. However, as noted in the framework, fraud risk management activities such as a fraud risk assessment may be incorporated into or aligned with internal activities and strategic objectives already in place, and information on fraud trends and lessons learned can be used to improve the design and implementation of fraud risk management activities. Proactive fraud risk management would also help reduce the likelihood that fraud occurs in the first place. Without regularly identifying and assessing fraud risks and determining the fraud risk tolerance in immigration courts, EOIR does not have complete information on the inherent fraud risks that may affect the integrity of the defensive asylum process and therefore does not have reasonable assurance that it has implemented controls to mitigate those risks. In addition, as noted in our framework, fraud risk assessments can provide partners and stakeholders with information that can also assist in their operations and efforts. Managers who effectively manage fraud risks collaborate and communicate with internal and external stakeholders to share information on fraud risks, emerging fraud schemes, and lessons learned related to fraud control activities. ICE OPLA attorneys are responsible for presenting evidence of and proving fraud in immigration court, and ICE HSI investigates cases of asylum fraud that are referred from the immigration courts.
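The risk-scoring technique referenced above can be illustrated with a brief sketch. The following Python example is purely illustrative and is not drawn from the Fraud Framework, EOIR, or USCIS; the risk names, rating scales, and scores are hypothetical assumptions used only to show how qualitative likelihood and impact ratings can be combined and ranked to support a prioritized fraud risk profile.

# Illustrative only: hypothetical fraud risk ratings on a 1 (low) to 5 (high) scale.
fraud_risks = {
    "benefit fraud": {"likelihood": 4, "impact": 4},
    "document fraud": {"likelihood": 3, "impact": 3},
    "unauthorized practice of law": {"likelihood": 2, "impact": 5},
}

def risk_score(likelihood, impact):
    """Combine likelihood and impact ratings into a single score (1-25)."""
    return likelihood * impact

# Rank risks from highest to lowest score to support prioritization.
ranked = sorted(
    ((name, risk_score(**ratings)) for name, ratings in fraud_risks.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: score {score}")

In practice, an antifraud entity would replace the hypothetical ratings with ratings developed through its own assessment and would document the resulting priorities in the program's fraud risk profile.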
EOIR officials said that EOIR's Office of Planning Analysis and Statistics has previously provided data for OPLA attorneys to assist in court proceedings and investigations when requested. ICE OPLA attorneys we interviewed at all four of the field offices we visited told us that if asylum fraud is detected, it is difficult to prove in immigration court. Attorneys at two of the offices we visited stated that, in their experience, proving fraud requires an immense amount of time and evidence. ICE OPLA attorneys in one location stated that, as a result of factors such as these, there is no incentive for them to litigate asylum fraud cases. An EOIR fraud risk assessment could help ICE OPLA, for example, better educate OPLA attorneys about fraud risks as they represent the government in immigration court proceedings. Moreover, managers can use the fraud risk assessment process to determine the extent to which controls may no longer be relevant or cost-effective. Thus, a fraud risk assessment would help EOIR ensure that it is targeting its limited fraud prevention resources effectively. Within USCIS, FDNS does not have complete or readily available data on fraud referrals and requests for assistance from asylum officers or on its asylum fraud-related investigations and the outcomes of those investigations. First, with regard to data on fraud referrals and requests for assistance from asylum officers, such data are not consistently entered into the FDNS Data System (FDNS-DS), which is USCIS's agency-wide database for maintaining data and information on all FDNS activities, including activities associated with asylum fraud investigations. According to training materials for new asylum officers, if an asylum officer has questions about a potential fraud indicator while adjudicating an affirmative asylum claim, he or she can submit a request for assistance to the FDNS immigration officers in his or her asylum office. For example, FDNS may be able to provide additional information about an asylum applicant by conducting searches of databases that asylum officers cannot access. In addition, FDNS immigration officers can conduct document reviews and analyses of the application to determine whether fraud may exist. According to USCIS training materials for new asylum officers, each asylum office may have a different process for requesting assistance from FDNS. According to the training materials, as well as FDNS immigration officers we spoke with in asylum offices, officers typically deliver their responses to a request for assistance informally, such as by orally communicating the results of their reviews to asylum officers without supporting documentation. FDNS's fraud detection standard operating procedures state that requests for assistance are to be entered into FDNS-DS. However, according to FDNS officials in headquarters and field offices, these requests are not consistently entered into FDNS-DS. Additionally, while the requests may be tracked at the office level within individual asylum offices, they are not otherwise tracked across individual offices by either the Asylum Division or FDNS. Moreover, according to the training materials for new asylum officers, in cases where a fraud indicator cannot be quickly resolved, such as a suspicion of fraud or a complicated case needing more research by FDNS, the asylum officer is to complete a Fraud Referral Sheet. After receiving a referral, FDNS is to determine whether the referral has sufficient information to warrant further investigation.
According to FDNS's fraud detection standard operating procedures, FDNS immigration officers are to enter all fraud referrals, including those that they will decline, into FDNS-DS to accurately record the number of referrals received, track their processing, and support quality assurance. However, in practice, FDNS headquarters officials stated that officers typically enter referrals into FDNS-DS as "leads" only if they warrant additional investigation. While some FDNS immigration officers track referrals at the asylum office level, not all referrals are entered into the agency-wide FDNS-DS. As a result, FDNS-DS does not have complete data on the number of fraud referrals or requests for assistance in each asylum office or across asylum offices, making it difficult to determine the extent to which asylum officers request assistance from FDNS on fraud-related questions or suspicions in adjudicating asylum applications. Second, FDNS does not have readily available data on the number of asylum fraud cases it investigates, the number of asylum fraud cases in which FDNS immigration officers find asylum fraud, or the number of asylum fraud cases that FDNS refers to HSI for further investigation. According to FDNS's fraud detection standard operating procedures, if FDNS immigration officers determine that a referral warrants additional investigation, they are to enter that referral into FDNS-DS as a fraud lead. If, after conducting research and analyzing the information associated with a lead, the immigration officer determines that a reasonable suspicion of fraud is articulated and actionable, the lead is elevated to a case. FDNS immigration officers may also enter a referral into the database directly as a case if a reasonable suspicion of fraud is articulated and actionable. According to FDNS officials, FDNS data entry rules require that all immigration forms associated with an individual under investigation be included with the individual's FDNS-DS case. Not every immigration form associated with an individual or case is the basis for fraud in that case, and a case may include multiple immigration forms. For example, if FDNS opened a case about an individual who was legitimately granted asylum, but who later committed marriage fraud, the FDNS-DS case record would include both the legitimate asylum application and the fraudulent marriage-based benefit application. Furthermore, according to FDNS officials, when an immigration officer first enters a case into FDNS-DS, he or she is to categorize the type of fraud that is the subject of the case. For example, the officer would categorize an asylum fraud case as "benefit fraud—asylum" in FDNS-DS. However, FDNS officials stated that, because of the limitations of FDNS-DS, each case record can only reflect one type of fraud at a time, although the system does have the capacity to record and report updates if, for example, the type of fraud associated with a record is changed. FDNS officials stated that a case that begins as an asylum fraud investigation might ultimately result in a fraud finding or referral to HSI based on another type of fraud, such as marriage fraud. FDNS officials stated that if asylum fraud is not the most egregious type of benefit fraud in a particular investigation, the investigation may not be categorized as asylum fraud in FDNS-DS.
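To illustrate the kind of data-structure limitation described above, the sketch below shows one way a case record could capture more than one fraud category at a time. It is a hypothetical Python example, not a representation of FDNS-DS or its actual schema; the field names and category labels are assumptions used for illustration only.

from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    # Hypothetical record that stores fraud categories as a set rather than a single value.
    case_id: str
    associated_forms: set = field(default_factory=set)
    fraud_categories: set = field(default_factory=set)

record = CaseRecord(case_id="example-001")
record.associated_forms.update({"I-589", "I-130"})
record.fraud_categories.update({"benefit fraud - asylum", "marriage fraud"})

# A query for asylum-related fraud would still find this case even if marriage
# fraud were considered the more egregious category.
print("benefit fraud - asylum" in record.fraud_categories)

A structure along these lines would allow reporting on asylum-related cases without the manual, record-by-record review described below.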
Because of the limitations of FDNS-DS, FDNS headquarters officials stated that the number of FDNS-DS records categorized as “benefit fraud—asylum” may not accurately represent the number of asylum fraud investigations completed by FDNS or the number of asylum fraud cases FDNS referred to HSI. FDNS headquarters officials stated that making such a determination would require a manual review of each case record in FDNS-DS categorized as “benefit fraud—asylum” or associated with an I-589, the asylum application. Both of these data fields indicate that the case record could be, but is not necessarily, related to an investigation of asylum fraud. Without this manual review, a process that would be extremely labor-intensive, FDNS cannot determine which immigration forms or benefit types are the subject of an investigation or of a referral from FDNS to HSI. According to FDNS data from FDNS-DS, in fiscal year 2014, FDNS opened 336 cases in which the individual implicated was associated with an asylum application, either as the applicant or as an attorney, preparer, or interpreter assisting the applicant, and FDNS found fraud in 210 of those cases. However, FDNS cannot readily determine how many of those cases involved asylum fraud without manually reviewing each individual case. Standards for Internal Control in the Federal Government states that agencies must have relevant, reliable, and timely information to determine whether their operations are performing as expected. Without complete data on the number of requests for assistance from asylum officers to FDNS, the number of referrals that asylum officers submit to FDNS, and the number of FDNS investigations that result in a finding of asylum fraud, USCIS officials cannot determine how often the fraud referral process is used or how often it results in a finding of asylum fraud. Complete data on these matters would also help support a fraud risk assessment, as previously discussed, by giving USCIS additional information about fraud schemes and trends from fraud detection activities so that officials can ensure that fraud detection activities are appropriately tailored to the agency’s risk profile. USCIS uses various tools to attempt to identify fraud in specific affirmative asylum applications. USCIS uses some of these tools, such as biometric identity verification and biographic and biometric background and security checks, on all asylum applications. These tools help asylum officers identify fraud by confirming the applicant’s identity and identifying prior criminal convictions, among other things. Further, the Asylum Division and FDNS have some additional tools available that officers can use to address cases with indicators of fraud; however, our analysis of HSI and USCIS data indicates that some of these tools are of limited utility and use. Specifically, USCIS guidance for FDNS immigration officers discusses the use of two fraud detection tools for verifying applicants’ claims and supporting documents—the ICE HSI Forensic Laboratory and overseas verification. HSI’s Forensic Laboratory specializes in determining the authenticity of documents and identifying the presence of alterations within those documents. In particular, the Forensic Laboratory specializes in verifying travel and identity documents, such as passports, visas, driver’s licenses, and identification cards. 
However, according to Forensic Laboratory guidance for document submission issued in 2010, the Forensic Laboratory prioritizes matters of national security, criminal violations, cases involving people who have been detained, and cases involving multiple incidents related to organized fraudulent activity. According to Forensic Laboratory officials, the Forensic Laboratory may accept non-priority requests on a case-by-case basis. Asylum applications, which are not criminal cases and usually involve nondetained applicants, therefore generally do not fit within the laboratory’s priorities, according to USCIS and ICE officials. Furthermore, both FDNS and Forensic Laboratory officials stated that the Forensic Laboratory generally cannot verify some types of documents commonly submitted as support for asylum claims, such as foreign police reports and medical records. Forensic Laboratory officials told us that these documents are difficult to authenticate because the laboratory does not have genuine exemplar documents for comparison purposes and because the documents are typically not standardized and do not have security features that can be verified by forensic examination. According to HSI and Asylum Division officials, neither the Forensic Laboratory nor the Asylum Division tracks submissions to the Forensic Laboratory specific to asylum applications; however, according to HSI data, USCIS submitted 60 cases to the Forensic Laboratory in fiscal year 2014 across all immigration benefits. Asylum officers we interviewed in all eight asylum offices said that they rarely use the Forensic Laboratory, in part because of untimely and inconclusive responses. Asylum officers may also submit documents for overseas verification, either by USCIS officers overseas or, in areas where USCIS does not have an overseas presence, by State Department consular officers. Overseas verification refers to the verification of events, education, or work experience that occurred in a foreign country or the authentication of a document or information that originated overseas. From fiscal years 2010 through 2014, asylum offices submitted 111 requests to either USCIS officers or State Department consular officers for overseas verification. Asylum officers we interviewed in all eight asylum offices stated that they rarely use overseas verification, in part because they do not receive responses to their requests in a timely manner. In addition, asylum confidentiality restrictions limit the extent to which asylum officers can verify information overseas; USCIS and State Department personnel generally cannot share information contained in or pertaining to an asylum application outside the U.S. government in a manner that would disclose the fact that the individual applied for asylum in the United States. Furthermore, asylum officers told us that the outcome of asylum adjudications rarely hinges on the authenticity of a single document, so document verification may not change the outcome of a case. Further, USCIS’s tools for detecting patterns of fraud across affirmative asylum applications are limited because USCIS relies on a paper-based system for asylum applications. After the applicant submits a paper Form I-589 to USCIS, Service Center personnel input certain biographic information, such as the applicant’s name, date of birth, and nationality, from the paper application into the RAPS database. Asylum office personnel use RAPS to track the application’s status and facilitate interview scheduling. 
In some cases, FDNS immigration officers can use information from RAPS for fraud detection by creating reports of cases with certain biographic characteristics, thereby identifying cases for potential review. However, RAPS does not have the capability to detect fraud trends because, while it captures biographic data about an asylum applicant, it does not capture other key information that could be used to detect fraud. Such information could include the applicant's written statement, the reason for the applicant's claim, or the name of the applicant's interpreter. Asylum officers and FDNS immigration officers told us that they can identify potential fraud by manually analyzing trends across asylum applications they review. Because of USCIS's reliance on paper asylum applications, asylum officers and FDNS immigration officers use ad hoc, labor-intensive methods to detect such trends among asylum cases. For example, FDNS immigration officers at three of the eight asylum offices stated that they photocopy asylum applications and maintain hard-copy case files for analysis. In our 2008 report on the asylum adjudication process, we surveyed asylum officers across all asylum offices and found that 61 percent of asylum officers stated that scanning all I-589s and using software to identify boilerplate language and trends was "greatly needed," and 16 percent said it was "moderately needed." According to the FDNS Branch Chief for USCIS's RAIO Directorate, automated analytic capabilities for asylum applications, such as tools to detect fraud indicators, would lead to significant increases in efficiencies for fraud detection and investigation. For example, since 2014, FDNS has been reviewing the asylum applications associated with Operation Fiction Writer. Because FDNS does not have automated analytic tools to review this information, FDNS immigration officers must manually review hundreds of asylum applications, requiring large investments of time and resources. In our interviews with asylum officers, officers in all eight asylum offices stated that they would benefit from greater access to analytic tools. According to the FDNS Branch Chief for RAIO, an automated analytic capability for asylum applications is a "critical need" for fraud detection. As we previously reported, in 2005, USCIS embarked on its multiyear Transformation Program to transform its paper-based immigration benefits process to a system with electronic application filing, adjudication, and case management. The main component of the program is the USCIS Electronic Immigration System (ELIS), which is to provide case management for adjudication of immigration benefits. However, USCIS has faced longstanding challenges in implementing its Transformation Program, which raise questions about the extent to which its eventual deployment will position USCIS to collect and maintain more readily available data. In May 2015, we reported that USCIS expects the Transformation Program will cost up to $3.1 billion and be fully deployed no later than March 2019, which is an increase of approximately $1 billion and a delay of more than 4 years from its initial July 2011 baseline. USCIS's most recent Life Cycle Cost Estimate for the Transformation Program states that USCIS will not complete deploying functional capabilities for USCIS's humanitarian mission, which includes asylum, until September 2018.
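Electronic access to application narratives would make possible the kind of automated analytic capability described above. The following minimal sketch, using only Python's standard library, flags pairs of statements with a high degree of textual overlap, one simple indicator of boilerplate language; it is a hypothetical illustration, not a description of any USCIS or FDNS system, and the sample statements and threshold are invented for this example.

from difflib import SequenceMatcher
from itertools import combinations

# Invented sample narratives; in practice these would be digitized application statements.
statements = {
    "A-001": "I fled my country because I was persecuted for my religious beliefs.",
    "A-002": "I fled my country because I was persecuted for my religious beliefs.",
    "A-003": "I left home to pursue graduate study and later feared returning.",
}

THRESHOLD = 0.9  # Similarity ratio above which a pair is flagged for review.
for (id_a, text_a), (id_b, text_b) in combinations(statements.items(), 2):
    ratio = SequenceMatcher(None, text_a, text_b).ratio()
    if ratio >= THRESHOLD:
        print(f"Possible boilerplate: {id_a} and {id_b} (similarity {ratio:.2f})")

Any flagged pair would still require review by an asylum officer or FDNS immigration officer; similarity alone does not establish fraud.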
Officials from USCIS’s Transformation Program told us that, as of June 2015, they have not yet developed business requirements for asylum adjudication in USCIS ELIS or determined how USCIS ELIS implementation will affect asylum adjudications because they are currently focused on developing and deploying USCIS ELIS for other immigration benefits. Because USCIS has not yet developed business requirements for asylum in USCIS ELIS, it is too early to assess how the information contained in USCIS ELIS could facilitate USCIS’s asylum fraud detection efforts. Additionally, as we reported in May 2015, USCIS’s ability to effectively monitor USCIS ELIS program performance and make informed decisions about its implementation has been limited because department-level governance and oversight bodies were not using reliable program information to inform their program evaluations. The Fraud Framework states that it is a leading practice for agencies to use data analytics to identify and monitor trends that may indicate fraud and use information to improve fraud risk management activities, such as addressing control vulnerabilities and improving training. Identifying and implementing additional fraud detection tools, such as automated analytic software, could enable FDNS and asylum officers to detect fraud more readily while using limited resources more efficiently. Without such tools, FDNS immigration officers are not well positioned to identify cases associated with particular asylum fraud rings or aid in the investigation and prosecution of the attorneys, preparers, and interpreters who perpetrate asylum fraud. Some asylum offices have strengthened their capability to detect and prevent fraud by using FDNS immigration officers to prescreen affirmative asylum applications; however, the use of this practice varies across asylum offices. Prescreening applications, that is, reviewing the application for potential fraud indicators in advance of the asylum interview, allows FDNS to identify fraud trends and detect patterns that may not be evident in a small sample of asylum applications. Asylum officers we spoke with in all eight asylum offices stated that they face time constraints in adjudicating asylum applications. For example, asylum officers we spoke with in three asylum offices stated that they have limited time to review the details of the applications that they are adjudicating in advance of the applicant interview. Additionally, each asylum officer adjudicates approximately eight affirmative asylum applications per week. Therefore, an individual officer might not see patterns of fraud in single applications that would be visible if he or she were reviewing the entire universe of applications in each asylum office. For example, asylum officers or supervisors we spoke with in six of eight asylum offices stated that FDNS prescreening was, or would be, helpful in identifying fraud indicators or fraud trends. USCIS training materials state that it is important to identify indicators of fraud before the applicant’s interview so that asylum officers can ask appropriate questions during the interview. Before an interview, asylum officers can consult with their supervisors or FDNS about indicators of potential fraud in an application; however, they are not required to do so. As previously discussed, consistent with the REAL ID Act of 2005, credible testimony from the asylum applicant may be sufficient, without corroboration, for the applicant to receive asylum. 
Asylum officers are to raise discrepancies, inconsistencies, or identified fraud in the asylum application during the interview, and upon completion of the interview, the applicant or the applicant’s representative must have an opportunity to respond to the evidence presented. When FDNS does not prescreen applications, the asylum officer is responsible for identifying potential fraud in the application prior to the interview and using that information during the interview to assess the applicant’s credibility unless he or she temporarily pauses the interview to seek support from supervisors or FDNS. After an interview, the asylum officer may call applicants back to answer additional questions before a decision is rendered or conduct a full reinterview with applicants. However, in two asylum offices, supervisory asylum officers we spoke with stated that they prefer not to reinterview applicants because doing so adds to their adjudication backlog. Supervisory asylum officers we spoke with in three asylum offices stated that they conduct reinterviews when needed or in particular circumstances. In three offices where FDNS prescreens asylum applications for indicators of fraud, FDNS immigration officers we spoke with stated that FDNS provides information to the asylum officer about the nature of the potential fraud in the application in advance of the applicant interview. This allows the asylum officer to ask relevant questions during the interview; gives the applicant the opportunity to provide an explanation for any discrepancies, inconsistencies, or identified fraud in the file; and ensures that the asylum officer is in the strongest position to assess the credibility of the applicant. According to FDNS immigration officers we spoke with in two asylum offices, prescreening also allows FDNS to identify applications that are affiliated with attorneys, preparers, or interpreters under FDNS investigation. FDNS immigration officers we interviewed in five of the eight asylum offices stated that they prescreen some affirmative asylum applications; one asylum office prescreens all applications; and two asylum offices do not prescreen applications. FDNS officials stated that staffing and resource constraints, coupled with the increase in affirmative asylum applications in recent years, have made it difficult for FDNS to prescreen all asylum applications. For example, in January 2015, immigration officers in one asylum office that does not prescreen asylum applications developed a plan to begin prescreening, but were unable to implement the plan because of a lack of administrative resources. In the five offices that prescreen some applications, officers may select applications for prescreening at random or based on certain characteristics such as the applicant’s country of origin. Immigration officers set their own prescreening priorities in most of these offices. In both offices that do not prescreen affirmative asylum applications, FDNS officials stated that prescreening would be helpful and is an effective system for identifying fraud patterns but that resource constraints and national security priorities have limited their ability to prescreen asylum applications. 
However, the asylum office that prescreens all asylum applications is also the office that received the most affirmative asylum applications from fiscal years 2010 through 2014, and from fiscal years 2010 through 2013, this office was staffed with two full-time FDNS immigration officers, a staffing level equal to or lower than that of the FDNS immigration officers in the other asylum offices during that period. This office was able to prescreen all asylum applications even though it had similar staffing resources and a higher volume of asylum applications than any other asylum office. Moreover, the head of the Asylum Division stated that FDNS prescreening is helpful to asylum officers and that he would like FDNS to prescreen all asylum applications prior to the interview. The FDNS Branch Chief for RAIO also stated that she supported more robust prescreening of affirmative asylum applications and noted that the process would need to be tailored to the specific needs and resource levels of each office. According to the Fraud Framework, designing and implementing specific control activities to prevent and detect fraud is a leading practice for managers. Additionally, the framework states that preventive control activities generally offer the most cost-effective investment of resources and that, while targeted controls, such as prescreening, may be more costly than agencywide controls, such as general fraud detection responsibilities, targeted controls may lower the cost of identifying each instance of fraud because they are more effective than controls that are not targeted. Although prescreening asylum cases may require additional time from FDNS immigration officers, it could ultimately help save time and resources by helping FDNS officers build large-scale asylum fraud investigations and detect new fraud patterns in a timely manner. Moreover, prescreening could help save resources by identifying indicators of fraud before the asylum interview. This would allow asylum officers to ask relevant questions during the interview and reduce the need for time-consuming reinterviews, in which the asylum office requests that an applicant return for a second interview to address issues not covered in the initial interview. Requiring that FDNS immigration officers prescreen all affirmative asylum applications for indicators of fraud, to the extent that it is cost-effective and feasible, would allow FDNS to better detect any such indicators at the point where that information is most useful for preventing asylum fraud. FDNS has not established clear responsibilities related to fraud detection for its immigration officers in asylum offices, and FDNS fraud detection activities vary widely by asylum office. In March 2011, FDNS issued standard operating procedures for fraud detection, which describe the procedures that FDNS immigration officers are to follow when investigating referrals related to immigration benefit fraud, as well as the process for referring immigration benefit fraud cases to HSI or other government or law enforcement agencies. These standard operating procedures are intended to guide fraud detection in all USCIS adjudications, including those at Service Centers and Field Offices, in addition to asylum offices. However, the standard operating procedures do not provide further details or guidance on the roles and responsibilities of FDNS immigration officers working in asylum offices.
According to RAIO officials, FDNS immigration officers working in asylum offices face unique fraud detection challenges, and the standard operating procedures state that immigration officers working in asylum offices must be sensitive to the unique legal requirements and issues involved in asylee processing, such as confidentiality requirements. FDNS immigration officers we spoke with in all eight asylum offices stated that they have limited guidance about their roles and responsibilities with respect to fraud detection, and officers at seven of the eight offices stated that the limited guidance creates challenges for them in addressing asylum fraud. Further, some of the processes outlined in the standard operating procedures differ from the processes we observed FDNS immigration officers following during our site visits to asylum offices. For example, the procedures state that FDNS will refer single-scheme cases—that is, individual cases of fraud—to HSI when they involve an attorney, interpreter, or preparer. FDNS immigration officers we spoke with at seven of eight asylum offices told us that they generally do not submit single-scheme cases to HSI. HSI officials we spoke with confirmed that they rarely accept single-scheme asylum fraud cases for investigation because single-scheme cases are difficult to prosecute and the penalties for individual instances of fraud are low. In addition, FDNS immigration officers at three asylum offices expressed confusion about whether they were permitted to conduct site visits for asylum fraud investigations, which the standard operating procedures list as one of the duties of an immigration officer. Site visits allow FDNS immigration officers to verify information presented in an asylum application, such as an applicant's home address. According to FDNS officials, immigration officers may have been confused because, in the past, they were not permitted to conduct site visits owing to limited resources and concerns about officer safety. However, in September 2015, FDNS headquarters officials stated that officers are permitted to conduct site visits, as appropriate for case-specific needs, and that the additional FDNS officers hired in 2014 helped address prior resource constraints. Further, the standard operating procedures do not discuss prescreening asylum cases in advance of the asylum interview; however, as we previously stated, we found that immigration officers at six of the eight asylum offices were prescreening at least some asylum applications. Additionally, FDNS's fraud detection activities varied widely across the eight asylum offices. For example, one asylum office we visited was responsible for submitting 87 of the 111 total overseas verification requests submitted by asylum offices from fiscal years 2010 through 2014. FDNS immigration officers at this office told us that they regularly prescreened asylum cases for potential fraud indicators, tracked potential fraud indicators in internal spreadsheets, submitted fraud referrals to HSI, and testified about asylum fraud in immigration court at the request of ICE OPLA trial attorneys. In another asylum office we visited, FDNS immigration officers we spoke with told us that they devote "very little time" to fraud detection and investigation because they focus on national security priorities. Immigration officers at this office did not submit any overseas verification requests from fiscal years 2010 through 2014, nor do they regularly prescreen applications.
Asylum officers from one asylum office we spoke with said they report identified fraud trends to FDNS immigration officers in their office, but FDNS does not take action on the referrals or disseminate fraud trends or feedback regarding fraud referrals. In another asylum office, asylum officers said that fraud referrals and fraud trends are discussed informally between individual asylum and FDNS officers. USCIS issued guidance in December 2014 detailing FDNS's priorities for immigration officers in the field for fiscal year 2015. The guidance states that FDNS will develop, implement, and monitor policies and programs that enhance USCIS's ability to detect and resolve fraud issues. Standards for Internal Control in the Federal Government states that a good internal control environment requires that the agency's organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. Furthermore, the Fraud Framework states that effective managers of fraud risks establish roles and responsibilities for fraud detection activities and describe the fraud risk management activities intended to prevent, detect, and respond to fraud as part of an overall antifraud strategy. According to FDNS officials, FDNS did not think it was necessary to issue asylum-specific guidance for some fraud detection activities, such as site visits, because the number of immigration officers assigned to asylum was so small in the past that immigration officers had very little time for fraud detection activities. However, between fiscal years 2014 and 2015, the number of FDNS immigration officers working in asylum offices more than doubled, from 18 to 39. This increase in staffing levels will allow FDNS immigration officers to devote more time to detecting asylum fraud, according to FDNS headquarters officials. Developing asylum-specific guidance on the fraud detection roles and responsibilities of FDNS immigration officers working in asylum offices would better position those officers to understand their fraud detection roles and responsibilities, the tools that are available to them in carrying out those roles and responsibilities, and the features that are unique to the asylum system. USCIS training for asylum officers includes basic training for new asylum officers and weekly training for all asylum officers; however, this training includes limited information on fraud compared with other topics. The training program for asylum officers is composed of three main components. First, new asylum officers participate in 3 weeks of self-paced RAIO Directorate and Asylum Division distance training in their respective asylum offices. Distance training consists of webinars and video teleconference presentations, and asylum officers are expected to read the training materials and complete exercises and quizzes in preparation for residential training. Second, asylum officers participate in a 6-week residential basic training program, which includes 3 weeks of RAIO Directorate training and 3 weeks of Asylum Division training. Both courses include classroom instruction, practical exercises, and mock interviews on a variety of topics, such as national security, case law, children's claims, gender-related claims, human trafficking, and interviewing. At the end of the residential training courses, new asylum officers must pass final exams on the course material with a score of at least 70 percent.
Third, USCIS policy requires asylum offices to allocate 4 hours per week for formal or informal training for asylum officers and supervisory asylum officers. The training can range from classroom instruction by the asylum office's Training Officer to individual study time that asylum officers can use to study case law, research country conditions affecting prospective asylees, and read new USCIS procedures and guidance. The Asylum Division requires Training Officers to track the date and topic of each weekly training session and report that information to Asylum Division headquarters on a quarterly basis. Regarding the distance training and residential training for new officers, USCIS's training materials include some information related to identifying and addressing potential fraud. Specifically, the RAIO distance training includes a webinar about fraud, and during the RAIO residential training sessions, asylum officers receive classroom instruction on various topics such as interviewing, evidence, and gender-related claims. Asylum officers also participate in mock interviews. In addition, the Asylum Division residential training includes in-class instruction on the topics mentioned above, as well as on fraud-related issues, and asylum officers participate in practical exercises and mock interviews related to various topics. Specifically, during the Asylum Division residential training session, new asylum officers receive 4 hours of fraud training delivered via PowerPoint slide presentations taught by various FDNS officials. During this session, asylum officers also complete practical exercises related to the fraud referral sheet. According to USCIS officials, although each instructor has his or her own set of slides and may present the information in different formats or use different asylum case examples, the content of these slides does not vary among FDNS instructors and the instructors teach a core set of principles in each class. We analyzed the RAIO distance training webinar regarding fraud, as well as two presentations that USCIS provided to us as examples of those used during the Asylum Division's training session. We found that the slides contained information on fraud indicators and the fraud referral process; in particular, one PowerPoint presentation defined fraud, listed types of asylum fraud, highlighted the FDNS fraud referral sheet, and provided examples of prior fraud investigations. While the distance and residential training sessions include materials related to asylum fraud, these materials do not include the same level of detail, depth, or breadth as the written training modules for other RAIO and Asylum Division training sessions, which also serve as reference materials for asylum officers after they begin to adjudicate cases. In particular, RAIO's written training modules on other topics, such as the modules on human trafficking and gender-related claims, provide more robust discussions of each topic, contain links to relevant laws, and include suggested supplemental reading materials. For example, the human trafficking module and the Asylum Division supplement contain lists of suggested interview questions, a sample memo that asylum officers can use to document human trafficking concerns, and a sample asylum decision. The gender-related claims module contains substantive definitions of eight types of gender-based harm, proposed interview considerations and sample questions, and an extensive legal analysis of such claims.
The materials used for RAIO and Asylum Division training on fraud provide useful information on how fraud is defined and how to make referrals of suspected fraud to FDNS; however, these materials do not include extensive definitions of fraud, a sample memo, a sample decision, or sample interview questions. For example, our review of RAIO and Asylum Division training materials showed that these materials do not explain how asylum officers are to interview applicants when they suspect fraud or document fraud when writing asylum decisions. Moreover, supervisory asylum officers and asylum officers at six of the eight asylum offices we spoke with stated that they need additional fraud training. In particular, asylum officers in three offices cited a need for training on interviewing applicants in cases where they suspect fraud, and officers we spoke with at two offices cited a need for training on how to document and substantiate fraud in asylum decisions. Prior to 2012, USCIS had a written fraud training module. USCIS redeveloped its asylum officer training in 2012 and, since that time, neither the RAIO Directorate nor the Asylum Division distance or residential basic training courses have been guided by a written module on asylum fraud. Other USCIS materials refer asylum officers to the pre-2012 written fraud training module, which is no longer in place. For example, the Affirmative Asylum Procedures Manual refers asylum officers to the basic training materials for further guidance and instruction on various subjects, including how to address fraudulent evidence in an asylum application. Further, five of the RAIO training modules on other topics—such as the modules covering the affirmative asylum process and procedures, decision making, and evidence—refer asylum officers to the pre-2012 fraud module for more details on how to address and detect fraud in asylum applications. In September 2015, Asylum Division officials told us that they were working to finalize an updated fraud training module, but stated that the module required additional review before being finalized. Officials were unable to provide a time frame for finalizing the module. Officials previously attributed these delays to vacancies in the senior FDNS positions overseeing RAIO and the Asylum Division, whose occupants would need to approve the updated module. As of March 2015, those positions have been filled. In technical comments that USCIS provided to us on a draft of this report, USCIS stated that it expected to finalize the written training module by March 2016 and provide this training to asylum officers by September 2016. While these plans are a positive step, it is too soon to tell the extent to which the finalized module will address the limitations we and asylum officers identified. Regarding ongoing training for asylum officers, according to Asylum Division officials, USCIS complements its basic training program by providing weekly training sessions on a variety of topics, including fraud issues. However, our analysis of quarterly Training Officer reports for all eight asylum offices in fiscal year 2014 found that 8 of 408 training sessions were reported as being dedicated to fraud, and four of the eight asylum offices did not report providing any training dedicated specifically to fraud in fiscal year 2014. According to Asylum Division officials, many of the weekly training sessions in fiscal year 2014 focused on credible and reasonable fear because of the increased number of those cases.
According to our analysis of the training reports, the most common use of weekly training time was staff meetings or cancellations of formal training for the week, and the second most common use was information on country conditions. According to Asylum Division officials, training on country conditions can provide asylum officers with information they can use to detect fraud in interviews; however, these trainings are not directly focused on identifying fraud. Weekly training topics also included security checks, immigration and asylum law, and USCIS policy changes. Officers in three of the asylum offices we spoke with said that weekly training was not helpful for asylum adjudications. In one office, for example, officers stated that the weekly training was not helpful for identifying fraud and was a burden at times because of their adjudication workload. The Fraud Framework states that it is a leading practice for agencies to design and implement specific controls to prevent and detect fraud, which include fraud awareness initiatives such as training. Increasing managers’ and employees’ awareness of potential fraud schemes through training and education can serve a preventive purpose by helping to create a culture of integrity and fraud deterrence. Providing asylum officers with additional training on asylum fraud, including finalizing the fraud training module and Asylum Division supplement for new asylum officers, would better position USCIS to ensure that asylum officers have the training and skills needed to detect and address fraud indicators. USCIS has taken steps to assess training needs among asylum officers; however, USCIS has not conducted an agencywide training needs assessment for asylum officers since 2010. In 2008, we recommended that the Chief of the Asylum Division develop a framework for soliciting information in a structured and consistent manner on asylum officers’ and supervisors’ respective training needs. In response to our recommendation, USCIS delivered an online training needs assessment to asylum officers and supervisors in July 2010 and committed to creating a training agenda by soliciting and evaluating training needs and priorities annually thereafter. However, USCIS has not conducted regular training needs assessments since 2010. In 2012, as part of an effort to redesign its training programs, RAIO hired an independent contractor to identify critical skills for RAIO officers, develop strategies to deliver training content, and support the development of new officer exams. However, the exercise was a one-time effort, not an ongoing mechanism. As of April 2015, RAIO and Asylum Division officials stated that they collect feedback from new asylum officers immediately following each basic training course using an online survey collection tool. Asylum officers are encouraged to fill out a questionnaire related to the course and the instructor after each basic training module. At the conclusion of distance and residential training, RAIO officials compile the feedback and discuss ways to improve future sessions. However, both Asylum Division and RAIO officials stated that they review survey results as they are collected after each session rather than tracking trends across multiple classes of participants. Furthermore, asylum officers cannot use this feedback mechanism once they return to their asylum offices and begin adjudicating cases.
According to RAIO officials, in June 2015, RAIO began developing a new post-training survey to assess the effectiveness of basic training for new officers. As of September 2015, the survey instrument is in draft form and undergoing internal review. RAIO officials said they plan to survey participants from the calendar year 2015 basic training program, and may include participants who attended basic training prior to 2015. However, RAIO officials stated that, like the online surveys following basic training modules, this survey will be limited to new asylum officers. Asylum Division officials stated that they collect information on training needs through monthly calls with Training Officers in each asylum office, as well as a recently implemented Quality Workplace Initiative to allow asylum officers to provide feedback within asylum offices on any topic or issue. However, the Asylum Division does not request information on training needs from the officers themselves on a regular basis and has not formally analyzed officer training needs over time. Further, the Asylum Division does not specifically solicit feedback on training needs through the Quality Workplace Initiative. Asylum Division officials stated that they previously collected feedback from new officers several months after they returned from basic training, but they discontinued this practice because of low response rates and a lack of resources. Asylum Division officials stated that it is difficult to devote resources to assessing the training needs of existing asylum officers when much of the Asylum Division’s training resources are devoted to training newly hired asylum officers. GAO’s Guide for Strategic Training and Development Efforts in the Federal Government states that evaluating training can aid decision makers in managing scarce resources and provide agencies information to systematically track the cost and delivery of training and assess the benefits of these efforts. Further, Standards for Internal Control in the Federal Government states that effective management of an organization’s workforce includes relevant training and that management must continually assess and evaluate its internal control activities to ensure that the control activities being used are effective and updated when necessary. During our interviews with asylum officers and RAIO and Asylum Division officials, the perspectives regarding the effectiveness of the training program varied. Both RAIO and Asylum Division officials said that the asylum officer basic training was sufficient and thoroughly prepared officers to adjudicate cases; however, officers we spoke with in six of eight asylum offices stated that the basic training was insufficient. Specifically, asylum officer perspectives on the sufficiency of their training on credibility differed from those of RAIO and Asylum Division officials. According to Asylum Division officials, training on credibility provides information to asylum officers on how, for example, they can ask questions during interviews to determine whether an applicant’s claim is credible. Suspected contradictions in an applicant’s testimony may indicate credibility concerns or fraud. Therefore, officials stated that ongoing training related to credibility is crucial for new officers. However, asylum officers we spoke with in seven of the eight asylum offices stated that USCIS’s credibility training is insufficient for asylum officers. 
Both the RAIO and Asylum Division basic training courses include modules on credibility; however, as of June 2015, the Asylum Division’s credibility training materials were under revision. Although the draft credibility training materials we analyzed discussed legal standards of credibility and case law analysis, the lesson plan contained blank sections and, unlike other RAIO and Asylum Division training materials, did not include sample decisions or memos asylum officers can use to document credibility concerns. According to our analysis of the weekly training reports, 11 of 408 training sessions were reported as being dedicated to credibility determinations. The Guide for Strategic Training and Development Efforts in the Federal Government states that agencies should be able to evaluate training and development programs and demonstrate how these efforts help develop employees and improve the agencies’ performance. Additionally, because the evaluation of training and development programs can aid decision makers in managing scarce resources, our guide notes the importance of agencies developing evaluation processes that systematically track the cost and delivery of training and development efforts and assess the benefits of these efforts. USCIS does not have mechanisms in place to allow asylum officers to provide feedback about training needs after they begin adjudicating cases, making it difficult for Asylum Division headquarters officials to regularly obtain perspectives from asylum officers and supervisory officers about asylum officer training. In addition, asylum officers at one asylum office we spoke with said that a training feedback loop would improve training for asylum officers by allowing them to make suggestions for future training. Asylum officers within that office said they have made training requests to supervisors in the past, but did not see any follow-up or improvements as a result of their suggestions. Developing and implementing a mechanism to regularly collect feedback from asylum officers and supervisory asylum officers on their training needs would provide USCIS with insights to help the agency better evaluate its training program and enhance the training courses based on asylum officer feedback. According to the Chief of the Asylum Division and other senior division officials, it has been difficult for USCIS to retain asylum officers because of the challenging nature of the position and the variety of other career opportunities available to asylum officers; however, USCIS does not systematically collect or analyze attrition data for asylum officers—a key component of strategic workforce planning. Asylum Division officials told us they use DHS’s staffing database, the Table of Organization Position System (TOPS), to track net asylum officer staffing changes for each fiscal year. However, these officials stated that this database does not capture comprehensive asylum officer attrition rates. For example, Asylum Division officials stated that TOPS does not track total hiring for each position type within the division and does not record departures from the asylum officer position when officers transfer within USCIS. Asylum Division officials also stated that they collect information monthly from each asylum office on all personnel changes, including new hires, transfers, and departures. However, Asylum Division officials told us that they do not collect these data in a systematic manner and rely on asylum offices to manually collect and report them to headquarters.
In April 2015, we requested asylum officer attrition data from the Asylum Division for fiscal years 2010 through 2014. At the conclusion of our audit work, in September 2015, the Asylum Division provided updated attrition data that officials stated were reliable. These data differed significantly from the initial data provided in August 2015. Asylum Division officials stated that they had compiled these data by manually reviewing all personnel changes in the Asylum Division for fiscal years 2010 through 2014, a process that was labor-intensive and required several weeks to complete. Asylum officers and supervisory asylum officers we interviewed stated that, from their perspectives, attrition is high among asylum officers and this poses several challenges in effectively adjudicating asylum applications. For example, they stated that attrition has increased time pressures on each officer as asylum officers resign or transfer out of the Asylum Division. Officers we interviewed at all eight asylum offices told us that they face pressure from time constraints, which affects their ability to devote time to detecting fraud in asylum applications. In addition, according to senior Asylum Division officials, attrition requires USCIS to hire new, inexperienced officers who are not as knowledgeable about how to detect asylum fraud as more experienced officers. Supervisory asylum officers we spoke with told us that fraud detection is a skill honed through experience, and that newer asylum officers hired as a result of increased attrition are less skilled at detecting fraud in asylum applications. Asylum Division officials told us that they have faced challenges because of attrition and are working to reduce attrition among asylum officers. For example, Asylum Division officials told us that they created a new “senior asylum officer” position in 2014 to provide greater opportunity for advancement and have worked to support staff through training and mentoring programs. However, without reliable attrition data, it is difficult for USCIS to assess the effectiveness of these efforts in retaining staff. Key Principles for Effective Strategic Workforce Planning states that federal agencies should develop a strategic workforce plan that incorporates management, employee, and stakeholder input, and identifies critical skills and competencies needed to achieve programmatic goals. Further, the strategic workforce plan should address gaps in the number of staff, ensure that administrative and educational requirements are supporting workforce planning strategies, and monitor and evaluate progress toward programmatic goals. Without reliable, readily available attrition data, USCIS does not have the information needed to develop an effective workforce planning strategy to determine the number of staff needed to address the increase in affirmative asylum applications and the applications backlog. USCIS has implemented some quality assurance procedures for asylum decisions that are designed to ensure asylum officers’ decisions are legally sufficient. However, USCIS’s random quality assurance reviews of asylum cases do not include examination of potential indicators of fraud in the case file. USCIS has a three-tiered framework for conducting quality reviews of asylum decisions. First, the Asylum Division requires a supervisory asylum officer to review every case file to assess whether the asylum officer’s decision is supported by law and whether the asylum officer followed proper procedures.
For fiscal year 2014, USCIS also released new guidelines for asylum officer performance evaluation, which specify that supervisory asylum officers are to evaluate and provide feedback on whether asylum officers appropriately referred fraud indicators to FDNS and submitted fraudulent documents to the Forensic Laboratory or for overseas verification. Second, the Asylum Division’s Quality Assurance Branch requires that asylum offices submit certain types of cases to Asylum Division headquarters for review. According to Quality Assurance Branch officials, these reviews focus on sensitive asylum cases, such as cases involving complex issues of law or cases that could result in particularly negative outcomes if the applicant is improperly denied asylum, such as cases involving a juvenile. For example, as of July 2015, the Quality Assurance Branch requires asylum offices to submit to headquarters all cases for which the principal applicant is under 18 years of age and the officer had decided not to grant asylum. Our review of Quality Assurance Branch data found that, from fiscal years 2010 through 2014, the Quality Assurance Branch reviewed 5,696 applications. The most common type of application reviewed (3,213) involved juvenile applicants. The next most common reviews were of applications granted by an asylum officer for applicants from a country contiguous to the United States (Canada or Mexico) that relate to “novel” legal issues or criminal activity by the applicant in the United States or abroad (829 cases), applications that USCIS determined are likely to be publicized (425), and applications involving potential national security or terrorism risks (414). Third, each asylum office has a Training Officer, who, in addition to developing weekly training for asylum officers, also plays a quality assurance role. However, the extent of this function varies from office to office. Training Officers in six of the eight asylum offices stated that they generally review cases that are required to be submitted for headquarters review. None of the Training Officers we interviewed conducted random reviews of asylum applications and none reviewed applications for indicators of fraud, according to our interviews and observations. In 2008, we reported that although the Asylum Division had a quality review framework to ensure the quality and consistency of asylum decisions, local quality assurance reviews did not always occur. We recommended that USCIS develop a plan to more fully implement its quality review framework to, among other things, ensure that a sample of decisions was reviewed for quality and consistency. DHS concurred with the recommendation and, in response, in April 2009, the Asylum Division developed a program plan for reviewing a sample of asylum officers’ decisions and subsequently piloted the materials it developed for implementing the program. Over a 2-year period in 2012 and 2013, the RAIO Directorate reviewed a sample of decisions from each of the eight offices. Since that time, USCIS has not reviewed further samples of asylum decisions because it is still implementing the action items that resulted from the previous review and because RAIO plans to study credible fear in its next review. RAIO officials told us they tentatively plan to conduct another review of a random sample of affirmative asylum cases in 2017. However, USCIS’s random quality assurance reviews of asylum applications do not include examination for fraud or fraud indicators. 
RAIO’s 2012-2013 random review of asylum decisions did not include fraud because, according to RAIO officials, asylum officers should have referred any cases with fraud indicators to FDNS. The Asylum Division’s reviews of specific types of asylum applications are not random and do not include a review for fraud indicators. Asylum Division officials told us that they do not conduct random reviews of all asylum cases because they have already implemented 100 percent supervisory review of asylum decisions in the field. Furthermore, the Asylum Division’s review does not include a review for fraud indicators because, according to Asylum Division officials, fraud is not a component of legal sufficiency in asylum decisions. The Fraud Framework states that ongoing monitoring and periodic evaluation provide assurances to managers that they are achieving the objectives of fraud prevention, detection, and response. For instance, monitoring and evaluation activities can support managers’ decisions about allocating resources and can help managers to demonstrate their commitment to effectively managing fraud risks. Although supervisory review is an important step in fraud detection and quality assurance, it does not position USCIS to ensure quality and consistency across supervisors and asylum offices, does not provide insight into quality concerns across the Asylum Division, and does not allow USCIS to evaluate whether supervisors are reviewing cases for fraud appropriately. Given USCIS’s plans to conduct future random reviews of asylum applications, including an examination of possible fraud indicators in such reviews would help strengthen USCIS’s oversight of officers’ adjudication of asylum applications and supervisory asylum officers’ reviews of the officers’ adjudications. Random reviews for fraud would also help USCIS evaluate how effectively supervisory asylum officers are implementing the new fiscal year 2014 performance evaluation guidelines for addressing fraud. Law enforcement agencies can pursue criminal charges against individuals who commit asylum fraud; however, according to an official from the Executive Office for U.S. Attorneys, individual asylees who commit asylum fraud may be subject to removal proceedings, but are not generally criminally prosecuted. Under the terms of a memorandum of agreement between USCIS and ICE, HSI has the right of first refusal to investigate all FDNS fraud referrals. However, FDNS immigration officers we interviewed in six of eight asylum offices reported that HSI rarely accepts asylum fraud referrals from FDNS, or that HSI accepts asylum fraud referrals and then does not pursue them or closes them without further investigation. In four of the eight asylum offices, FDNS immigration officers referred 0 or 1 asylum fraud cases to HSI from fiscal years 2010 to 2014. In one asylum office, FDNS immigration officers reported that HSI had not accepted a referral from FDNS in the previous 2 years, and that the U.S. Attorney’s Office, which is responsible for prosecuting asylum fraud cases, does not generally accept asylum fraud referrals. The understanding of these FDNS officers was that the U.S. Attorney’s Office in that district prefers to have at least 100 asylum applicants connected to an asylum fraud case before the office will consider prosecution. According to FDNS officials, fraud cases associated with 100 or more asylum applicants provide for sentencing enhancements, which is one of the factors that influence the willingness of HSI and U.S. 
Attorney’s Offices to accept a case. In another asylum office, FDNS immigration officers reported that HSI had not accepted an asylum fraud case for investigation since 2010. From fiscal years 2010 to 2014, FDNS immigration officers working in asylum offices referred 40 cases to HSI; however, as discussed above, FDNS cannot determine how many of these cases involved asylum fraud. In fiscal year 2014, HSI initiated 37 asylum fraud investigations, which resulted in 7 criminal arrests, 6 indictments, and 4 convictions. ICE headquarters officials stated that criminal investigations for asylum fraud are more likely to be brought against attorneys, preparers, and interpreters who perpetrate large-scale asylum fraud than against individuals. For example, in April 2014, an immigration consultant who was linked by HSI to more than 800 asylum applications filed since 2000 in the Los Angeles Asylum Office was sentenced to 4.5 years in federal prison after pleading guilty to conspiracy, immigration document fraud, and aggravated identity theft. HSI began investigating this individual’s business in 2009. HSI agents in all four of the locations we visited stated that they face challenges in investigating asylum fraud cases, such as competing priorities, confidentiality restrictions, and low interest from the U.S. Attorney’s Offices that prosecute these immigration-related criminal cases. The FBI has also pursued asylum fraud investigations such as Operation Fiction Writer; according to FDNS officials, the asylum office sent repeated referrals to HSI about the asylum fraud ring associated with Operation Fiction Writer from 2005 to 2009. In 2009, HSI requested that the asylum office stop sending it information about Operation Fiction Writer, at which time the asylum office began working with the FBI to pursue the case. As of March 2014, 30 individuals had been charged in connection with Operation Fiction Writer. According to HSI field office officials, asylum fraud prosecutions are time- and labor-intensive and typically do not result in lengthy prison sentences; as a result, both HSI and the U.S. Attorney’s Office tend to focus on large-scale asylum fraud rings, such as those involving attorneys, preparers, and interpreters, rather than individual applicants. Because HSI does not prioritize investigations of single instances of asylum fraud, FDNS immigration officers we interviewed in seven of the eight asylum offices stated that they generally do not submit single-scope cases, in which only one individual is implicated in the fraudulent activity, to HSI. EOIR’s Disciplinary Counsel can pursue a variety of penalties against attorneys who perpetrate asylum fraud in immigration courts. However, as of June 2015, the EOIR Disciplinary Counsel had not taken action to publicly discipline any attorney for committing immigration fraud unless that attorney had already been disbarred by his or her state bar authority. EOIR’s Disciplinary Counsel has jurisdiction over the regulation of practitioners, that is, private immigration attorneys and other accredited representatives authorized to practice before the BIA and the immigration courts. The Disciplinary Counsel investigates complaints about practitioners who may be engaging in criminal, unethical, or unprofessional conduct or in frivolous behavior before EOIR and takes disciplinary action, as appropriate.
The Disciplinary Counsel works closely with EOIR’s Fraud and Abuse Prevention Program; however, their roles differ in that the Disciplinary Counsel seeks to impose disciplinary sanctions against practitioners, while the Fraud and Abuse Prevention Program can refer cases to ICE HSI or other law enforcement agencies for criminal investigation. The Disciplinary Counsel may choose to resolve potential disciplinary issues prior to issuance of a Notice of Intent to Discipline by taking certain confidential actions against a practitioner. Such confidential discipline includes warning letters or informal admonitions for low-level misconduct or for first-time offenders. According to EOIR’s Disciplinary Counsel, confidential discipline is intended to educate the lawyer about what he or she did wrong and how to improve conduct in the future. Public discipline imposed by the BIA includes a range of disciplinary actions, such as public censure, suspension, or disbarment. Disbarment, in which an attorney is prohibited from practicing law before EOIR’s immigration courts and the BIA, is the most severe disciplinary sanction that the BIA can impose. According to the Disciplinary Counsel, to date, it has not prosecuted any original jurisdiction cases to the point of disbarment, meaning that it has not requested disbarment for any attorneys who engaged in asylum fraud and who were not already disbarred by their state bar or a federal court. Disciplinary Counsel officials stated that they have not initiated any original jurisdiction disbarments against attorneys in part because of a lack of administrative resources to pursue such cases. The Disciplinary Counsel has completed reciprocal disciplinary cases, in which attorneys who may have engaged in fraud and have already been suspended or disbarred by their state bar or by a federal court, or who have been convicted of a crime, are also disbarred by EOIR. An attorney who has been disbarred by a state bar or a federal court is permitted to practice before the immigration courts until EOIR takes the proper reciprocal action. Asylum terminations due to fraud are not common and have decreased in recent years. USCIS data indicate that USCIS terminated the asylum status of 374 individuals for fraud from fiscal years 2010 through 2014. In the same time period, USCIS granted asylum to 76,122 individuals. The number of USCIS asylum terminations for fraud has decreased in recent years, from 103 in fiscal year 2010 to 34 in fiscal year 2014. If a final order by an immigration judge or the BIA specifically finds that the individual knowingly filed a “frivolous” asylum application and the individual initially received a warning regarding the consequences of filing a frivolous application, then he or she will be barred from receiving future immigration benefits. Asylum Division officials attributed the decrease in asylum terminations due to fraud from fiscal year 2010 to fiscal year 2014 to several factors. First, according to Asylum Division officials, USCIS made several policy changes in order to comply with two decisions of the United States Court of Appeals for the Ninth Circuit. In Robleto-Pastora v. Holder, the court noted the BIA’s conclusion that asylees who adjust to LPR status no longer qualify as asylees and held, among other things, that an alien who has previously adjusted to LPR status retains that status unless he or she receives a final order of removal.
Accordingly, a former asylee who had already adjusted to LPR would no longer have asylum status to terminate. According to Asylum Division officials, USCIS changed its policy nationwide in June 2012 and no longer pursues termination of asylum status for fraud after someone has adjusted to LPR. In June 2012, USCIS developed a process, called Post Adjustment Eligibility Review, for addressing suspected fraud with respect to former asylees who have already adjusted to LPR. Under the Post Adjustment Eligibility Review process, an FDNS immigration officer reviews adverse information about the individual, documents a summary of findings, and forwards the file to an asylum officer. An asylum officer then reviews the evidence to determine whether sufficient evidence of fraud exists, and, if a preponderance of the evidence supports the finding of fraud, forwards the case to ICE OPLA, which reviews the case and determines whether the individual should be placed in removal proceedings. Additionally, in Nijjar v. Holder (August 2012), the Ninth Circuit held that only the Attorney General has the authority to terminate asylum status because Congress did not confer authority to terminate asylum on DHS. On the basis of this ruling, USCIS does not have the authority to terminate an individual’s asylum status in the Ninth Circuit, which includes the Los Angeles and San Francisco asylum offices. Subsequently, in August 2012, the BIA noted that no other circuits currently share the Ninth Circuit’s position that DHS lacks authority to terminate asylum, and the case before it arose within the Second Circuit; as a result, it would “only apply Nijjar within the jurisdiction of the Ninth Circuit.” Therefore, Asylum Division officials stated that USCIS applied the Nijjar ruling only in the asylum offices located within the Ninth Circuit—San Francisco and Los Angeles. Asylum Division officials stated that these two decisions in the Ninth Circuit resulted in a decrease in the number of terminations conducted by USCIS because, prior to these decisions, USCIS would pursue termination of the asylum status of individuals who had adjusted to LPR nationwide and was able to terminate asylum status in the Ninth Circuit. Second, Asylum Division officials stated that increases in the number of affirmative asylum, credible fear, and reasonable fear applications in recent years have strained resources in the Asylum Division, the immigration courts, and ICE OPLA. Terminations are time- and labor-intensive, according to Asylum Division officials, and there are fewer resources available to pursue them than in the past because of the increased asylum caseload. In seven of the eight asylum offices, asylum officers we spoke with stated that terminations are not a priority. Third, Asylum Division officials stated that individuals who lose their asylum status because of fraud generally would not fit within the Secretary of Homeland Security’s enforcement priorities, making the likelihood very low that they would be removed from the United States after their asylum status has been terminated. DHS’s enforcement and removal priorities focus on the removal of aliens who pose a threat to national security, border security, and public safety.
On the basis of our analysis of USCIS, EOIR, and ICE Enforcement and Removal Operations data, we found that 14 of the 374 people who had their asylum status terminated for fraud from fiscal years 2010 through 2014 were indicated as having been removed from the country by ICE Enforcement and Removal Operations as of March 2015; 4 were granted voluntary departure; and 20 had been ordered removed by an immigration judge, but ICE had not yet removed them. USCIS has taken some steps to address asylum cases pending termination due to fraud but has not tracked these cases or established goals for completing termination cases. The Asylum Division receives information about potential asylum fraud from a variety of sources, including USCIS offices that adjudicate asylees’ applications for other immigration benefits such as adjustment to LPR and naturalization, and information arising from criminal investigations into attorneys, preparers, and interpreters suspected of engaging in asylum fraud. After receiving such information, the asylum office with jurisdiction over the asylee’s place of residence reviews the case to assess whether to pursue potentially terminating the individual’s asylum status, and, if a preponderance of the evidence supports a finding of fraud, sends the asylee a Notice of Intent to Terminate and schedules a termination interview. However, the Asylum Division does not begin to track cases pending potential termination until the asylum office issues a Notice of Intent to Terminate. Implementation of procedures for addressing terminations varies across asylum offices. For example, in one office, asylum officers maintain hard copies of the files pending termination in a particular area of the office’s file room. In another office, asylum officers maintain a spreadsheet of pending termination cases. In other offices, there is an asylum officer responsible for handling terminations, typically on a part-time basis. However, the Asylum Division does not track the number of cases that are pending review for potential termination across asylum offices, making it difficult for USCIS to know how many such cases exist. The Fraud Framework states that it is important for agencies to ensure that the response to fraud is prompt and consistently applied, and that monitoring response activities helps provide this assurance. Monitoring fraud response activities, such as tracking asylum cases pending termination due to fraud, could help the Asylum Division ensure that such cases are managed promptly and consistently. Asylum Division officials told us that they have identified a need for greater tracking of cases pending termination review to better address requests for the asylees’ files from other USCIS offices. In May 2015, Asylum Division officials requested a modification to RAPS that would give asylum officers the capability to record that a case is pending review for termination. As of September 2015, Asylum Division officials stated that this modification would be released in November 2015. In addition, the Asylum Division has limited goals or metrics for reviewing termination cases, such as goals or metrics for the completion of terminations.
According to USCIS officials, USCIS faces progressively higher burdens of proof to address potential asylum fraud as the asylee receives additional immigration benefits, which requires more time and resources. In August 2015, the Asylum Division adopted a new target of 180 days for conducting initial termination reviews that applies solely to cases with pending applications for adjustment to LPR. This goal is a positive step, but it addresses only the subset of pending terminations for individuals with pending applications for adjustment to LPR, and it applies only to initial termination reviews rather than termination completions. Furthermore, asylees who have not applied for adjustment to LPR may be eligible to receive certain federal benefits, such as Supplemental Security Income, Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families, and Medicaid; the new 180-day target will not apply to these individuals unless and until they apply to adjust to LPR. Asylum Division officials stated that they periodically review the number of terminations pending review in each asylum office to assess staffing needs, and asylum offices may also choose to prioritize certain termination reviews, as needed. However, Asylum Division officials stated that the division has not adopted goals or metrics for the completion of terminations because termination proceedings are extremely labor-intensive and asylum offices have limited resources to allocate to terminations. Asylum Division officials also stated that terminations are not a priority for their officers given increases in their adjudicative caseload of affirmative asylum, credible fear, and reasonable fear cases as well as the prioritization of certain time-sensitive cases, such as those involving unaccompanied minors. According to the Fraud Framework, the likelihood that individuals who engage in fraud will be identified and punished serves to deter others from engaging in fraudulent behavior. Timely reviews of potential asylum terminations can also help the Asylum Division use its resources more effectively because, according to Asylum Division and FDNS officials, USCIS faces progressively higher burdens to address potential asylum fraud as the asylee receives additional immigration benefits. USCIS’s new 180-day target for conducting initial termination review for cases with pending applications to adjust to LPR is a positive step; however, developing and implementing timeliness goals for all pending termination reviews of asylees granted affirmative asylum would help USCIS to better identify the staffing resources needed to address the terminations workload and better utilize existing resources to address potential fraud before asylees adjust to LPR or receive other immigration or federal benefits. The U.S. asylum process is designed to protect those who legitimately fled persecution, affording them the opportunity to prove their eligibility and credibility. Adjudicating asylum cases is a challenging undertaking because asylum officers do not always have the means to determine which claims are authentic and which are fraudulent. Balancing the potentially serious consequences for applicants who are incorrectly denied asylum against the importance of maintaining the integrity of the asylum system, asylum officers and immigration judges must make the best decisions they can within the constraints they face.
Both DHS and DOJ have established dedicated antifraud entities—an important leading practice for managing fraud risks—but these agencies have limited capability to detect and prevent asylum fraud and both agencies’ efforts to date have focused on case-by-case fraud detection rather than more strategic, risk-based approaches. DHS and DOJ could be better positioned to assess and address fraud risks across their asylum processes. Specifically, regularly assessing fraud risks across asylum claims would help provide DHS and DOJ with reasonable assurance that their fraud prevention controls are effective and appropriately targeted to their fraud risks. Further, developing and implementing a mechanism to collect more complete and reliable data on FDNS’s fraud detection activities, including the number of referrals that asylum officers submit to FDNS and the number of FDNS investigations that result in a finding of asylum fraud, would help USCIS officials determine how often FDNS officers have identified and pursued fraud indicators. In addition, identifying and implementing tools for identifying fraud patterns in asylum applications, such as automated analytic software and prescreening, would better position FDNS immigration officers to identify cases associated with particular asylum fraud rings and aid in the investigation and prosecution of the attorneys, preparers, and interpreters who perpetrate asylum fraud. Moreover, developing asylum-specific guidance on the fraud detection roles and responsibilities of FDNS immigration officers working in asylum offices would help those officers better use the tools that are available to them. By providing additional fraud training for asylum officers and regularly assessing asylum officer training needs, USCIS could better ensure that asylum officers have the training and skills needed to detect and address fraud indicators in the asylum applications they adjudicate. Additionally, including an examination of possible fraud indicators in future USCIS random reviews of asylum decisions would help strengthen USCIS’s oversight of officers’ adjudication of asylum applications and supervisory asylum officers’ reviews of those adjudications. Last, developing and implementing timeliness goals for all pending termination reviews of asylees granted affirmative asylum would help USCIS better utilize existing resources by addressing potential fraud before asylees adjust to LPR or receive other immigration or federal benefits. To provide reasonable assurance that EOIR’s fraud prevention controls are adequate, we recommend that the Attorney General direct EOIR to conduct regular fraud risk assessments across asylum claims in the immigration courts.
To provide reasonable assurance that USCIS’s fraud prevention controls are adequate and effectively implemented, and ensure that asylum officers and FDNS immigration officers have the capacity to detect and prevent fraud, we recommend that the Secretary of Homeland Security direct USCIS to take the following ten actions: conduct regular fraud risk assessments across the affirmative asylum application process; develop and implement a mechanism to collect reliable data, such as the number of referrals to FDNS from asylum officers, about FDNS’s efforts to combat asylum fraud; identify and implement tools that asylum officers and FDNS immigration officers can use to detect potential fraud patterns across affirmative asylum applications; require FDNS immigration officers to prescreen all asylum applications for indicators of fraud to the extent that it is cost-effective and feasible; develop asylum-specific guidance on the fraud detection roles and responsibilities of FDNS immigration officers working in asylum offices; develop and deliver additional training for asylum officers on asylum fraud; develop and implement a mechanism to regularly collect and incorporate feedback on training needs from asylum officers and supervisory asylum officers; develop and implement a method to collect reliable data on asylum officer attrition; include a review of potential fraud indicators in future random quality assurance reviews of asylum applications; and develop and implement timeliness goals for all pending termination reviews of affirmative asylum cases. We provided a draft of this report to DOJ and DHS for their review and comment. DOJ did not provide official written comments to include in this report. However, in an e-mail received on November 12, 2015, a DOJ audit liaison official told us that DOJ concurred with our recommendation that the Executive Office for Immigration Review conduct regular fraud risk assessments across asylum claims in the immigration courts. DHS provided formal, written comments, which are summarized below and reproduced in full in appendix III. DOJ and DHS provided technical comments, which we incorporated as appropriate. DHS concurred with our ten recommendations and described actions under way or planned to address them. With regard to our first recommendation that USCIS conduct regular fraud risk assessments, DHS indicated that the Asylum Division and RAIO FDNS plan to develop an assessment tool and implementation plan for completing regular fraud risk assessments of the affirmative asylum process, with the first assessment to be completed no later than the end of fiscal year 2017. With regard to our second recommendation that USCIS develop and implement a mechanism to collect reliable data on FDNS’s efforts to combat fraud, DHS noted that FDNS plans to update user guidance and training materials and conduct training to clarify FDNS-DS data entry rules for asylum fraud referrals, leads, and cases and plans to complete these efforts by the end of fiscal year 2016. With regard to our third and fourth recommendations that USCIS identify and implement tools to detect fraud patterns across applications and require FDNS immigration officers to pre-screen all asylum applications for indicators of fraud, DHS noted that USCIS recently approved a fiscal year 2016 budget request for such tools and stated that the Asylum Division and FDNS are coordinating with the Office of Information Technology to develop requirements and identify tools for acquisition. 
As part of this acquisition process, the Asylum Division and RAIO FDNS are also discussing the acquisition of software that would aid FDNS immigration officers in prescreening all asylum cases. DHS also stated that the Chiefs of the Asylum Division plan to issue a joint memorandum and companion guidance for asylum offices that will establish the framework for a national prescreening program. Regarding our fifth recommendation that USCIS develop asylum-specific guidance on roles and responsibilities for FDNS immigration officers working in asylum offices, DHS stated that USCIS plans to issue a memorandum to clarify its guidance on the fraud-related roles and responsibilities of FDNS officers working in asylum offices by the end of fiscal year 2016. Regarding our sixth recommendation that DHS develop and deliver additional fraud training for asylum officers, DHS stated that the Asylum Division is in the process of finalizing an updated lesson plan about fraud in asylum claims to be ready for asylum officer training by the end of March 2016. DHS also stated that it would provide this training to its asylum officers by the end of fiscal year 2016. In commenting on our draft report, DHS also stated that the draft did not reflect all of the fraud training currently provided to new asylum officers. In response to this comment, we clarified our discussion of USCIS’s existing fraud training for new officers. Specifically, we added details about the fraud-related training sessions USCIS delivers as part of RAIO and Asylum Division basic trainings. Regarding our seventh recommendation that USCIS develop and implement a mechanism to regularly collect and incorporate feedback on training needs from asylum officers and supervisory asylum officers, DHS stated that USCIS is in the process of preparing a division survey to be delivered to officers and supervisors to gather feedback on training needs in fiscal year 2016, and stated that officers and supervisors will be surveyed on training no less than once every 2 years. With regard to our eighth recommendation that DHS develop and implement a mechanism to collect reliable data on asylum officer attrition, DHS stated that, beginning in September 2015, the Asylum Division expanded the scope and frequency of its tracking of asylum officer attrition data. DHS stated that, moving forward, the Asylum Division plans to update its data on asylum officer transfers, promotions, moves to other USCIS offices, moves to outside employment, and departures from the labor force on a biweekly basis and confirm the accuracy of those data through regular validation. Based on this information, DHS requested that we consider this recommendation closed. While these are positive steps toward addressing our recommendation, USCIS needs to demonstrate that it has implemented its plans to update and validate its asylum officer attrition data to fully address the intent of our recommendation. Regarding our ninth recommendation that USCIS review for potential fraud indicators in future random quality assurance reviews of asylum applications, DHS stated that, in October 2015, the Asylum Division added a fraud-specific question to the Asylum Division quality assurance review checklist. DHS stated that this change will ensure that asylum cases selected for Asylum Division quality assurance will be reviewed for fraud indicators to determine whether those indicators were properly identified, analyzed, and processed.
Based on this information, DHS asked us to consider this recommendation closed. While DHS has taken positive initial steps toward addressing this recommendation, to fully address the intent of our recommendation, DHS needs to demonstrate the extent to which this change allows it to review for fraud indicators in a random sample of all asylum cases, rather than in only the specific categories of cases that Asylum Division headquarters currently reviews. As we note in our report, the Asylum Division does not currently conduct random reviews of all asylum cases. Regarding our tenth recommendation that DHS develop and implement timeliness goals for pending termination reviews, DHS stated that the Asylum Division plans to revise its case management system, RAPS, to improve tracking of termination processing. The Asylum Division then plans to analyze the resulting data to develop timeliness goals for termination cases by the end of fiscal year 2016 and plans to implement those goals during fiscal year 2017. These and other actions that DHS indicated are planned or under way should help address the intent of our recommendations if implemented effectively. DHS also noted that judicial constraints imposed by Nijjar v. Holder (9th Cir. 2012) have foreclosed DHS’s ability to terminate asylum status for fraud in the Ninth Circuit, and stated that a legislative change would be necessary to restore USCIS’s authority to terminate asylum status in the first instance. We are sending copies of this report to the Secretary of Homeland Security, the Attorney General of the United States, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to (1) describe what Department of Homeland Security (DHS) and Department of Justice (DOJ) data indicate about trends in the characteristics of asylum claims, (2) evaluate the extent to which DHS and DOJ have designed mechanisms to prevent and detect fraud in the asylum system, and (3) evaluate the extent to which DHS and DOJ have designed and implemented processes to address any fraud that has been identified in the asylum system. To describe trends in the characteristics of asylum claims, we analyzed U.S. Citizenship and Immigration Services (USCIS) Refugee, Asylum, and Parole System (RAPS) data on asylum applications, adjudications, and grants by asylum offices nationwide for fiscal years 2010 through 2014. In addition, we analyzed record-level data from RAPS for asylum applications adjudicated from fiscal years 2010 through 2014. To assess the reliability of the RAPS data, we reviewed USCIS documents about the design of the RAPS system, completed data entry and duplicate record checks, and discussed the reliability of the data with USCIS officials. We also analyzed two reports issued by the Executive Office for Immigration Review’s (EOIR) Office of Planning, Analysis, and Statistics from fiscal year 2010 through 2014—Asylum Statistics and Statistics Yearbook. These reports contain data about the characteristics of asylum applications adjudicated through the immigration courts in the period of our analysis.
To assess the reliability of the data in EOIR’s reports, we reviewed EOIR documentation about the management of EOIR cases and appeals and spoke with officials about how EOIR collects and monitors data. The EOIR Office of Planning, Analysis, and Statistics changed the methodology it used to compile EOIR statistics in the reports issued in fiscal year 2013, and data from previous fiscal years are not comparable with those reported in the fiscal year 2013 and 2014 reports. As a result, we relied on the fiscal years 2013 and 2014 reports for our analyses. We determined that the USCIS and EOIR data about the characteristics of asylum claims were sufficiently reliable for the purposes of this report. To evaluate the extent to which DHS and DOJ have designed mechanisms to prevent and detect fraud in the asylum system, we identified the antifraud entities responsible for detecting and preventing asylum fraud within USCIS and EOIR and reviewed their asylum fraud data, policies, and practices. We analyzed data from the Fraud Detection and National Security Directorate’s (FDNS) case management system, FDNS Data System (FDNS-DS), about the number of benefit fraud cases associated with asylum applications that were opened from fiscal years 2010 to 2014 and the number of those cases in which FDNS found fraud. To assess the reliability of these data, we reviewed policies about how data are entered into FDNS-DS, such as the Fraud Detection Standard Operating Procedures and the FDNS Basic Training presentation that FDNS uses to introduce FDNS-DS to staff. We interviewed FDNS immigration officers and headquarters officials about their use of FDNS-DS and observed FDNS immigration officers using FDNS-DS. We discuss our findings about the reliability of the FDNS-DS data in this report. We also analyzed the extent to which the data captured in RAPS can be used to identify and detect asylum fraud. We compared FDNS immigration officers’ reported use of the FDNS-DS system and FDNS-DS data capabilities with procedures in the Fraud Detection Standard Operating Procedures and standards in Standards for Internal Control in the Federal Government. To assess USCIS policies and procedures to prevent and detect fraud in the USCIS affirmative asylum process, we reviewed USCIS Asylum Division policy documents such as the Affirmative Asylum Procedures Manual, FDNS policy documents such as the Fraud Detection Standard Operating Procedures and FDNS Field Priorities FY15, and guidance such as the 2015 memorandum of agreement between FDNS and the Refugee, Asylum, and International Operations Directorate (RAIO) regarding the governance structure for FDNS. We reviewed Asylum Division workforce planning efforts to address asylum fraud and interviewed Asylum Division officials about attrition among asylum officers. We reviewed the Asylum Division Staffing Allocation Models, which officials stated were used to support Asylum Division workforce planning efforts, for fiscal years 2012 through 2014, the most recent years available, as well as the Staffing Allocation Model for fiscal year 2015. We also reviewed staffing levels for asylum offices, including asylum officer staffing and FDNS immigration officer staffing, from fiscal year 2010 to 2014 and compared actual staffing levels with estimates in the Staffing Allocation Models. We reviewed asylum officer attrition data, which USCIS compiled manually at our request.
We compared Asylum Division workforce planning efforts with principles in GAO’s Key Principles for Effective Strategic Workforce Planning to assess how USCIS workforce planning efforts align with the key principles. We also reviewed USCIS quality assurance policy documents such as the Quality Sampling Reference Guide and the Quality Handbook and spoke with Asylum Division and RAIO officials about the extent to which these reference materials are used in asylum quality assurance. We reviewed documents associated with the random quality assurance reviews that RAIO conducted in each asylum office in 2012 and 2013, including the checklists used to evaluate asylum adjudications and the quality assurance results. We evaluated the extent to which these quality assurance reviews included reviews for fraud. We reviewed performance evaluation documents for asylum office staff, including asylum officers and supervisory officers, and examined the extent to which fraud detection efforts are reflected in staff performance evaluations, including the extent to which supervisory asylum officers evaluate the fraud detection efforts of asylum officers. We spoke with Asylum Division headquarters officials about ongoing Asylum Division headquarters quality assurance reviews of certain asylum adjudications. We reviewed past USCIS efforts to examine fraud in the USCIS asylum system and spoke with officials in the USCIS Office of Policy and Strategy about past efforts and plans for future efforts to examine asylum fraud. We compared these policy documents and their role in preventing and detecting asylum fraud with standards in GAO’s A Framework for Managing Fraud Risks in Federal Programs (Fraud Framework) and Standards for Internal Control in the Federal Government. To learn about FDNS policies and procedures to detect and prevent asylum fraud, we reviewed FDNS guidance such as the Fraud Detection Standard Operating Procedures and training materials for FDNS immigration officers about asylum fraud as well as training materials for asylum officers about how to refer potential fraud to FDNS. We reviewed the extent to which asylum officers and FDNS immigration officers used other fraud detection tools such as overseas verifications and HSI’s Forensic Laboratory. We compared USCIS efforts to prevent and detect fraud with leading practices in GAO’s Framework for Effective Fraud Risk Management. We reviewed USCIS asylum officer basic training materials from RAIO and the Asylum Division, as well as training materials for FDNS immigration officers. We reviewed USCIS Asylum Division quarterly training reports for fiscal year 2014 and used them to analyze the weekly training activities in each asylum office for each week of the reporting quarter. We compared RAIO and Asylum Division training materials with material in GAO’s Guide for Strategic Training and Development Efforts in the Federal Government. We visited five of the eight asylum offices — Newark, New Jersey; New York, New York; Los Angeles, California; Houston, Texas; and Arlington, Virginia. We selected these offices for site visits based on a variety of factors, including their number of asylum officers, the number of asylum applications they receive, and geographic proximity to EOIR immigration courts. During our site visits, we visited immigration courts and observed asylum hearings in New York, Los Angeles, Houston, and Arlington. 
In addition, we interviewed approximately 11 ICE OPLA attorneys and 10 ICE HSI investigators in the New York, Los Angeles, Houston, and Arlington offices. In each asylum office, we observed asylum interviews and spoke with supervisory asylum officers, asylum officers, training officers, and FDNS immigration officers to obtain their perspectives on asylum fraud and the risk of asylum fraud. Although the results of our visits cannot be generalized to officers in all asylum offices or to all immigration courts, they provided first-hand observations on asylum adjudication practices and insights regarding policies and procedures to detect asylum fraud. We conducted in-person interviews during our site visits and telephone interviews with supervisory asylum officers, asylum officers, training officers, and FDNS immigration officers in the remaining three asylum offices – Miami, Florida; Chicago, Illinois; and San Francisco, California. Across the eight asylum offices, we spoke with 35 supervisory asylum officers, 37 asylum officers, 24 FDNS immigration officers (including four supervisors), and 12 training officers. We spoke with supervisory asylum officers, asylum officers, and FDNS immigration officers in all eight asylum offices about the tools and systems that they use to identify and detect asylum fraud and the roles of asylum officers and FDNS immigration officers in asylum fraud detection. We spoke with Asylum Division and RAIO headquarters officials about how asylum officers are trained to detect and prevent fraud, and how training needs are assessed. We also spoke with training officers in each of the eight asylum offices about how they develop and present training, as well as evaluate training needs. We spoke with Asylum Division and RAIO Performance Management and Planning officials about quality assurance mechanisms in the asylum program, such as 100 percent supervisory review of asylum officer decisions, and about the extent to which fraud detection and prevention is part of the Asylum Division quality assurance process.

The EOIR antifraud officer and the EOIR Fraud and Abuse Prevention Program are responsible for detecting and preventing asylum fraud within the immigration courts. We analyzed EOIR Fraud and Abuse Prevention Program case files to determine the number of complaints received, number of case files opened, and number of asylum-related case files opened from fiscal year 2010 through fiscal year 2014. We also reviewed 35 EOIR case files, which EOIR identified as all of the cases associated with asylum fraud. During this review, EOIR classified two of these files as unauthorized practice of law rather than asylum fraud, and opted not to include a case file re-opened in fiscal year 2012 due to a prior case closure in fiscal year 2008. Two other case files were outside of fiscal years 2010 through 2014, the time period of our review. We reviewed EOIR's Fraud and Abuse Prevention Program guidance and policy documentation, including the regulation that established EOIR's antifraud officer position. We also reviewed the Immigration Judge Benchbook, which includes tools, templates, and legal resources for immigration judges to use in their adjudications. We analyzed EOIR's fraud-related training materials for immigration judges, and spoke with the antifraud officer about the fraud detection and prevention activities associated with her role.
While observing immigration court proceedings, including asylum cases, in New York City, Los Angeles, Houston, and Arlington, we spoke with court administrators and immigration judges about asylum fraud.

To evaluate the extent to which DHS and DOJ have designed and implemented processes to address any fraud that has been identified in the asylum system, we analyzed Immigration and Customs Enforcement (ICE) Homeland Security Investigations (HSI) data on the number of asylum fraud indictments, criminal arrests, convictions, and administrative arrests as well as the number of asylum fraud cases initiated by HSI from fiscal year 2010 through fiscal year 2014. We also analyzed USCIS RAPS data to identify the number of individuals who have had their asylum status terminated because of fraud from fiscal years 2010 through 2014 and any trends in asylum terminations because of fraud over those years. We used ICE Enforcement and Removal Operations data to analyze the outcomes for individuals whose asylum status was terminated for fraud from fiscal years 2010 through 2014. We assessed the reliability of these data by reviewing documentation about how data were collected; interviewing knowledgeable agency officials about the data; and conducting electronic testing for missing data, outliers, and obvious errors. We determined that these data were sufficiently reliable for the purposes of analyzing the number of asylum terminations due to fraud and the outcome of those terminations. We reviewed USCIS policy documents related to asylum terminations, such as the Affirmative Asylum Procedures Manual, which details termination policy and procedures that are to be followed for asylum terminations. We also reviewed U.S. Circuit Court of Appeals decisions that were identified by Asylum Division officials as influencing how USCIS pursues asylum terminations due to fraud and USCIS policy documents related to asylum termination, such as the Post Adjustment Eligibility Review memo, that reflect USCIS policy changes made as a result of circuit court decisions. We visited five HSI locations – New York, New York; Washington, D.C.; Houston, Texas; Los Angeles, California; and Fairfax, Virginia – and interviewed officials about how they receive asylum fraud referrals and how they investigate allegations of asylum fraud. We interviewed officials from EOIR about mechanisms to address identified asylum fraud in the immigration courts and how those mechanisms are used, including disciplinary measures available to EOIR for attorneys and other practitioners who commit asylum fraud, and how frequently they are used. We interviewed officials in the eight USCIS asylum offices as well as Asylum Division officials to determine how USCIS handles cases with identified fraud, including cases in which fraud is identified after asylum has been granted, and how USCIS tracks, monitors, and adjudicates cases in which an individual's asylum status is pending termination for identified fraud. We compared USCIS and EOIR mechanisms to address identified asylum fraud and the frequency of their use with mechanisms in GAO's Fraud Framework to assess their likely effectiveness as a fraud deterrent.

We conducted this performance audit from September 2014 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO's A Framework for Managing Fraud Risks in Federal Programs notes that managers who effectively assess fraud risks attempt to fully consider the specific fraud risks the agency or program faces, analyze the potential likelihood and impact of fraud schemes, and then ultimately document prioritized fraud risks. Moreover, managers can use the fraud risk assessment process to determine the extent to which controls may no longer be relevant or cost-effective. There is no universally accepted approach for conducting fraud risk assessments, since circumstances vary among programs; however, assessing fraud risks generally involves five actions, as noted in figure 8. In addition to the contact named above, Kathryn Bernet, Assistant Director; Ashley Vaughan Davis; David Alexander; Dominick Dale; Imoni Hampton; Grant Mallie; Mara McMillen; Linda Miller; Jan Montgomery; Jon Najmi; and Mary Pitts made significant contributions to this report.

Each year, tens of thousands of aliens in the United States apply for asylum, which provides refuge to those who have been persecuted or fear persecution on protected grounds. Asylum officers in DHS's USCIS and immigration judges in DOJ's EOIR adjudicate asylum applications. GAO was asked to review the status of the asylum system. This report addresses (1) what DHS and DOJ data indicate about trends in asylum claims, (2) the extent to which DHS and DOJ have designed mechanisms to prevent and detect asylum fraud, and (3) the extent to which DHS and DOJ designed and implemented processes to address any asylum fraud that has been identified. GAO analyzed DHS and DOJ data on asylum applications for fiscal years 2010 through 2014, reviewed DHS and DOJ policies and procedures related to asylum fraud, and interviewed DHS and DOJ officials in Washington, D.C., Falls Church, VA, and in asylum offices and immigration courts across the country selected on the basis of application data and other factors. The total number of asylum applications, including both principal applicants and their eligible dependents, filed in fiscal year 2014 (108,152) is more than double the number filed in fiscal year 2010 (47,118). As of September 2015, the Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS) has a backlog of 106,121 principal applicants, of which 64,254 have exceeded required time frames for adjudication. USCIS plans to hire additional staff to address the backlog. USCIS and the Department of Justice's (DOJ) Executive Office for Immigration Review (EOIR) have limited capabilities to detect asylum fraud. First, while both USCIS and EOIR have mechanisms to investigate fraud in individual applications, neither agency has assessed fraud risks across the asylum process, in accordance with leading practices for managing fraud risks. Various cases of fraud illustrate risks that may affect the integrity of the asylum system. For example, an investigation in New York resulted in charges against 30 defendants as of March 2014 for their alleged participation in immigration fraud schemes; 829 applicants associated with the attorneys and preparers charged in the case received asylum from USCIS, and 3,709 received asylum from EOIR. Without regular assessments of fraud risks, USCIS and EOIR lack reasonable assurance that they have implemented controls to mitigate those risks.
Second, USCIS's capability to identify patterns of fraud across asylum applications is hindered because USCIS relies on a paper-based system for asylum applications and does not electronically capture some key information that could be used to detect fraud, such as the applicant's written statement. Asylum officers and USCIS Fraud Detection and National Security (FDNS) Directorate immigration officers told GAO that they can identify potential fraud by analyzing trends across asylum applications; however, they must rely on labor-intensive methods to do so. Identifying and implementing additional fraud detection tools could enable USCIS to detect fraud more effectively while using resources more efficiently. Third, FDNS has not established clear fraud detection responsibilities for its immigration officers in asylum offices; FDNS officers GAO spoke with at all eight asylum offices said they have limited guidance with respect to fraud. FDNS standard operating procedures for fraud detection are intended to apply across USCIS, and therefore do not reflect the unique features of the asylum system. Developing asylum-specific guidance for fraud detection, in accordance with federal internal control standards, would better position FDNS officers to understand their roles and responsibilities in the asylum process. To address identified instances of asylum fraud, USCIS can, in some cases, terminate an individual's asylum status. USCIS terminated the asylum status of 374 people from fiscal years 2010 through 2014 for fraud. In August 2015, USCIS adopted a target of 180 days for conducting initial reviews, in which the asylum office reviews evidence and decides whether to begin termination proceedings, when the asylee has applied for adjustment to lawful permanent resident status; however, this goal applies only to a subset of asylees and pertains to initial reviews. Further, asylees with pending termination reviews may be eligible to receive certain federal benefits. Developing timeliness goals for all pending termination reviews would help USCIS better identify the staffing resources needed to address the terminations workload. GAO recommends that DHS and DOJ conduct regular fraud risk assessments and that DHS, among other things, implement tools for detecting fraud patterns, develop asylum-specific guidance for fraud detection roles and responsibilities, and implement timeliness goals for pending termination reviews. DHS and DOJ concurred with GAO's recommendations.
To assess how the structure of the D.C. criminal justice system affected systemwide operations, we first compiled a detailed description of the system’s structure, including its unique attributes. In doing this, we conducted interviews with and obtained and reviewed documents and data from, among other agencies, MPDC, Superior Court, USAO, Corporation Counsel, CJCC, the U.S. Department of Justice, the D.C. Mayor’s Office, D.C. Council, Court Services, Pretrial Services, Corrections Trustee, Defender Service, U.S. Marshals Service, and the D.C. Office of the Chief Medical Examiner. We then compiled a list of potential issues. After discussing these potential issues with the committees of jurisdiction, it was agreed that we would rely on other studies for some issues, such as scheduling of cases in Superior Court and forensics, that were the subject of ongoing or recent reviews. We also excluded juvenile justice, an area that involved noncriminal justice agencies such as the D.C. Department of Human Services. However, as requested, we did develop a description of the juvenile justice case flow process (see app. VI). As a case study of coordination issues, we examined D.C.’s method of processing cases from arrest through initial court appearance before Superior Court—an area of concern for many D.C. criminal justice system officials. We compared this process to the methods used in Philadelphia and Boston. These cities were judgmentally selected because they were both large East Coast cities that had recently revised their methods of processing cases from arrest through initial court appearance. To assess the mechanisms that exist to coordinate the D.C. criminal justice system, we interviewed agency officials and reviewed agency organizational charts, policies, and procedures. To identify initiatives planned or under way to improve D.C.’s criminal justice process, we requested that each CJCC member agency with responsibilities for adult offenders provide us with a list of initiatives, as of November 2000, and include the goal of each initiative, other participating agencies, the status of the initiative, and any results to date plus any planned or completed evaluations of each initiative. We compared the information provided by participating agencies about individual initiatives to determine whether they were in agreement and contacted agencies to clarify and reconcile these differences. A more detailed description of our objectives, scope, and methodology is found in appendix I. We performed our work between October 1999 and December 2000 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Mayor of D.C., Chair of the D.C. Council, Chair of the D.C. Control Board, Chief of the Metropolitan Police Department, U.S. Attorney General, U.S. Attorney for D.C., D.C. Corporation Counsel, Chief Judge of Superior Court of D.C., U.S. Marshal of the Superior Court for D.C., Interim Director of Court Services and Offender Supervision Agency for D.C., Director of D.C. Pretrial Services Agency, D.C. Corrections Trustee, Chief Medical Examiner of D.C., Director of the D.C. Department of Corrections, and the Director of Public Defender Service for D.C. The D.C. criminal justice system involves a number of D.C. agencies, federal agencies, and private organizations. These agencies and organizations are funded with congressionally appropriated federal funds and local D.C. funds. Table 1 shows the agencies of the D.C. 
criminal justice system, whether the agencies are D.C. or federal agencies, and each agency’s principal source of funding. Appendix II provides additional information on the agencies involved in the D.C. criminal justice system. The current structure of the D.C. criminal justice system reflects a number of changes that the National Capital Revitalization and Self-Government Improvement Act of 1997 (D.C. Revitalization Act), as amended, made to D.C.’s criminal justice system. The D.C. Revitalization Act brought a number of D.C. functions, such as sentenced felon incarceration and community supervision, that are normally the responsibility of states rather than cities or counties, under federal funding. Areas affected by the D.C. Revitalization Act included (1) Pretrial Services, (2) Defender Service, (3) Superior Court, (4) sentencing, (5) incarceration, and (6) offender community supervision and parole. Specifically, within the time schedule specified in the D.C. Revitalization Act, these changes included federally funding Superior Court, Pretrial Services, and Defender Service; transferring sentenced felons from D.C.’s Lorton Correctional Complex to BOP custody and supervision; transferring responsibility for parole decisions from the D.C. Parole Board to the U.S. Parole Commission; and creating a new federal community supervision agency, Court Services, for those convicted of crimes in D.C. courts. Appendix III includes details on certain changes made by the D.C. Revitalization Act, as amended. In many ways, the structure of the D.C. criminal justice system is unique. For example: Because of its status as the nation’s capital, over 30 law enforcement agencies other than MPDC have a significant presence in D.C. These include the U.S. Capitol Police, the Federal Protective Service, the U.S. Secret Service, and the U.S. Park Police. These and other agencies may make arrests for crimes committed within D.C., and MPDC assists these other law enforcement agencies by performing functions such as fingerprinting, photographing, and housing arrestees prior to their initial court appearance. D.C. has two prosecutors for local crimes. USAO prosecutes felonies, such as homicide or armed robbery, and “serious” misdemeanor violations committed by adults in D.C. Examples of the types of misdemeanors prosecuted by USAO include petty theft, assault, weapon offenses, and narcotics possession. Corporation Counsel, a D.C. agency, prosecutes “minor” misdemeanor violations, such as drinking in public or disorderly conduct, in addition to criminal traffic offenses and offenses committed by children. The Criminal Division of Superior Court is responsible for processing matters that are in violation of the D.C. Code and some related U.S. Code provisions, and municipal and traffic regulations. USAO may, at its discretion, prosecute certain eligible criminal offenses in either Superior Court or U.S. District Court for D.C. In practice, however, USAO brings a large majority of criminal prosecutions in Superior Court. According to a D.C. USAO official, D.C. is the only jurisdiction in the United States in which USAO may prosecute crimes in both local and federal courts. Only in D.C. does BOP assume responsibility for all felony offenders sentenced to prison terms in local (i.e., state, county, and city) courts. The D.C. criminal justice system also has different case flow processes for handling arrestees, depending on which agency is prosecuting the case and the status of the arrestee. 
Appendix IV contains a detailed description of the process for cases prosecuted by USAO. Appendix V contains a detailed description of the process for cases prosecuted by Corporation Counsel. Appendix VI contains a detailed description of the process for juvenile cases.

According to most officials we interviewed and our own analyses, an overarching problem within the D.C. criminal justice system has been the lack of coordination among all participating agencies. The system's different sources of funding, reporting structures, and organizational perspectives have complicated the task of coordinating systemwide activities, reaching agreement on the nature of systemwide problems, and taking a coordinated approach to addressing any problem areas that balances competing institutional interests. One reason for this is that the costs of coordinating activities and corrective actions may fall on one or more federally funded agencies, while any savings may accrue to one or more D.C.-funded agencies, or vice versa. In the absence of a single hierarchy and funding structure, agencies have generally acted in their own interests rather than in the interest of the system as a whole. Typically, federal and nonfederal criminal justice systems include the following stages: (1) arrest and booking, (2) charging, (3) initial court appearance, (4) release decision, (5) preliminary hearing, (6) indictment, (7) arraignment, (8) trial, (9) sentencing, and (10) correctional supervision. Most stages require the participation of several agencies, which need to coordinate their activities for the system to operate efficiently while also meeting the requirements of due process. That is, all involved agencies need to work together to ensure that their roles and operations mesh well with those of other agencies, and to identify any problems that emerge and decide how best to resolve them. Table 2 shows the stages in D.C.'s criminal justice system and the agencies that participate in each stage. As shown in the table, 7 of the 10 stages typically involve multiple agencies with different sources of funding, which results in different reporting structures and different oversight entities. For example, as many as six agencies—one D.C. (MPDC), three federal (USAO, U.S. Marshals Service, and Pretrial Services), and two federally funded D.C. agencies (Superior Court and Defender Service)—need to coordinate their activities before the arrestee's initial court appearance for a felony offense can occur. At the latter stages of the system, an offender's sentencing and correctional supervision may require the participation of as many as eight agencies—one D.C.-funded agency (DOC), five federal agencies (USAO, BOP, U.S. Marshals Service, U.S. Parole Commission, and Court Services), and two federally funded D.C. agencies (Superior Court and Defender Service). At any stage, the participation of other agencies might also be required. In addition, the reporting and funding structure for these participating agencies often differs. For example, USAO, the U.S. Marshals Service, BOP, and the U.S. Parole Commission ultimately report to the U.S. Attorney General and are funded by the appropriations subcommittee that funds the Department of Justice; MPDC and Corporation Counsel ultimately report to the D.C. Mayor; and Superior Court, Defender Service, Pretrial Services, and Court Services are independent of both D.C. and the U.S. Department of Justice, submit their budgets to Congress, and are funded by the appropriations subcommittee for D.C.
While most participating agencies recognize the need to coordinate their activities, agency officials and our analyses of agency data point to a lack of coordination as an overarching problem. Agencies have often been reluctant to coordinate activities where their own interests do not align with those of other agencies. One reason for this reluctance is that some actions that could help achieve greater systemwide efficiency might not benefit each agency equally, and in fact could benefit one agency at the expense of another. As a result, agreement on actions to benefit systemwide efficiency has been slow. Through discussions with D.C. criminal justice officials and our own analysis, we identified many examples where coordination lapses have hindered system operations. In some cases, the problem has been identified and a coordinated approach to resolving the problem is under way. In other cases, the problems persist. To better illustrate the coordination problem, in the following sections we highlight the issues surrounding four problem areas, as well as any corrective actions planned or under way. We also conducted a case study of coordination among the various agencies involved in case processing from arrest through initial court appearance because a number of D.C. criminal justice officials identified this process as problematic.

The scheduling of court cases has had adverse effects on several criminal justice agencies involved in case processing. As noted in table 2, MPDC, prosecutors, Defender Service, U.S. Marshals Service, Pretrial Services, Court Services, and Superior Court could be involved in the court-related processing of a case from the preliminary hearing to the trial and subsequent sentencing. Representatives from several of these different agencies are typically required to be present at court trials and hearings. Because specific court times are not established, individuals who are expected to appear in court are required to be present when the court first convenes in the morning. These individuals might be required to wait at the courthouse for some period of time for the case to be called, if (1) more trials or hearings are scheduled than can be conducted, (2) any one of the involved individuals is not present or prepared, or (3) the case is continued for any number of reasons. MPDC recorded that during calendar year 1999 its officers spent 118 full-time staff years in court-related activities such as preliminary hearings and trials. While MPDC officials stated that officers often spent many hours at court waiting for cases to be called, data were not available on the proportion of the 118 full-time staff years that were attributable to actual court time compared to the time spent waiting for cases to be called, including cases that were rescheduled. CJCC selected the Council for Court Excellence and the Justice Management Institute to conduct a detailed study of criminal justice resource management issues, with particular emphasis on court case processing and the utilization of police resources. In its March 2001 report, the Council for Court Excellence and the Justice Management Institute concluded that major changes were needed in the D.C. criminal justice caseflow system to improve the system's efficiency. Among other things, the report found inefficiencies and counterproductive policies at every stage in case processing.
The report also concluded that little use was being made of modern technology in the arrest, booking, papering, and court process that could improve system operations. The Council for Court Excellence and the Justice Management Institute determined that an unnecessarily large number of police officers were notified to appear for prosecutorial and court-related proceedings. The Council for Court Excellence and Justice Management Institute found that during September 2000 an average of 670 MPDC officers a day appeared for these proceedings, costing MPDC approximately $823,000 in overtime costs. The Council for Court Excellence and the Justice Management Institute identified priority areas for system improvements, such as redesigning court procedures in misdemeanor cases, improving the methods used to process cases from arrest through initial court appearance by automating the involved processes, and improving the systems used to notify police officers about court dates. Congress provided $1 million for fiscal year 2001 to implement some of the recommended case management initiatives, such as a differentiated case management system for misdemeanors and traffic offenses, the papering pilot project between MPDC and Corporation Counsel, and a mental health pilot treatment project for appropriate, nonviolent pretrial release defendants in coordination with the D.C. Commission on Mental Health Services. D.C.’s criminal justice system is complex, with more than 70 different information systems in use among the various participating agencies. These systems are not linked in a manner that permits timely and useful information sharing among disparate agencies. For example, as the information systems are currently maintained, it is very difficult to obtain data to determine the annual amount of time MPDC officers spend meeting with prosecutors about cases in which prosecutors eventually decide not to file charges against the arrestee. We determined that such an analysis would require data about: (1) MPDC arrests, (2) MPDC officer time and attendance, (3) charges filed by USAO or Corporation Counsel, and (4) Superior Court case dispositions. All of this information is currently maintained in separate systems with no reliable tracking number that could be used to link the information in each system for a specific case and no systematic exchange of information. This lack of shared information diminishes the effectiveness of the entire criminal justice system. For example, according to a CJCC official, there is no immediate way for an arresting officer to determine whether an arrestee is on parole, or for an arrestee’s community supervision officer to know that the parolee had been arrested. Such information could affect both the charging decision and the decision whether or not to release an arrestee from an MPDC holding cell. In 1999, CJCC attempted to address problems with D.C. criminal justice information systems by preparing, among other things, an Information Technology Interagency Agreement that was adopted by CJCC members. The agreement recognized the need for immediate improvement of information technology in the D.C. criminal justice system and established the Information Technology Advisory Committee (ITAC) to serve as the governing body for justice information system development. 
ITAC recognized that it was difficult for a single agency involved in the criminal justice system to access information systems maintained by other agencies, and pursued development of a system that would allow an agency to share information with all other criminal justice agencies, while maintaining control over its own system. ITAC devised the District of Columbia Justice Information System (JUSTIS). In July 2000, CJCC partnered with the D.C. Office of the Chief Technology Officer in contracting with a consulting firm to design JUSTIS based on modern dedicated intranet and Web browser technology. On August 31, 2000, the consulting firm delivered to CJCC and the D.C. Office of the Chief Technology Officer a draft version of a blueprint for a finalized JUSTIS. When completed, JUSTIS is to allow each agency to maintain its current information system, while allowing the agency to access selected data from other criminal justice agencies. Initially, Court Services, Pretrial Services, and MPDC will pilot JUSTIS by allowing a portion of each agency's data to be shared with other D.C. criminal justice system agencies. The initial operation is to be evaluated and changes can be made before the JUSTIS model is finalized. According to a CJCC official, after any necessary modifications are complete, ITAC plans to implement JUSTIS throughout the D.C. criminal justice system. While JUSTIS, if implemented, would allow D.C. criminal justice agencies to share data, it would not assure the quality of the data being shared. For example, if an arrestee's name or social security number were entered incorrectly into the system, the corresponding data would be inaccurate. In addition to the JUSTIS project, CJCC's Data Group, composed of representatives of nine agencies involved in the D.C. criminal justice system, has outlined a program to implement a unique fingerprint-supported tracking number in the system. A unique identifier will be assigned upon the initiation of case processing to ensure that all entries related to a particular case—from arrest through disposition of those charges, and corrections actions in response to those charges—are linked. The unique identifier can ensure that each case is properly linked to an individual's criminal history record. The goal is to store the information linked through the tracking number in a central D.C. criminal justice repository. According to a CJCC official, CJCC has begun an initiative in cooperation with other D.C. and federal criminal justice agencies to develop the legislative foundation for the long-term support of an integrated D.C. criminal justice information repository that meets Department of Justice standards and federal regulations.

Effective correctional supervision, which includes probation, incarceration, and post-prison parole or supervised release for convicted defendants, requires effective coordination among participating agencies. In D.C., the stage of the criminal justice system referred to as correctional supervision involves several agencies, including (1) Superior Court, which sentences convicted defendants and determines whether to revoke a person's release on community supervision; (2) Court Services, which monitors offenders on community supervision; (3) DOC, which primarily supervises misdemeanants sentenced to D.C. Jail or one of several halfway houses in D.C.; (4) BOP, which supervises felons incarcerated in federal prisons; (5) the U.S.
Parole Commission, which determines the prison release date and conditions of release for D.C. inmates eligible for parole; and (6) the U.S. Marshals Service, which transports prisoners. Gaps in coordination among agencies may lead to tragic consequences, such as those that occurred in the case of Leo Gonzales Wright, who committed two violent offenses while under the supervision of D.C.'s criminal justice system. Wright, who was paroled in 1993 after serving nearly 17 years of a 15-to-60-year sentence for armed robbery and second-degree murder, was arrested in May 1995 on automobile theft charges, which were later dismissed. In June 1995, Wright was arrested for possession with intent to distribute cocaine. However, he was released pending trial for the drug arrest, due in part to miscommunication among agencies. Wright subsequently committed two carjackings, murdering one of his victims. He was convicted in U.S. District Court for the District of Columbia and is currently serving a life-without-parole sentence in federal prison at Leavenworth, Kansas. The outcry over the Wright case resulted in two studies, including a comprehensive review of the processing of Wright's case prepared for the U.S. Attorney General by the Corrections Trustee in October 1999. The report included 24 recommendations to help ensure that instances similar to the Wright case do not recur. In July 2000, the Corrections Trustee issued a progress report on the implementation of recommendations from the October 1999 report. According to the Corrections Trustee, while not all recommendations in the October 1999 report have been fully implemented, progress has been made in addressing a number of them. For example, with funds provided by the Corrections Trustee, DOC has purchased a new jail-management information system for tracking inmates and implemented a new policy on escorted inmate trips. In addition, in January 2000, the Corrections Trustee began convening monthly meetings of an Interagency Detention Work Group, whose membership largely parallels that of CJCC. The group and its six subcommittees have focused on such issues as the convicted felon designation and transfer process, and parole and halfway house processing. In addition to the studies and the actions of the Corrections Trustee, CJCC and Court Services are addressing the monitoring and supervision of offenders. CJCC has begun to address the issues of halfway house management and programs that monitor offenders. Court Services is developing a system in which sanctions are imposed whenever individuals violate conditions of probation or parole.

Forensics is another area where lack of coordination can have adverse effects. D.C. does not have a comprehensive forensic laboratory to complete forensic analysis for use by police and prosecutors. Instead, MPDC currently uses other organizations such as the FBI, the Drug Enforcement Administration, the Bureau of Alcohol, Tobacco and Firearms, and a private laboratory to conduct much of its forensic work. MPDC performs some forensic functions such as crime scene response, firearms testing, and latent print analysis. The Office of the Chief Medical Examiner, a D.C. agency, performs autopsies and certain toxicological tests, such as the testing for the presence of drugs in the body. Coordination among agencies is particularly important because several organizations may be involved in handling and analyzing a piece of evidence.
For example, if MPDC finds a gun with a bloody latent fingerprint at a crime scene, the gun would typically need to be examined by both MPDC and the FBI. In order to complete the analysis, multiple forensic disciplines (e.g., DNA or firearm examiners) would need to examine the gun. If the various forensic tests were coordinated in a multidisciplinary approach, examiners would be able to obtain the maximum information from the evidence without the possibility of contaminating it. Such contamination could adversely affect the adjudication and successful resolution of a criminal investigation. In April 2000, the National Institute of Justice (NIJ) issued a report on the D.C. criminal justice system’s forensic capabilities. The report concluded that D.C. had limited forensic capacity and that limitations in MPDC prevented the effective collection, storage, and processing of crime scene evidence, which ultimately compromised the potential for successful resolution of cases. NIJ-identified deficiencies included, among other things: lengthy delays in processing evidence; ineffective communications in the collection, processing, and tracking of evidence from the crime scene; and ineffective communications between forensic case examiners and prosecutors. The NIJ report supported development of a centralized forensic laboratory that would be shared by MPDC and the D.C. Office of the Chief Medical Examiner. The report did not examine the costs to build a comprehensive forensic laboratory. We did not independently evaluate the costs and benefits of a comprehensive forensic laboratory. However, such a facility could potentially improve coordination by housing all forensic functions in one location, eliminating the need to transport evidence among multiple, dispersed locations. D.C.’s unique structure has also led to coordination problems in the initial stages of case processing that occur from the time of arrest through initial court appearance. We reviewed this process as a case study of coordination among D.C. criminal justice agencies and the difficulties of balancing competing institutional interests. As many as six agencies—one D.C. (MPDC), three federal (USAO, U.S. Marshals Service, and Pretrial Services), and two federally funded D.C. agencies (Superior Court and Defender Service)—need to coordinate before an arrested person’s initial court appearance for a felony offense can occur. As is true of all stages of D.C.’s criminal justice process, the actions of each participating agency affect the other participants in the process. Appendix IV includes a description of D.C.’s criminal justice process—from arrest through sentencing—for cases USAO prosecutes. Appendix VII contains a more detailed description of the process from arrest through initial court appearance. Here we discuss issues of police-prosecutor coordination during the charging process. Both USAO and Corporation Counsel require a police officer knowledgeable about the facts of an arrest to physically report to the prosecutor’s office for papering. In D.C., papering is the stage of case processing at which officers present their arrest reports to a prosecutor and explain the circumstances of the arrest. For each arrest, prosecutors determine whether the case should be prosecuted (“paper” the case) or not (“no-paper” the case). We focused our study on USAO cases because they are more numerous, typically more complicated, and require significantly more officer time. 
In 1998, USAO cases (felony and misdemeanor) constituted 64 percent of the criminal cases brought to Superior Court for disposition. In addition, the cases prosecuted by USAO accounted for 87 percent of the 47,810 police hours recorded for papering during 1999. As part of their duties, police officers in all jurisdictions generally must make appearances to provide information about cases at a number of criminal justice proceedings, including grand jury testimony, preliminary hearings, pretrial witness conferences, and trials. In addition to these appearances, USAO and Corporation Counsel prosecutors require that MPDC officers personally meet with prosecutors in order to make a charging decision for all cases. This requirement, particularly for misdemeanors, appears to be unusual. A 1997 Booz-Allen and Hamilton survey found that in 30 of 38 responding jurisdictions (51 were surveyed), police officers were not required to meet with prosecutors until court (i.e., trial), and in 3 cities officers were not required to appear in person until the preliminary hearing. Four cities required officers to meet with prosecutors on a case-dependent basis, and one city was in the process of changing its charging procedures. Corporation Counsel and MPDC have agreed to initiate a pilot project in March 2001 in which officers are not required to appear in person for 17 minor offenses. There is currently no similar pilot planned for misdemeanors prosecuted by USAO.

According to USAO officials, the current papering process is critical for USAO to make an initial charging decision correctly. Making an initial charging decision correctly benefits (1) USAO by allowing it to prosecute "good" cases more effectively; (2) arrestees by ensuring that individuals are not inappropriately charged with a crime; and (3) the criminal justice system by allowing USAO to weed out "poor" cases, which would otherwise languish in the system, consuming many agencies' resources. To ensure that it has all the information required for making informed charging decisions, USAO requires that officers appear in person to provide information about the arrest that may be missing from the arrest reports, inaccurate in the reports, or corollary to information recorded in the reports. The purpose of the paperwork that police present to USAO attorneys is to provide evidence that there is probable cause to believe that (1) a crime has been committed and (2) the person(s) arrested committed the crime. Police documentation could provide evidence that establishes probable cause, but prosecutors may decline to file charges because, for example, they do not believe the evidence would be sufficient to prove the arrestee's guilt "beyond a reasonable doubt," the standard required for conviction of a crime in court. The prosecutor's goal is to prevail in those cases selected for prosecution. Both USAO and MPDC officials said that the paperwork submitted to USAO for charging decisions has been of uneven quality. In the past, MPDC has responded to USAO concerns about the quality of arrest paperwork by conducting report writing training sessions for sergeants, requiring all officers to take annual in-service report writing training, and adopting the use of a new form to document officers involved in an arrest. Prosecutors—federal and nonfederal—generally have considerable discretion in selecting which cases they will prosecute.
Within its prosecutorial discretion, USAO could decide not to file charges for a number of reasons unrelated to the completeness and accuracy of the police paperwork submitted to prosecutors. According to data USAO provided to MPDC, USAO declined to file charges for 3,270 cases during the period from November 1999 through June 2000. Of the 3,270 cases in which USAO declined to file charges, USAO listed police-related problems, including paperwork problems, in 8 percent of the cases; problems with witness or victim cooperation or credibility in 20 percent; problems with evidence or proof in 70 percent; and a variety of other reasons in 2 percent of the cases.

Several problems exist with D.C.'s current method of processing cases from arrest through initial court appearance; these include the following.

Lack of automation inhibits process reform. In order to document an arrest, officers are required to complete several forms by hand or typewriter. Most forms contain a similar set of basic information about the incident, the arrestee, or the arresting officer (e.g., time of arrest, arrestee's name, and arresting officer's name). For example, to document a drug arrest in which narcotics were seized and property was removed from the arrestee, an officer would have to complete 10 forms, write or type his/her name 8 times, the arrestee's name and charges 5 times, and the arrestee's full address and social security number 4 times. This increases the potential for entry errors, resulting in inconsistent entries for the same information on different forms. Reducing the number of forms required would itself require the cooperation and coordination of MPDC, USAO, Corporation Counsel, Pretrial Services, Defender Service, and Superior Court. However, linking the existing forms in an automated system would permit an officer to type such information once, after which relevant fields in each report would automatically display the required duplicative information. USAO has noted that problems in the accuracy and completeness of the forms submitted by officers for charging decisions are one reason that its prosecutors require a personal meeting with officers to make a charging decision. More accurate paperwork could increase prosecutors' willingness to use the paperwork as a source of charging decisions, at least for a number of misdemeanors.

Paperwork delays may slow case processing. Paperwork problems, including physical movement of paperwork between various locations, may delay case processing. There is no electronic mechanism, such as a connected automated system, for transferring arrest paperwork from MPDC to the appropriate prosecuting office and then to the courts. Arrest paperwork may be misplaced as it is physically transported between agencies, and the initial court appearance may be delayed because some of the required paperwork is missing. Any resulting delays in the initial court appearance may increase the time that those detained spend in jail prior to their initial court appearance.

Required meetings with prosecutors keep officers off the street. Before a papering decision is made, both USAO and Corporation Counsel prosecutors require that an officer knowledgeable about the facts of the arrest meet with an attorney for papering. In addition, the papering process requires officers to spend time at the prosecutor's office performing clerical duties, such as making copies and assembling documents in file jackets. On-duty officers who make arrests between 3:00 p.m. and 7:00 a.m.
are required to meet with prosecutors the morning after an arrest to paper the case. All off-duty officers who appear for papering receive a minimum of 2 hours of compensatory time. During 1999, MPDC officers spent the equivalent of 23 full-time staff years in meetings with prosecutors for papering decisions. In other words, the time required for these meetings was the equivalent of taking 23 full-time officers off the streets. Based on an MPDC sworn officer's average salary, 23 full-time officers cost about $1,262,000. However, it would take more than 23 additional full-time officers to replace the duty hours devoted to the meetings with prosecutors. This is because (1) an officer is available for duty only a portion of the entire 2,080-hour work year, which includes vacation and training time, and (2) the data do not take into account that off-duty officers who appeared for less than 2 hours actually received 2 hours of compensatory time for their papering appearances. Although the principal participants in the charging decision are MPDC, USAO, and Corporation Counsel, reducing the hours that officers spend in meetings with prosecutors for charging decisions would require the cooperation and coordination of a number of D.C. criminal justice agencies.

During the past few years, criminal justice agencies in Philadelphia and Boston have each made coordinated efforts to improve their collective efficiency in processing arrestees. Both cities have turned to automation to improve the process from arrest to initial court appearance, and both involved the cooperation of multiple agencies in developing their automated systems. In Philadelphia, participants continue to meet weekly to review arrestee processing statistics and discuss possible improvements to the system. Neither Philadelphia nor Boston requires face-to-face meetings with prosecutors for processing most cases. Prosecutors principally rely on the automated system for the information needed to make charging decisions. Philadelphia employs a software system and videoconferencing to process arrestees from the point of arrest through initial court appearance. The system, which was developed through collaboration among Philadelphia criminal justice system agencies, allows the Philadelphia Police Department, the District Attorney's Charging Unit, Philadelphia Municipal Court, and Pretrial Services to send and receive information electronically. In addition, the system is able to track a defendant's physical location and length of time in the system. The software was developed in conjunction with Philadelphia Municipal Court's implementation of videoconferencing for the initial court appearances. The courtroom, which operates 24 hours a day, 365 days a year, uses video cameras, monitors, and software that make it possible to conduct live hearings, eliminating the need to transport prisoners to a central location. Defendants are held at one of eight booking stations throughout the city. Rather than have police officers meet face-to-face with charging attorneys to reach a charging decision, Philadelphia charging attorneys review police paperwork submitted electronically. If charging attorneys need additional information, they will contact police for clarification or missing information. As in Philadelphia, Boston has also turned to automation to improve the efficiency of its processing of arrestees.
In the spring of 2000, the Boston Police Department, Boston Municipal Court, and the Suffolk County District Attorney's Office implemented a pilot project designed to automate the charging process by electronically linking the three agencies. Boston's new system allows the Boston Police Department to electronically file applications for criminal complaints (i.e., charging documents) with Municipal Court for review and acceptance prior to the initial court appearance. The system contains all of the information that is typically available in paper form and allows users to track the status of complaints. In Boston, arresting officers are not required to attend face-to-face meetings to charge cases. The Philadelphia and Boston experiences illustrate the need for cooperation in crafting process changes and the benefits that could result from the greater use of automation. Of course, any specific D.C. changes would need to reflect D.C. statutory and other requirements governing case processing. For additional information on arrestee processing in Philadelphia and Boston, see appendix VII.

In the past decade, several attempts have been made to change the initial stages of case processing in D.C. These efforts—which were made by MPDC, Corporation Counsel, and USAO, in conjunction with consulting firms—involved projects in the areas of night papering, night court, and officerless papering. However, the involved agencies never reached agreement on all components of the projects, and each of the projects was ultimately suspended. The Chief of MPDC has publicly advocated the establishment of some type of arrangement for making charging decisions during the evening and/or night police shifts. Night papering and night court refer to the extension of papering meetings and court hearings into the evening and night hours. Night papering could permit officers on evening and night shifts to generally present their paperwork to prosecutors during their shifts. Currently, both USAO and Corporation Counsel are open to paper cases only during typical workday hours, that is, generally from about 8:00 a.m. to 5:00 p.m., Monday through Saturday. Night court refers to conducting certain court proceedings, such as initial court appearance, during a late evening or night shift. Night papering would require USAO and Corporation Counsel charging attorneys to work evening hours, and night court would involve a much broader commitment of Superior Court resources as well as the participation of other agencies (such as MPDC, USAO, Corporation Counsel, Pretrial Services, Defender Service, and U.S. Marshals Service). Officerless papering refers to a papering process in which prosecutors base their charging decisions principally upon the paperwork submitted by officers. In such circumstances, officers would not generally be required to appear in person before the prosecutor, and provisions could be made for the prosecutor to contact the officer to clarify issues, as needed. MPDC and Corporation Counsel have agreed to begin an officerless papering pilot program in March 2001 for 17 minor offenses prosecuted by Corporation Counsel. Until an electronic transmission mechanism is available, any officerless papering system would require arrest paperwork to be physically transported to prosecutors for review.
In the absence of an automated system for completing and transmitting the forms required for documenting arrests and making charging decisions, simple entry errors resulting from entering the same information multiple times can hamper the initial stages of case processing. Such errors must be remedied before charging decisions can be completed. USAO has cited such problems as one reason that officers should be required to meet face-to-face with prosecutors for papering decisions. To the extent that the police do not have a reliable process for reviewing and assuring the completeness and accuracy of the paperwork submitted to prosecutors, USAO is likely to continue to resist efforts to pilot or institute officerless papering. Even if these issues were to be successfully addressed, the distribution of costs among the participants in any revised system would still likely pose an obstacle to change. The costs of the current system of processing cases from arrest through initial court appearance are borne principally by MPDC—primarily a locally funded D.C. agency—not USAO or Superior Court, both of which are federally funded. Police officers, for example, currently perform a number of clerical functions associated with processing the paperwork for charging decisions. On the other hand, the costs of instituting night papering would be borne primarily by USAO, Corporation Counsel, and/or Superior Court, depending upon the approach taken, not MPDC. Indeed, this approach would likely reduce MPDC’s costs, principally the cost of compensatory time for evening duty officers who must meet with prosecutors during off-duty hours. The fact that costs—the current costs and those associated with potential changes—are not equally shared among the participating agencies makes it more difficult to reach consensus on whether and how to change the current system. Nevertheless, a program that incorporates the benefits of automation and some form of officerless papering for at least some USAO-prosecuted misdemeanors could increase the number of hours that MPDC officers are available for patrol. Depending upon the changes made, there could also be benefits in reduced time to process cases and reduced costs for transporting documents and arrestees among different locations. Yet, without agreement or a coordinated approach, these benefits will not be realized. Many criminal justice officials we spoke with noted that CJCC has improved coordination, cooperation, and dialogue among agencies. Although CJCC was created to respond to criminal justice issues that extend beyond the scope of any one agency, the organization has no formal authority over member agencies and its future is uncertain. CJCC was created by the agreement of its members in 1998 to meet the need for better coordination among all of D.C.’s criminal justice agencies, identify ideas of mutual interest, and mobilize resources to improve the D.C. criminal justice system. The D.C. Control Board funded CJCC through fiscal year 2000. Anticipating the Control Board’s suspension of activities in fiscal year 2002, Congress reduced the Board’s fiscal year 2001 funding. The Board subsequently decided not to fund CJCC in fiscal year 2001. CJCC’s mission is to address coordination difficulties among D.C. criminal justice agencies. Its funding and staffing have been modest—about $300,000 annually with four staff. CJCC has functioned as an independent entity whose members represent the major organizations within the D.C.
criminal justice system. CJCC members have typically met every 4 to 6 weeks and have formed numerous teams to address criminal justice issues, such as drugs, juvenile justice, halfway houses, information technology, and identification of arrestees. CJCC staff have coordinated meetings, provided data and statistics, summarized workgroup findings, performed best practices reviews, and provided other information and support to D.C. criminal justice agencies. Hence, CJCC has served as a centralized mechanism for collecting and disseminating information and statistics about D.C.’s criminal justice system. CJCC workgroups and teams have succeeded in developing proposals and project plans for several issues. For example, CJCC’s Positive Identification Workgroup has been working on a project to determine how to implement fingerprinting protocols. One issue being reviewed by the workgroup is whether to expand positive identification fingerprinting to all arrestees. CJCC, through its technology committee, has been successful in establishing a draft blueprint for data sharing in part because the initiatives have been funded through grants and in part because each participating agency potentially stands to benefit from the changes being considered. However, CJCC’s ability to effect cooperation among the various agencies has been limited because it has no formal authority or power over any member agency. In 1998 and 1999, for example, CJCC attempted to assist efforts by MPDC, USAO, and Corporation Counsel to reform the process for determining whether to charge arrestees. However, USAO would not agree to participate in the project unless certain conditions were met, and the project was ultimately suspended. In this case, the costs and potential benefits—financial and organizational—of the changes under consideration were perceived by one or more of the involved agencies to be unevenly distributed. Currently, CJCC faces an uncertain future. Its funding expired on September 30, 2000, and in that same month the Executive Director of CJCC was appointed as Deputy Mayor for Public Safety and Justice. CJCC’s one remaining staff member is funded by a grant. Although various CJCC working groups continue to meet, it is not known whether CJCC will continue to formally exist, and if it exists, how it will be funded, whether it will have staff, and whether it will remain independent or under the umbrella of another organization, such as the D.C. Mayor’s office. CJCC members expressed concern about the future of CJCC and where it could appropriately be housed without compromising its essential independence. According to some D.C. criminal justice officials, CJCC’s independence was a key characteristic that brought agencies to the table to discuss issues that affected more than one agency. Officials representing both USAO and Superior Court have stated that they would be reluctant to participate in a CJCC that was under the umbrella of the Mayor’s office because it was not clear that CJCC could be truly independent. CJCC has shown that it can provide a valuable forum for discussion of multiagency issues and serve as a catalyst for action. However, regarding the more contentious issues, such as papering, CJCC’s members have generally agreed to disagree. Proposed solutions have been unable to bridge differing institutional interests and the fact that the costs of proposed solutions were unevenly distributed among agency participants. 
CJCC has not been required to formally report on its activities, including areas of focus, successes, and areas of continuing discussion and disagreement. Consequently, its activities, achievements, and areas of disagreement have generally been known only to its participating agencies. Oversight agencies—such as congressional appropriations committees and the D.C. Council—have had little information on systemic issues affecting the D.C. criminal justice agencies under their jurisdiction. Currently, the agencies that make up the D.C. criminal justice system are involved in numerous initiatives to improve system operations. In response to our survey, as of November 2000, CJCC and other agencies reported 93 initiatives planned or under way for improving the D.C. criminal justice system. These initiatives cover a wide range of aspects of the criminal justice process, from arrest through correctional supervision. Table 3 summarizes the initiatives by subject area. Participating agencies have reported some success with these initiatives. An example of a corrections subject area initiative is the one led by Pretrial Services and DOC to review and reform halfway house operations. A goal of this initiative is to improve public safety by strengthening coordination, cooperation, and management among relevant criminal justice agencies. According to agency officials, the initiative has reduced the time to obtain a warrant for halfway house walkaways from 7 business days to 1 business day. As a second example, USAO has led an initiative to enhance crime prevention and law enforcement activity by allowing cooperation through agreements between federal agencies and MPDC. These agreements may span all areas of law enforcement, from equipment sharing to allowing federal law enforcement officers to patrol areas immediately surrounding the federal agencies’ jurisdictions. Since most of the 93 initiatives were ongoing at the time of our review, we were unable to evaluate their collective impact on systemwide operations. Although the initiatives show that coordinated efforts among agencies can improve the D.C. criminal justice system, we found 62 instances in which agencies did not agree on the goals, status, date started, participating agencies, or results of the initiatives other agencies reported. A number of these differences resulted from agencies’ suggested changes to a draft of appendix VIII. For example, while USAO noted that it is part of the Community Justice Partnerships initiative, Court Services did not identify USAO as a participating agency. For the Fingerprinting Arrestees initiative, Superior Court requested that we change a description of the goals provided by Pretrial Services. DOC noted that it had its own victims services initiative and that it intended to proceed alone on that initiative. A DOC official said DOC was unsure about the details of victims services initiatives by MPDC or the D.C. Mayor’s office. When the agencies responsible for an initiative cannot agree on such fundamental questions as who is taking part in the initiative or what its goals are, coordination is clearly lacking. This lack of coordination could reduce the effectiveness of these initiatives. In addition, CJCC has not played a role in coordinating these initiatives. Without some form of coordination, confusion about various aspects of initiatives is likely to continue, which could ultimately diminish their effectiveness.
Effective coordination of the many agencies that participate in a criminal justice system is key to overall success. Although any criminal justice system faces coordination challenges, the unique structure and funding of the D.C. criminal justice system, in which federal and D.C. jurisdictional boundaries and dollars are blended, create additional challenges. Almost every stage of D.C.’s criminal justice process presents such challenges, and participating agencies are sometimes reluctant to coordinate because the costs of implementing needed changes may fall on one or more federally funded agencies, while any savings accrue to one or more D.C. funded agencies, or vice versa. In the absence of a single hierarchy and funding structure, agencies have generally acted in their own interests rather than in the interest of the system as a whole. CJCC was established and staffed as an independent entity to improve systemwide coordination and cooperation. During its 2 ½-year existence, CJCC has served as a useful, independent discussion forum at a modest cost. It has had notable success in several areas where agencies perceived a common interest, such as developing technology that permits greater information sharing. It has been less successful in other areas, such as papering, where forging consensus on the need for and the parameters of change has been difficult. Without a requirement to report successes and areas of continuing discussion and disagreement to each agency’s funding source, CJCC’s activities, achievements, and areas of disagreement have generally been known only to its participating agencies. This has created little incentive to coordinate for the common good, and all too often agencies have simply “agreed to disagree” without taking action. Further, without a meaningful role in the establishment of multiagency initiatives, CJCC has been unable to ensure that criminal justice initiatives are designed to identify the potential for joint improvements, and that they are carefully coordinated among all affected agencies. These factors notwithstanding, on balance, CJCC has achieved some successes at a modest cost and served as a useful, independent forum for discussing issues that affect multiple agencies. CJCC’s future is uncertain because its funding source, the D.C. Control Board, is scheduled to disband and key CJCC officials have departed. This could leave D.C. without benefit of an independent entity for coordinating the activities of its unique criminal justice system. Funding CJCC through any participating agency diminishes its stature as an independent entity in the eyes of a number of CJCC’s member agencies, reducing their willingness to participate. We recommend that Congress consider:
Funding an independent CJCC—with its own director and staff—to help coordinate the operations of the D.C. criminal justice system. Congressional funding ensures that CJCC will retain its identity as an independent body with no formal organizational or funding link to any of its participating members.
Requiring CJCC to report annually to Congress, the Attorney General, and the D.C. Mayor on its activities, achievements, and issues not yet resolved and why.
Requiring that all D.C. criminal justice agencies report multiagency initiatives to CJCC, which would serve as a clearinghouse for criminal justice initiatives and highlight for CJCC members those initiatives that warrant further discussion and coordination.
This reporting requirement could help improve interagency coordination, promote the adoption of common goals, and help reduce redundant efforts. We requested comments on our draft report from 15 agencies, and received written comments on our conclusions and recommendations from 8 of these agencies. These eight agencies supported the concept of CJCC and agreed that CJCC has been a valuable tool for improving coordination among D.C. criminal justice agencies. These agencies generally supported our first recommendation that Congress provide funding to continue the work of an independent CJCC. However, these agencies were generally silent on our second recommendation, which would require CJCC to report annually to Congress, the Attorney General, and the D.C. Mayor. The only comment was from the D.C. Mayor’s office, which noted that the reporting requirement would increase public scrutiny of D.C. criminal justice agencies. With respect to our third recommendation, several agencies expressed concern about having CJCC review D.C. criminal justice initiatives involving more than one agency. The U.S. Attorney for D.C., for example, suggested that CJCC’s review and coordination role regarding initiatives should be limited to those that have a significant impact on multiple agencies in the criminal justice system. Otherwise, the recommendation, if implemented, could hamper each agency’s ability to implement policies and practices within its appropriate sphere of activity. Along these lines, the Director of Pretrial Services stated that it was important that every voice be heard before decisions are made, but it should not be required that all agencies have common goals for every initiative. We continue to believe that CJCC needs to have a role in resolving the types of coordination problems we found in our review. Our intent in making the recommendation was to better ensure that multiagency criminal justice initiatives are designed to maximize their potential for joint improvements, resolve apparent duplication and disagreement regarding roles and responsibilities, and promote the adoption of common goals and measures of success. Having agencies report multiagency initiatives to CJCC would also allow CJCC to maintain a comprehensive database of ongoing initiatives. To limit CJCC’s role to initiatives that have a “significant impact” invites debate on what is and is not significant. Moreover, the fact that agencies did not agree on something as fundamental as the agency with lead responsibility for a number of initiatives suggests a gap in communication and understanding of participating agency roles and responsibilities. We have modified the wording of our recommendation to better reflect our intent that CJCC serve as a clearinghouse for multiagency initiatives, highlighting for CJCC members issues that warrant further discussion and coordination. The U.S. Attorney for D.C. took issue with our characterization of night papering and officerless papering. She noted that night papering had been tried in the late 1980s and that the effort was cancelled because too few cases were presented for papering. The night papering project she referred to was limited to the evening hours before 10 p.m., and given the time required to complete the paperwork needed to make a charging decision, the number of evening arrests that this project could consider for papering was limited. With respect to officerless papering, the U.S. 
Attorney noted that her office has offered to work with MPDC on a pilot project for officerless papering. However, she cited practical, nonbudgetary reasons that officerless papering had not yet been piloted, and said that USAO and MPDC have been unable to agree on the steps required to initiate it. For its part, MPDC said it appreciated our discussion of the papering process with USAO. In addition to the costs of the process in terms of officer time, the Chief of MPDC said that papering may deter officers from making lower-level arrests. We did not specifically advocate the adoption of any of the papering processes used in other locations. Evidence exists to support or rebut the arguments made by various agencies regarding the merits and drawbacks of the current process. The debate concerning the papering issue has persisted for over a decade, and previous efforts to resolve it have been limited to discussions among D.C. criminal justice system agencies and have been unsuccessful. This is exactly the type of issue that could benefit from a broader perspective and increased visibility, and it underscores the need for CJCC to shed light on this and other complex issues in its annual reports to Congress, the Attorney General, and the D.C. Mayor. The high-level visibility of these reports could further impel agencies to seek resolution of contentious issues affecting the operation of the D.C. criminal justice system. Ten of the 15 agencies provided additional information and technical suggestions, which we evaluated and incorporated as appropriate. The eight letters that contain comments other than clarifications and technical suggestions are printed in appendixes IX through XVI. We are sending copies of this report to the Honorable John Ashcroft, Attorney General; the Honorable Rufus King, III, Chief Judge of the D.C. Superior Court; the Honorable Anthony A. Williams, Mayor of the District of Columbia; the Honorable Linda W. Cropp, Chair of the D.C. Council; Dr. Alice M. Rivlin, Chair of the D.C. Control Board; Charles H. Ramsey, Chief, MPDC; Robert R. Rigsby, D.C. Corporation Counsel; Dr. Jonathan L. Arden, Chief Medical Examiner of the District of Columbia; Wilma A. Lewis, U.S. Attorney for D.C.; Todd W. Dillard, the U.S. Marshal of the Superior Court for the District of Columbia; Cynthia E. Jones, Director of Defender Service; Jasper E. Ormond, Interim Director of Court Services; Susan W. Shaffer, Director of Pretrial Services; John L. Clark, Corrections Trustee; and Odie Washington, Director of DOC. Copies will also be made available to others upon request. If you or your staff have any questions concerning this report, please contact me or William Jenkins Jr. at (202) 512-8777. Key contributors to this assignment were Mary Hall, Geoffrey Hamilton, Mary Catherine Hult, Donald Jack, Michael Little, and Mark Tremba. A provision of the District of Columbia (D.C.) Appropriations Act (P.L. 106-113) for fiscal year 2000 directed us to conduct a study of the D.C. criminal justice system. To identify the agencies involved in D.C.’s criminal justice system and their roles and responsibilities, we reviewed relevant legislation, such as the National Capital Revitalization and Self-Government Improvement Act of 1997 (D.C. Revitalization Act), which altered some of the responsibilities and/or funding of D.C. criminal justice agencies. We also reviewed previous studies of D.C.
criminal justice agencies and their operations, agency documents on policies and procedures, and relevant agency budgets for fiscal year 2000 and requests for fiscal year 2001. In addition, we interviewed officials in the following agencies: the U.S. Attorney’s Office (USAO) for the District of Columbia, the Metropolitan Police Department of the District of Columbia (MPDC), Superior Court of the District of Columbia (Superior Court), Public Defender Service for the District of Columbia (Defender Service), District of Columbia Pretrial Services Agency (Pretrial Services), Court Services and Offender Supervision Agency for the District of Columbia (Court Services), Office of the Corporation Counsel for the District of Columbia (Corporation Counsel), Office of the District of Columbia Chief Medical Examiner (Medical Examiner), U.S. Marshals Service, Office of the Corrections Trustee for the District of Columbia, and the Criminal Justice Coordinating Council for the District of Columbia (CJCC). We also interviewed officials from the Council for Court Excellence and the Justice Management Institute. We generally asked each of the agency officials interviewed to identify (1) any prior studies of one or more parts of D.C.’s criminal justice system and (2) the most important problems facing the system. On the basis of our review of available documents and these interviews, we identified five major problem areas as follows:
The process used by USAO and Corporation Counsel to determine whether those arrested would be formally charged with a crime (which participants referred to as the “papering” process) was perceived to be problematic. Issues included the completeness and accuracy of police paperwork submitted to prosecutors and compensatory time off for police officers, who were required to meet with prosecutors in person to discuss their paperwork and the circumstances of the arrest.
The process used to schedule or “calendar” cases in Superior Court was perceived to be inefficient and resulted in long waiting times for police officers, witnesses, attorneys, and others scheduled to appear in court. MPDC officials said that the process resulted in significant police overtime costs.
The juvenile justice system was perceived to have numerous, complex, interrelated problems.
A principal issue in the process for collecting and evaluating forensic evidence, such as clothing fibers, tissue, and hair samples, was whether a central forensic facility would significantly enhance D.C.’s forensic capacities. Currently, forensic tests are performed by federal and D.C. agencies.
Officials told us that existing criminal justice databases did not necessarily have accurate, reliable data and that their structure impeded the sharing of data among criminal justice agencies.
On the basis of our document reviews and interviews, each of these problems appeared to affect multiple D.C. criminal justice agencies, although not always the same ones. In our initial survey work, we also found that a number of initiatives had been proposed or were under way to address a variety of D.C. criminal justice issues. After discussing our initial findings with the committees of jurisdiction, we focused on the following objectives: assess how the structure of the D.C. criminal justice system has affected coordination; assess the mechanisms that exist to coordinate the activities of the system; and describe current initiatives by federal and D.C. agencies for improving the operation of the D.C. criminal justice system.
We did not focus on court case scheduling because CJCC had selected the Council for Court Excellence and Justice Management Institute to conduct a study of this issue. The problems officials noted in the juvenile justice system extended beyond criminal justice agencies (e.g., counseling and/or residential care), and we limited the scope of our review to D.C. criminal justice agencies. We also gathered available data on D.C.’s current forensic capabilities and practices and the potential need for a central, comprehensive forensic laboratory in D.C. We reviewed a recent National Institute of Justice (NIJ) report on D.C.’s forensic capabilities and interviewed an NIJ official and a local forensic official who served on the NIJ assessment team. However, we were unable to obtain data that could be used to document the effects of the District’s current forensic practices that would be remedied by a central, comprehensive forensic facility. Because CJCC had a major initiative under way to address the automated data systems within the D.C. criminal justice system, we limited our work to describing this initiative and the problems it is designed to address. To assess how the structure of the D.C. criminal justice system affected coordination, we interviewed officials in each of the offices mentioned above and reviewed available written policies, procedures, and descriptions of the case flow process. We also reviewed the provisions of the D.C. Revitalization Act, which federalized some D.C. criminal justice agencies, such as Defender Service and Court Services. From these sources we drafted three separate case flow descriptions: (1) cases prosecuted by USAO, (2) cases prosecuted by Corporation Counsel, and (3) juvenile cases, and we discussed the responsibilities of each agency in each process. These drafts were reviewed by the participating agencies for accuracy and amended as appropriate. We reviewed the charging process with USAO, Corporation Counsel, MPDC, and Pretrial Services officials—the principal participants in the process. We reviewed available policies and procedures at the participating agencies and observed the process at each agency, from the initial paperwork completed by police officers following an arrest through the decision by USAO or Corporation Counsel to file or not file formal charges. Because of the unique structure of the D.C. criminal justice system, there are no jurisdictions whose criminal justice structures and processes are comparable to those of D.C. However, we reviewed prior studies of the process and interviewed officials in Philadelphia and Boston, two large East Coast cities that have recently revised their methods of processing cases from arrest through initial court appearance. Both Philadelphia and Boston have a large annual number of misdemeanor and felony arrests, but their papering systems are more automated than D.C.’s process. We also reviewed the available monthly reports (November 1999 through June 2000) that USAO provided to MPDC on its papering decisions. These reports identified, by case, the reasons assistant U.S. Attorneys declined to file charges against arrestees. We did not verify the data in these monthly reports. Using spreadsheets, we analyzed these reports to identify any variances or patterns by offense and police district. We discussed the results of this analysis with USAO and MPDC officials.
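To illustrate the kind of tabulation involved, the following Python sketch, using the pandas library, cross-tabulates hypothetical declination records by offense and police district; the column names and values are invented for illustration and do not reflect the actual layout of USAO’s monthly reports.

    import pandas as pd

    # Hypothetical records patterned on the kind of information described in
    # the USAO monthly reports: one row per declined case, with the offense,
    # the police district, and the reason charges were not filed.
    records = pd.DataFrame({
        "offense": ["simple assault", "drug possession", "simple assault",
                    "theft", "drug possession"],
        "police_district": ["1st", "3rd", "6th", "1st", "3rd"],
        "declination_reason": ["insufficient evidence", "search issue",
                               "witness unavailable", "insufficient evidence",
                               "search issue"],
    })

    # Cross-tabulate declined cases by offense and district to look for
    # patterns, and tally the stated reasons for declination.
    by_offense_and_district = pd.crosstab(records["offense"],
                                          records["police_district"])
    reason_counts = records["declination_reason"].value_counts()

    print(by_offense_and_district)
    print(reason_counts)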
With MPDC and Corporation Counsel officials, we also discussed an initiative designed to reduce the amount of police officer time expended on the papering process for criminal misdemeanor cases prosecuted by Corporation Counsel. Available data could not be used to analyze officer time expended on “papered” versus “nonpapered” cases by offense. To identify existing mechanisms to coordinate the multiagency activities of the D.C. criminal justice system, we interviewed D.C. criminal justice agency officials and reviewed agencies’ organizational charts and policies and procedures. To identify initiatives planned or under way for improving the operation of D.C.’s criminal justice system, we contacted D.C. criminal justice agencies and the Department of Justice agencies that provided criminal justice assistance to D.C. (e.g., forensic testing for drugs). We asked each agency to describe (1) its initiatives, if any; (2) the purpose of each initiative; (3) the status of each initiative; and (4) any plans to evaluate the results of each initiative. We summarized these initiatives, obtained clarification or additional information where needed, provided our summary to each agency for its review and comment, and incorporated their changes as appropriate. We did our work between October 1999 and December 2000 in Washington, D.C.; Philadelphia; and Boston, in accordance with generally accepted government auditing standards. The D.C. criminal justice system involves a number of D.C. agencies, federal agencies, and private organizations. These agencies and organizations are funded through congressionally appropriated federal and local funds and/or private sources. This appendix presents basic descriptive information on these agencies and organizations. Through discussions with the Executive Director of the District of Columbia Criminal Justice Coordinating Council (CJCC) and review of CJCC documentation, we identified various District, federal, and private organizations associated with the D.C. criminal justice system, either directly or indirectly. We met with representatives from most of these organizations and obtained their views and comments on the operations of the D.C. criminal justice system and where they saw need for improvement. Responsibilities and funding for some of these agencies were affected by the National Capital Revitalization and Self-Government Improvement Act of 1997 (D.C. Revitalization Act). Table 4 identifies the sources of funding for the various D.C. and federal agencies and private organizations that we identified as being associated with the D.C. criminal justice system. The mission of MPDC is to prevent crime and the fear of crime, as it works with others to build safe and healthy communities throughout the District of Columbia. The Office of the Chief of Police, the Operations Division, and the Office of Corporate Support are responsible for organizing MPDC and deploying resources to achieve MPDC’s mission and goals. According to MPDC documents, these responsibilities include establishing professional standards for members that ensure a higher level of integrity and ethical conduct than is generally expected of others. The Chief and his staff are responsible for ensuring that all operations of MPDC are oriented toward serving the needs of a diverse community, as well as the federal interests associated with Washington’s unique role as the Nation’s Capital. The Operations Division oversees all operations in the MPDC.
Headed by the Executive Assistant Chief, Operations is organized into three Regional Operations Commands (ROC); a Special Services Command; an Operations Command; Executive Protection; and Central Crime Analysis. The ROCs encompass the seven D.C. police districts. ROC North includes the 2nd and 4th districts; ROC Central includes the 1st, 3rd, and 5th districts; and ROC East includes the 6th and 7th districts. Each ROC is commanded by a regional assistant chief whose office is located within the community being served. The 7 districts are further subdivided into 83 Police Service Areas, which are individual neighborhoods within the districts. The Special Services Group includes specialized units that have unique training, resource, and operational needs, such as Emergency Response, Major Narcotics, Special Operations, and Major Crash Investigations. The Operations Command was established to ensure a 24-hour-a-day departmentwide command presence to respond to and oversee major incidents that require MPDC presence at any location in the city, at any time of the day. The Office of Corporate Support centralizes critical business functions of MPDC with the goal of streamlining the delivery of services in three main areas: human services, business services, and information technology. Corporate Support is headed by a civilian senior executive director, and its units oversee most of the administrative and technical functions that are critical to MPDC’s success. The mission of the Office of the Corporation Counsel is to fairly prosecute those who violate the law, defend or initiate civil action, protect the rights of citizens of D.C., provide expert legal advice and counsel, and review and advise on commercial transactions. Due to D.C.’s unique status, which involves aspects of state, county, and local government functions, Corporation Counsel provides a variety of legal services, including matters typically handled by State Attorneys General, District or State’s Attorneys, and City or County Attorneys. To accomplish its varied responsibilities, the Corporation Counsel’s work is carried out by six major clusters: (1) the Office of Public Protection and Enforcement, (2) the Office of Government Operations, (3) the Office of Management and Operations, (4) the Office of Torts and Equity, (5) the Commercial Division, and (6) the Appellate Division. The Office of Public Protection and Enforcement handles, among other things, the prosecution of “minor” adult misdemeanor offenses, all criminal traffic offenses, and all misdemeanor and felony offenses committed by children. The mission of DOC is to ensure public safety and uphold the public’s trust by providing for the safe and secure confinement of pretrial detainees and sentenced inmates. DOC carries out its mission through four major organizational divisions: (1) Office of the Director, (2) Deputy Director for Operations, (3) Deputy Director for Institutions, and (4) Deputy Director for Administration and Program Services. DOC is transforming itself from operating like a state/county prison system to operating like a city/county jail system in accordance with the D.C. Revitalization Act. This legislation calls for the transfer of all felons sentenced pursuant to the D.C. Code and incarcerated at the Lorton Correctional Complex to Federal Bureau of Prisons (BOP) facilities by December 31, 2001.
DOC will remain responsible for inmates incarcerated at Lorton until December 31, 2001, or the date the last inmate housed there has been transferred to BOP. The D.C. Revitalization Act also established the District of Columbia Corrections Information Council to provide BOP with advice and information on matters affecting D.C. sentenced felons. However, the Council was never actually funded or established. During the transition period, an independent Office of the Corrections Trustee was established and a Corrections Trustee selected to oversee financial operations of the DOC until BOP has incarcerated all felons sentenced under the D.C. Code. The Office of the Corrections Trustee is charged with a broad mandate for financial oversight of the operations of the DOC and has been the primary source of DOC’s funding since October 1997. For fiscal year 2001, Congress provided the Corrections Trustee funds to help implement improvements and efficiencies in the disposition of D.C. criminal cases. The Corrections Trustee is required to report annually to Congress on behalf of D.C. and federal agencies on the progress of the transfer of convicted felons from D.C.’s Lorton correctional complex to BOP custody. The Corrections Trustee also prepared the Leo Gonzalez Wright report commissioned by the Attorney General for filing with the U.S. District Court for the D.C. According to the Corrections Trustee, as a result of the report, the Deputy Attorney General requested that the Corrections Trustee coordinate implementation of the report’s recommendations with all affected federal and D.C. agencies. In January 2000, the Corrections Trustee organized an interagency committee of 15 federal and D.C. criminal justice agencies to improve the coordination and logistical planning of various detention-related processes. The committee’s membership largely overlaps that of the CJCC. The District of Columbia Office of the Chief Medical Examiner is located under the D.C. Department of Health. The mission of the Office of the Chief Medical Examiner is to conduct and report on the medical investigation of all known or suspected homicides, suicides, accidental deaths, medically unattended deaths, and deaths that might constitute a threat to the public health and safety. The Office of the Chief Medical Examiner is to be staffed and on call 24 hours a day to determine a cause of death, a manner of death, and investigate the circumstances surrounding all deaths that occur within the District of Columbia. The mission of Defender Service is to provide quality legal service to indigent individuals in order to get them completely and permanently out of the criminal justice system. The agency carries out its mission through the Legal Services program and the Criminal Justice Act (CJA) office. As noted in its fiscal year 2001 budget submission, Defender Service attorneys handle a number of cases charging the most serious offenses. Additionally, Defender Service attorneys are trained to be criminal law “experts,” who are forbidden by statute from engaging in the private practice of law. Defender Service provides support in the form of training, consultation, and legal reference services to members of the local bar appointed as counsel in criminal, juvenile, and mental health cases involving indigent individuals. Defender Service is governed by an 11-member Board of Trustees appointed by a panel consisting of the Chief Judge of the U.S. District Court for the District of Columbia, the Chief Judge of the D.C. 
Court of Appeals, the Chief Judge of the Superior Court, and the Mayor. The Board appoints the Director and Deputy Director of Defender Service. The mission of the Superior Court of the District of Columbia is to provide fair, accessible, timely, and effective justice for all who appear before it or use its services. The D.C. Court of Appeals, Superior Court, and the Court System constitute the judicial branch of the D.C. government. Superior Court, the largest of these entities, is a trial court of general jurisdiction with responsibility for local trial litigation functions, including civil (civil actions, landlord tenant, and small claims), criminal (felonies, misdemeanors, traffic, city ordinance violations, and criminal tax cases), family (juvenile, domestic relations, neglect and abuse, adoption, and child support), probate, and tax matters. The criminal division of the Superior Court handles the vast majority of adult criminal cases prosecuted by both the USAO and the Corporation Counsel in the District of Columbia. The financing of the District of Columbia Courts was transferred to the federal government by the D.C. Revitalization Act. The CJCC is an expansion of the Memorandum of Understanding (MOU) Partners that was established in December 1996 to oversee a comprehensive reform of the MPDC. During this reform process, as MPDC demonstrated significant and lasting progress in its operations, the MOU Partners began to informally expand its membership and agenda to address more comprehensive, systemwide criminal justice issues. CJCC was formally organized on May 28, 1998. According to a CJCC official, its creation was an acknowledgement by member organizations that reducing crime and improving the quality of life in the city is the responsibility of all member agencies and other city resources. The mission of CJCC is to serve as the forum for identifying issues and their solutions, proposing actions, and facilitating cooperation that will improve public safety and the related criminal and juvenile justice services for D.C. residents, visitors, victims, and offenders. The CJCC draws upon local and federal agencies and individuals to develop recommendations and strategies for accomplishing this mission. Its guiding principles are creative collaboration, community involvement, and effective resource utilization. It seeks to develop targeted funding strategies and comprehensive management information through integrated information technology systems and social science research to reach its goal. The CJCC is comprised of 18 members. Members include the Mayor; Deputy Mayor for Public Safety; Chair, City Council; Chair, City Council Committee on the Judiciary; Corrections Trustee; Acting Director, Court Services; D.C. Corporation Counsel; Chief Judge, Superior Court; U.S. Attorney for the District of Columbia; Chief of Police, MPDC; Chairperson and another member, District of Columbia Financial Responsibility and Management Assistance Authority; Director, Youth Services Administration, Department of Human Services; Director, Pretrial Services; Director, Defender Service; Director, DOC; Director, BOP; and Chair, U.S. Parole Commission. The U.S. Attorney’s Office for the District of Columbia is the largest of the 94 U.S. Attorney Offices and has unique federal and local responsibilities. It has over 350 Assistant U.S. Attorneys and over 350 support personnel. 
It is responsible for the prosecution of federal crimes and all serious local crimes—felonies and certain misdemeanors—committed by adults in the District of Columbia. It also represents the United States and its departments and agencies in civil proceedings in federal court in the District of Columbia. The mission of this office is to enforce the criminal laws of the United States and the District of Columbia, represent the interests of the United States in civil litigation, and respond to the public safety needs of the community. The office is organized into a number of separate divisions and sections. These include the following:
The Superior Court Division, which includes six sections: (1) Misdemeanors, (2) Grand Jury/Intake, (3) General Felonies, (4) Sex Offenses and Domestic Violence, (5) Homicide, and (6) Community Prosecution.
The U.S. District Court Criminal Division, which includes five sections: (1) Narcotics, (2) Economic Crimes, (3) Transnational/Major Crimes, (4) Public Corruption/Government Fraud, and (5) Gang Prosecution and Intelligence.
The Civil Division, which represents the United States and its departments and agencies at both the trial and appellate levels in civil actions filed in this jurisdiction.
The Appellate Division, which is responsible for handling all appeals from criminal convictions in the D.C. Court of Appeals and the U.S. Court of Appeals for the District of Columbia Circuit.
The Special Proceedings Section, which handles postconviction prisoner petitions, release hearings, expungement hearings, and other specialized proceedings.
The Administrative Division, which provides policy and procedural direction and central services support for the office in all areas of management and administration.
The mission of the U.S. Marshals Service is to protect the federal courts and ensure the effective operation of the judicial system. Within D.C., the U.S. Marshals Superior Court District of Columbia office was created to provide the D.C. courts with the same services that all other U.S. Marshals Service districts provide to U.S. District Courts. Documentation provided by the U.S. Marshals Superior Court District of Columbia states that the office also serves as the de facto sheriff’s office for the District of Columbia. Among the duties performed by the U.S. Marshals in the District are handling prisoners appearing before the D.C. Superior Court and grand jury presentations, providing security for court officers, and serving eviction notices and judicial warrants of all types on D.C. residents. The mission of BOP is to protect society by confining offenders in controlled environments of prisons and community-based facilities that are safe, humane, and appropriately secure, and which provide work and other self-improvement opportunities to assist offenders in becoming law-abiding citizens. The D.C. Revitalization Act requires the closing of D.C.’s correctional facilities in Lorton by the end of 2001, and in general, the transfer of felon inmates sentenced pursuant to the D.C. Code and residing at Lorton to penal or correctional facilities operated or contracted for by BOP. After Lorton is closed, DOC is responsible for, among other things, operating the D.C. Jail and overseeing the operation of the Correctional Treatment Facility. BOP is responsible for the D.C. sentenced felon inmate population. The mission of the U.S.
Parole Commission is to ensure the public safety by exercising its authority regarding the release and supervision of criminal offenders under its jurisdiction in a way that promotes justice. The Parole Commission makes decisions to grant or deny parole to federal and D.C. Code prisoners serving sentences of more than 1 year, sets conditions of parole, supervises parolees and mandatory releasees, recommits parolees in the event of violations of the conditions of supervision, and determines the termination of supervision pursuant to the Parole Commission and Reorganization Act of 1976. In August 1998, the Parole Commission became responsible for, among other things, granting and denying parole to District of Columbia inmates in D.C. and BOP prisons. Under the D.C. Revitalization Act, the Parole Commission is required to exercise its parole authority over D.C. felony offenders pursuant to D.C.’s parole laws and regulations that may be different from federal parole laws and regulations. The D.C. Revitalization Act also, in general, gave the Parole Commission the authority to amend or supplement any regulations interpreting or implementing D.C. parole laws with respect to felons. Court Services (formerly known as Offender Supervision, Defender and Courts Services) was established by the D.C. Revitalization Act, as amended, and assumed responsibility for D.C. government functions related to pretrial services, parole, probation, and supervised release. Court Services shall carry out its responsibilities on behalf of the court or agency having jurisdiction over the offender being supervised. The legislation originally had Pretrial Services and the Defender Service both functioning as independent entities within Court Services. The District of Columbia Courts and Justice Technical Corrections Act of 1998 (Public Law 105-274, Oct. 21, 1998), removed the Public Defender Service from Court Services jurisdiction. Under the terms of the D.C. Revitalization Act, Court Services is federally funded and officially assumed its duties as a federal agency on August 5, 2000. The Director of Court Services is nominated by the President of the United States by and with the advice and consent of the U.S. Senate. The mission of Court Services is to increase public safety, prevent crime, reduce recidivism, and support the fair administration of justice in close collaboration with the community. Court Services is responsible for supervising individuals within the community who are on probation, parole, or supervised release. Pretrial Services functions as an independent entity within Court Services. The mission of Pretrial Services is to assist the trial and appellate levels of both the federal and local courts in determining eligibility for pretrial release by providing background information on the majority of arrestees. Pretrial Services is also responsible for supervising conditions of pretrial release and reporting on compliance or lack thereof to the court. According to a Pretrial Services official, Pretrial Services operates a forensic laboratory that provides drug testing for persons on pretrial release, probation, parole, or supervised release. Pretrial Services is advised by an Executive Committee that includes the four chief judges of the local and federal trial and appellate courts, the U.S. Attorney for the District of Columbia, the Director of Defender Service, and the acting Director of Court Services. 
The Director, Court Services, shall submit, on behalf of the Pretrial Services Agency and with the approval of its Director, an annual appropriation request to the Office of Management and Budget (OMB). The Director of Pretrial Services is appointed by the Executive Committee. The private organizations discussed below are studying D.C. caseflow management; a major goal of the study is to increase police presence in the community by reducing the time police officers now spend in court and prosecutors’ offices. The Council for Court Excellence is a nonprofit, nonpartisan civic organization founded in Washington, D.C., in 1982. The Council works to improve the administration of justice in the local and federal courts and related agencies in the Washington metropolitan area and in the nation by identifying and promoting specific court reforms, improving public access to justice, and increasing public understanding and support of our justice system. The Justice Management Institute is a nonprofit organization that provides services to courts and other justice system agencies throughout the United States and abroad. Its mission is to improve the overall administration of justice by helping courts and other justice system institutions and agencies achieve excellence in leadership, operations, management, and services. Its activities are concentrated in four main areas: (1) technical assistance, (2) education and training, (3) research, and (4) information dissemination. The Justice Management Institute is currently assisting the Council for Court Excellence on a broad-scale criminal caseflow management project that, among other things, is looking at the time MPDC officers spend in court and prosecutors’ offices. The National Capital Revitalization and Self-Government Improvement Act of 1997 (D.C. Revitalization Act) made changes to several D.C. programs, including programs in the D.C. criminal justice system. The criminal justice programs affected by the legislation include (1) corrections, (2) sentencing, (3) offender supervision and parole, (4) District of Columbia Courts, and (5) Pretrial Services Agency (Pretrial Services) and Public Defender Service (Defender Service). The District of Columbia Courts and Justice Technical Corrections Act of 1998 amended various sections of the D.C. Revitalization Act relating to D.C. criminal justice system programs. The following summary reflects certain changes to the statutory framework of the D.C. criminal justice system as a result of the D.C. Revitalization Act, as amended. The D.C. Revitalization Act provided for the closure of the D.C. Lorton Correctional Complex and the transfer of sentenced felons to the federal Bureau of Prisons (BOP). The legislation required that, by October 1, 2001, all felons sentenced to incarceration pursuant to (1) the “Truth-In-Sentencing” requirements issued by the D.C. Truth in Sentencing Commission or (2) the D.C. Code, be designated by BOP to a facility operated by or contracted for by BOP. The legislation also required that the Lorton Correctional Complex be closed by December 31, 2001, and that the felony population sentenced pursuant to the D.C. Code residing at Lorton be transferred to a facility operated by or contracted for by BOP. BOP is to acquire land and construct new facilities at BOP-selected sites or contract for appropriate bed space. The D.C.
Department of Corrections (DOC) is, in general, to remain responsible for those inmates housed at Lorton until December 31, 2001, or until the last inmate has been transferred to BOP, whichever is earlier. After this date, the D.C. DOC will no longer be responsible for various functions related to housing a felony population. The D.C. Revitalization Act authorized the establishment of a three-member D.C. Corrections Information Council to provide BOP with advice and information regarding matters affecting D.C. sentenced felons. However, the Council was never funded or established. As of March 2000, BOP had awarded contracts for the construction of two correctional facilities to house D.C. inmates. However, several stop-work orders were issued for one of the facilities. BOP has also contracted to transfer some D.C. inmates to non-BOP correctional facilities. As of March 2001, five of the seven Lorton facilities have been closed—Medium Security, Occoquan, Minimum Security, Maximum Security, and Youth. The two remaining facilities, Central and Modular, are scheduled to close in December 2001. Pursuant to the federal government assuming responsibility for persons convicted of a felony offense under the D.C. Code housed at the Lorton Correctional Complex, the Attorney General, in consultation with certain D.C. and court officials, was required to select a Corrections Trustee, an independent officer of the D.C. government. The Trustee is to oversee financial operations of DOC until such time as BOP has transferred all felons sentenced under the D.C. Code residing at Lorton to a facility operated by or contracted for by BOP. Corrections Trustee responsibilities include (1) financial oversight of DOC and allocation of funds as enacted in law or as otherwise allocated, including funds for short-term improvements necessary for the safety and security of staff, inmates, and the community; (2) purchase of any necessary goods or services on behalf of DOC; and (3) working with BOP to establish a priority employment consideration program to facilitate placement for displaced DOC employees. The Corrections Trustee is to propose funding requests each fiscal year to the President and Congress. Upon receipt of federal funding, the Corrections Trustee is to provide an advance reimbursement to BOP of those funds identified by Congress for construction of new prisons and major renovations. BOP will be responsible and accountable for determining how these funds are used for renovation and construction. Both DOC and BOP shall maintain accountability for funds reimbursed from the Corrections Trustee, and shall provide expense reports by project at the request of the Corrections Trustee. On September 26, 1997, the Attorney General announced the appointment of the D.C. Corrections Trustee. The Truth in Sentencing Commission was established as an independent agency to make recommendations, within 180 days after enactment of the D.C. Revitalization Act, to the D.C. Council for amendments to the D.C. Code with respect to the sentences to be imposed for all felonies committed on or after the date 3 years after passage of the D.C. Revitalization Act. The Commission was to include seven voting members with knowledge of and responsibility for criminal justice matters: Attorney General (or designee); two judges of the Superior Court of the District of Columbia; and one representative each from the D.C. City Council, the D.C. government executive branch, Defender Service, and the Office of the U.S.
Attorney for the District of Columbia. Single representatives from BOP and the D.C. Office of Corporation Counsel shall serve as nonvoting ex officio members. In 1998, the D.C. Council created an advisory body by enacting the Advisory Commission on Sentencing Establishment Act of 1998. The D.C. Council approved the sentencing guidelines recommended by the Commission on July 11, 2000. No later than 1 year after the date of enactment of the D.C. Revitalization Act, the U.S. Parole Commission shall assume jurisdiction and authority of the Board of Parole of the District of Columbia to grant and deny parole and to impose conditions upon an order of parole in the case of any imprisoned felon who is eligible for parole or reparole under D.C. Code. The U.S. Parole Commission shall have exclusive authority to amend or supplement any regulation interpreting or implementing the parole laws of D.C. with respect to felons. On the date of establishment of the Court Services and Offender Supervision Agency (Court Services), the U.S. Parole Commission shall assume any remaining powers, duties, and jurisdiction of the Board of Parole of the District of Columbia, including jurisdiction to revoke parole and to modify the conditions of parole, with respect to felons; Superior Court of the District of Columbia shall assume the jurisdiction and authority of the Board of Parole of the District of Columbia to grant, deny, and revoke parole, and to impose and modify conditions of parole, with respect to misdemeanants; and Board of Parole established in the District of Columbia Board of Parole Amendment Act of 1987 shall be abolished. A trustee will be appointed by the Attorney General in consultation with the Chair of the D.C. Control Board and the Mayor, who will be responsible for the reorganization and transition of functions and funding relating to pretrial services, parole, adult probation, and offender supervision. Beginning with appointment and continuing until establishment of Court Services, the trustee shall have the same powers and duties as the Director of Court Services; have the authority to direct actions of all agencies of the District of Columbia whose functions will be assumed by Court Services and the D.C. Board of Parole; exercise financial oversight over all D.C. agencies whose functions will be assumed by Court Services and the D.C. Board of Parole, and allocate funds to these agencies as appropriated by Congress and allocated by the President; receive and transmit to Pretrial Services all funds appropriated for such agency; and receive and transmit to Defender Service all funds appropriated to such agency. On September 26, 1997, the Attorney General announced the appointment of the D.C. Pretrial Services, Parole, and Offender Supervision Trustee. The trusteeship ended on August 4, 2000, with the certification of the agency by the Attorney General as an independent federal executive agency. Court Services was established within the executive branch of the federal government, to be headed by a director appointed by the President, by and with the advice and consent of the Senate, for a term of 6 years. In general, the agency is to provide supervision, through qualified supervision officers, for offenders on probation, parole, and supervised release pursuant to the D.C. Code. The agency is to carry out its responsibilities on behalf of the court or agency having jurisdiction over the offender being supervised. 
An interim director has led Court Services since the trusteeship ended on August 4, 2000. As of March 2001, a permanent director had not been appointed. The agency is responsible for the following individuals:
Released offenders — The agency shall supervise any offender released from imprisonment for any term of supervised release imposed by the Superior Court of the District of Columbia. Such offender shall be subject to the authority of the U.S. Parole Commission until completion of the term of supervised release.
Probationers — The agency shall supervise all offenders placed on probation by the Superior Court of the District of Columbia, subject to appropriations and program availability.
Parolees — The agency shall supervise all individuals on parole pursuant to the D.C. Code. The agency shall carry out the conditions of release imposed by the U.S. Parole Commission or, with respect to a misdemeanant, by the Superior Court of the District of Columbia.
The D.C. Revitalization Act further provided that Pretrial Services shall function as an independent entity within the agency. (Defender Service was also to function as an independent entity within Court Services; however, provisions of Public Law 105-274 removed Defender Service from the jurisdiction of Court Services and the trustee.) The director of Court Services shall submit, on behalf of Pretrial Services and with the approval of the Director of Pretrial Services, an annual appropriation request to the Office of Management and Budget (OMB). Such request shall be separate from the request for the agency. (A similar provision for Defender Service was removed by Public Law 105-274. However, Public Law 105-274 added a provision that the director of the agency shall receive and transmit to the Defender Service all funds appropriated for such agency.) In addition, there are authorized to be appropriated in each fiscal year such sums as may be necessary for the following: supervision of offenders on probation, parole, or supervised release for offenses under the D.C. Code; operation of the parole system for offenders convicted of offenses under the D.C. Code; and operation of the Pretrial Services, Parole, Adult Probation, and Offender Supervision Trusteeship. The administration and financing of the District of Columbia Courts (i.e., Superior Court of the District of Columbia, District of Columbia Court of Appeals, and the District of Columbia Court System) are transferred to the federal government. Funding for these courts is authorized to be appropriated for payment to the Joint Committee on Judicial Administration in the District of Columbia, which shall include, in its budget submission to OMB and Congress, the budget and appropriations requests of these courts. The D.C. Revitalization Act amended various D.C. Code provisions regarding Pretrial Services. For example, the D.C. Revitalization Act provided that a seven-member executive committee would advise the agency. The Chief Judges of the U.S. Court of Appeals for the District of Columbia Circuit and the U.S. District Court for the District of Columbia, in consultation with other executive committee members, would appoint a director for the agency who shall be a member of the D.C. bar.
The D.C. Revitalization Act also provided information relating to the duties and compensation of the director; the director's employment of a chief assistant and other personnel necessary to conduct the business of the agency; the requirement for the director to submit an annual report to the executive committee and Court Services Director on the agency's administration of its responsibilities for the previous fiscal year and a statement of financial condition, revenues, and expenses; and that funding and appropriations for the agency will be received and disbursed by the Court Services Director to and on behalf of Pretrial Services. The D.C. Revitalization Act amended various sections of the D.C. Code relating to Defender Service. For example, the D.C. Revitalization Act required the appointment of a director and deputy director by the Chief Judges of the U.S. Court of Appeals for the District of Columbia Circuit and the U.S. District Court for the District of Columbia. The D.C. Revitalization Act also called for the director to assume responsibility for preparing an annual report on Defender Service's operations and for arranging an independent audit. These amendments added by the D.C. Revitalization Act regarding Defender Service were subsequently repealed by Public Law 105-274. The District of Columbia Courts and Justice Technical Corrections Act also provided that Defender Service shall submit an annual appropriations request to OMB. Also, under the provisions of Public Law 105-274, the Court Services Director is to receive and transmit to the Defender Service all funds appropriated for Defender Service.
Appendix IV: Adult Offenses Prosecuted by the Office of the United States Attorney for D.C.
Appendix IV describes the typical case flow for offenses prosecuted in the Superior Court of the District of Columbia (Superior Court) by the Office of the United States Attorney for the District of Columbia (USAO). USAO is responsible for prosecuting felony and serious misdemeanor violations committed by adults in D.C. ("U.S. offenses"). USAO prosecutes misdemeanors such as petty theft, assaults, weapons offenses, and narcotics possession. The Office of the Corporation Counsel for D.C. (Corporation Counsel) is responsible for prosecuting "minor" misdemeanor violations ("D.C. offenses"), such as drinking in public or disorderly conduct, in addition to criminal traffic offenses, and offenses committed by children. The case flow process for D.C. offenses is described in appendix V, and the case flow process for offenses committed by children is described in appendix VI. This case flow process description reflects process-related information as described to us by relevant agency officials. We did not verify the accuracy of the information provided to us. As such, we did not test to determine whether the processes were functioning as described to us. We recognize that there may be aspects of a specific case that make its processing unique, and that there may be exceptions in the normal progression of the stages in the justice system. However, this description will focus on the case flow process for a typical adult case, prosecuted by USAO, as it progresses through the basic stages of the criminal justice system. Most cases begin with an incident and subsequent arrest. Police officers may become aware of an incident as a result of a civilian report of a crime or through observation of suspected criminal activity.
Civilians may call a nonemergency police department number or call a 911 dispatcher to report a crime. Typically, the 911 dispatcher is to ask a caller for limited information regarding the incident, and then issue a radio communication to the appropriate police district transmitting the information provided by the caller. In D.C., over 30 law enforcement agencies other than the Metropolitan Police Department of the District of Columbia (MPDC), such as the U.S. Capitol Police and the U.S. Park Police, may make arrests for crimes committed within D.C. However, MPDC makes a large majority of arrests in D.C. Arrests can be made with or without an arrest warrant. Arrest warrants are issued by the court when MPDC submits an application for a warrant (a complaint that may be supported by an affidavit) showing there is probable cause to believe that an offense has been committed and that the person named in the complaint has committed the offense. With warrantless arrests, the arresting officer must have probable cause that the person to be arrested has committed or is committing an offense. Arrests resulting from an arrest warrant are processed similarly to warrantless arrests. Different scenarios may occur at the scene of a suspected crime, depending on the type of offense the suspect allegedly committed and the manner in which the officer became aware of the incident. Typically, an officer is to perform several standard tasks when investigating an incident. These tasks may include (1) calling a dispatcher with the suspect’s vehicle tag number, (2) calling the dispatcher to check for outstanding warrants, (3) conducting any field tests, (4) taking statements from witnesses or victims, or (5) calling a dispatcher to send a detective to the scene. If at the arrest scene MPDC determines that the arrestee is entitled to be released without being charged, an officer should take information necessary to make an entry in the detention journal, provide the arrestee with a copy of an “Information to Arrestee Released Without Charge” (PD 731) form, and authorize the release of the arrestee at the scene. The officer should also make an entry in the detention journal and assist in preparing a Detention Report (PD 728). MPDC most often releases arrestees on detention journal in situations within the prosecutorial jurisdiction of USAO, although detention journal release is also available for situations within the prosecutorial jurisdiction of the Corporation Counsel. After a person is arrested, physically secured, and searched on the scene, s/he is to be transported to a police district station for processing. The arresting officer could either transport the arrestee him/herself, or call a dispatcher to request a transport vehicle, which is a secured car. It typically could take at most 15 minutes for a transport vehicle to arrive. MPDC is operationally divided into three Regional Operations Commands (ROC), which are subsequently divided into seven police districts. Officers are assigned to one of the seven districts, known as “1D” through “7D,” or to one of several specialized units, such as Major Narcotics or Vice. Within each district, patrol officers are assigned in teams to Police Service Areas, which are geographically manageable, neighborhood-based subsets of the district. In each district, there is typically one building, or station, used for processing arrestees. 
At the district station, processing the arrestee includes: (1) collecting and cataloguing property, (2) interviewing the arrestee and completing the standard arrest paperwork, (3) completing a background check, (4) booking the arrest, (5) fingerprinting and identifying the arrestee, and (6) determining if the arrestee is eligible for release. At the district station, the arresting officer should conduct an additional search of the suspect and collect all property in his/her possession. Officers may be required to complete several property and evidence forms, depending on the property and evidence seized from the arrestee.
Prisoner's Property Receipt (PD 58) – The PD 58 inventories the property collected from the arrestee, and allows the arrestee to authorize a third party to claim the property.
Property Envelope (PD 14) – The PD 14 is a plastic envelope for prisoner's property.
Any evidence recovered by police officers should be documented on a variety of police forms. The evidence should be turned over to the station's designated "property clerk," who is to obtain the evidence from the arresting officer and maintain temporary custody of the evidence in the property office. The arresting officer should also make an entry in one of several logbooks (i.e., there may be separate logbooks for drugs, evidence, vehicles, and prisoners' property). The evidence is assigned a book and page number, which can be used later to locate the entry in the book. Evidence should later be transported to a central MPDC property office, where it is to remain until needed in court. The following forms are required to document evidence:
Property Record (PD 81) – The PD 81 identifies property placed in the custody of the MPDC Property Division.
Evidence Envelope (PD 95) – A PD 95 is a plastic bag used for each item confiscated that is considered evidence (e.g., weapons or cash).
Report of Drug Property Collected, Purchased, or Seized, Drug Enforcement Administration (DEA) Form 7 – If drugs were confiscated, the officer is to complete a Form 7. Drug evidence is assigned both a property number and a DEA lab number. Typically, drug evidence and associated paperwork are placed inside a "heat sealed" evidence bag and deposited in a secured "drug mailbox" in the district station. An officer from the Major Narcotics Branch should collect the drug evidence daily from the district drug mailboxes. The officer is to check the information in the drug logbook with the sample and sign that s/he has collected the drugs. The drugs are sometimes initially tested at the district station.
Property Tag (PD 285) – A PD 285 is a tag attached to an item that is too large to be put in a property envelope.
The arresting officer is to take the arrestee to a processing area or workroom at the district station. There, the officer questions the arrestee about the incident and could begin preparing the arrest paperwork. MPDC has several different forms used for reporting offenses, arrests, and other steps of processing the arrestee. For every arrest made, the arresting officer is required to complete the following forms:
Arrest/Prosecution Report (PD 163) – The PD 163 is the basic prosecution report that contains the arrestee's background information (name, physical description, address, date of birth, employment, and the names of relatives); information about the arrest; witnesses; charges; and a factual narrative about the arrest incident.
MPDC Court Case Review (PD 168) – The PD 168 lists all of the officers involved in the investigation of the arrest and describes how each individual was involved (i.e., arresting officer, chain of custody). The information is used to determine which officers need to be subpoenaed to testify about the case. MPDC Warning As To Your Rights card (PD 47) — Prior to being interviewed, the arrestee will be read his/her Miranda rights and asked whether s/he is willing to answer questions and to waive his/her rights to have counsel present. The arrestee is asked to sign the PD 47 indicating his/her response. The officer conducting the interview and an additional witness also must sign the Miranda card. The arresting officer also may be required to complete additional forms, depending on the alleged offense and the arrest situation. MPDC Incident- Based Event Report (PD 251) contains information about the incident and is a public document. The MPDC Supplement Report (PD 252) contains information about the suspect, solvability factors, stolen property, and an officer narrative, and it can be used internally by MPDC, as well as by USAO and defense counsel. Once the questioning of a prisoner is completed, submitted, and approved, the prisoner is permitted one telephone call, and then moved to a holding cell while the booking process is completed. The arresting officer’s contact with the arrestee typically ends at this point. MPDC uses two primary criminal history records databases to conduct background checks: the Washington Area Law Enforcement System (WALES) and the National Crime Information Center (NCIC) system. WALES contains some local criminal background information and is connected to NCIC, which is maintained by the Federal Bureau of Investigation (FBI) and contains national data. The officer conducting the background check can be connected to NCIC through WALES. An officer can query WALES using the arrestee’s name, birth date, sex, and race. WALES will display identifying information about the individual, including outstanding warrant information and any criminal justice identification numbers that may exist nationwide. The name of the officer who conducted the background check and whether the arrestee had an outstanding warrant should be recorded on the PD 163. A booking officer is responsible for booking the arrest, which refers to the process of entering the arrest information from the PD 163 into the Criminal Justice Information System (CJIS). CJIS is MPDC’s computer system that contains information on arrests for all individuals who are arrested in D.C. Data fields in CJIS correspond to questions on the PD 163, and CJIS is set up with code menus. Except for the narrative description about the arrest (the statement of facts), the booking officer is to enter the majority of the information from the PD 163 into CJIS. CJIS automatically generates an arrest, or booking, number, which is a counter number assigned to each arrest by an “arrest unit” per year (e.g., the arrest number 030002083 would be automatically generated for the 2,083rd arrest in 3D this year). The booking officer should obtain the arrest number in order to (1) complete the standard arrest paperwork and (2) begin the fingerprinting process. MPDC only positively identifies arrestees who are charged with U.S. offenses. The identification process is conducted using LiveScan machines that electronically capture fingerprints, mugshots, and arrest information. 
There is a LiveScan machine in each of the district stations and at the Central Cellblock (CCB) at MPDC Headquarters. All of the LiveScan machines are electronically linked to an Automated Fingerprint Identification System (AFIS), which is also located at MPDC Headquarters. LiveScan automatically verifies fingerprints and arrest information contained in AFIS. The system operates like e-mail, with the districts e- mailing information to and from AFIS. LiveScan Process. First, the booking officer is to enter information from the PD 163 into LiveScan. The arrest number connects the arrest in CJIS to the information in LiveScan. Next, the booking officer is to take the arrestee’s photographs (front and left profile), which are stored electronically. Finally, the booking officer is to scan the arrestee’s fingerprints into LiveScan. When the booking officer is finished, LiveScan informs the officer whether the fingerprints were scanned out of sequence or if a fingerprint needs to be rescanned. After the booking officer enters all of the information and fingerprints into LiveScan, s/he is to send the information to AFIS. AFIS automatically reads the arrestee’s fingerprints and compares a selected number of prints in the database in order to identify the arrestee’s fingerprints. If the arrestee had previously been arrested for a U.S. offense, AFIS identifies the arrestee through the submitted fingerprints and retrieves the individual’s Police Department Identification (PDID) Number. The PDID is a six-digit permanent identification number assigned to an individual when s/he is first arrested for an offense that is within the prosecutive jurisdiction of USAO. An individual keeps the same PDID throughout all subsequent involvement in the D.C. criminal justice system. If the arrestee has never been arrested for a U.S. offense, AFIS assigns the arrestee a new PDID. AFIS sends the information about the arrestee’s name and PDID back to the LiveScan machine at the district station. According to MPDC, AFIS takes several minutes to complete a single search for a fingerprint match, whereas a person manually searching a print would take anywhere from 15 to 45 minutes, depending on the classification and clarity of the print. It typically takes AFIS 20 to 30 minutes to complete the identification process for a single arrestee, depending on the number of requests that AFIS is processing. When the booking officer receives the identification information from AFIS, s/he is to write the PDID and the AFIS Search Identification Number, a tracking number automatically assigned to each search that AFIS completes, on the PD 163. Next, the booking officer is to verify the information in LiveScan, confirm the arrestee’s charged offense, and enter the PDID number into LiveScan. When the booking officer has reviewed all of the information, s/he is to send a confirmation to AFIS. LiveScan can automatically generate two fingerprint cards—one to be sent to AFIS and one to be sent to the FBI—and an armband for the arrestee. The armband has the arrestee’s name, PDID, photograph, sex, and race. Finally, the booking officer should update the arrest information in CJIS as needed and enter the arrestee’s PDID. From the district station, an arrestee may be released on citation, released on bond, locked up, or, in some instances, transported to a hospital. Citation Release. For certain misdemeanor offenses, an arrestee may be eligible for citation release, which allows the arrestee to be released on his/her own recognizance. 
Arrestees are given a citation appearance date to appear in court 4 to 6 weeks from the date of arrest. In order to be eligible for a citation release, the arrestee must live within a 25-mile radius, show means of support, and have three references attesting to his/her identity. The arresting officer is to complete a Citation Release Determination Report (PD 778) while questioning the arrestee. If the officer determines that the arrestee qualifies for citation release, a Citation to Appear (PD 799) is prepared indicating the date and time the arrestee is to appear in court, the charge(s) against the arrestee, the penalty for not appearing, and acknowledgement by the arrestee of the citation. If the arrestee has picture identification, s/he can be released from the station. If the arrestee does not have picture identification and there is no one at the station who can positively identify the arrestee, s/he is locked up. Bond Release. If an individual is not eligible for citation release, police determine if s/he can be released on bond. There is a bond schedule and the arrestee either pays the appropriate bond amount or is locked up. Bond release is rarely used in D.C. Lockup Arrests. MPDC does not release arrestees prior to the initial court appearance if they are arrested for a felony offense. Lockup is typically used for individuals charged with a felony offense; however, it is also used for arrestees charged with misdemeanors who cannot prove their identification for purposes of citation release, and who otherwise are not eligible, or do not have the available funds to post and forfeit collateral or pay a bond amount. Arrestees can be locked up prior to their initial court appearance at the district station, at CCB, or at the U.S. Marshals Service cellblock, depending on the time of the arrest. MPDC Lockup List. A lockup list, which is a list of individuals who were detained after arrest, is generated in MPDC’s CJIS system. The lockup list is a real-time document, meaning that it is updated throughout the day as individuals are arrested and the arrests are entered into the CJIS system. A new lockup list can be generated at any time; however, the lockup list is generated three times per day (9:30 a.m., 11:30 a.m., and 3:30 p.m.) to distribute to those agencies that do not have access to CJIS. The initial list, the 9:30 a.m. list, typically contains 80 to 90 percent of the cases for a day. The remaining cases are typically divided between the 11:30 a.m. and 3:30 p.m. lists. During a typical week, the number of arrestees per day can range from 60 to 130. If an arrestee has visible bruises or abrasions or complains of illness, two officers are to transport him/her to D.C. General Hospital. When transporting an arrestee to the hospital, officers should complete an Arrestee’s Injury/Illness Report (PD 313). At D.C. General Hospital, arrestees are housed in a guarded “strong room” while they wait to see a doctor. If the doctor releases the arrestee, s/he is to be transported back to the district station or to CCB to continue the booking process. If an arrestee is admitted to the hospital, an officer from the district station’s Crime Scene Search Unit is to go to the hospital to fingerprint the arrestee. The Crime Scene Search Unit officer should take the PD 163 and the fingerprint to the CCB for verification of the arrestee’s identity. After the arrestee is identified, CCB processes the paperwork, adds the arrestee to the lockup list, and forwards the paperwork for processing. 
According to MPDC officials, about three or four arrestees per day are transported to the hospital but not admitted, and about two arrestees per week are admitted to the hospital. The time of day when the booking process is completed determines where the arrestee will be transported if s/he is locked up before the initial court appearance. If processing is complete by the cut-off time, then the arrestee is transported to the U.S. Marshals Service cellblock, where s/he is held until her/his initial court appearance, which will occur the same court workday. Arrestees are to arrive at the U.S. Marshals Service cellblock by the cut-off times of 3:00 p.m. during the week, 2:30 p.m. on Saturdays, and 10:30 a.m. on holidays. If the processing is not completed by the cut-off time, the arrestee will be held overnight at either an MPDC district station or at CCB, and will be transported to the U.S. Marshals Service cellblock to appear in court on the next court workday. For arrestees held overnight, MPDC may begin to transport arrestees to the U.S. Marshals Service cellblock at approximately 6:00 a.m., when the U.S. Marshals Service begins accepting arrestees. After the district station, the arrestee and the arrest paperwork are sent to separate locations. The arrest paperwork for arrestees who are released on citation may be mailed or transported to MPDC’s Court Liaison Division. All of the arrest paperwork for locked up arrestees is transported from the districts to the CCB, which then sends the paperwork to the Court Liaison Division. When arrestees arrive at the U.S. Marshals Service cellblock, a deputy is to check each arrestee’s armband against his/her vansheet to verify that the correct person is being locked up, bring the arrestees into the cellblock area, remove the arrestees’ handcuffs, search the arrestees, and transfer them to holding cells. Several events that involve different criminal justice agencies can occur at the U.S. Marshals Service cellblock. The Pretrial Services Agency (Pretrial Services) conducts drug tests and interviews arrestees. The Criminal Justice Act (CJA) Office of the Public Defender Service for the District of Columbia (Defender Service) interviews arrestees to determine their financial eligibility for court-appointed counsel. Finally, a defense attorney may interview arrestees. Pretrial Services is to test arrestees charged with U.S. offenses (both misdemeanors and felonies) for cocaine, opiates, and PCP. After arrestees are transferred to a holding cell, drug surveillance officers from the Adult Drug Unit at Pretrial Services request that arrestees voluntarily submit to a drug test. According to Pretrial Services, about 80 to 85 percent of arrestees agree to take the test; however, if the arrestee does not agree, s/he is typically ordered by a judicial officer to submit to a drug test at the initial hearing. Pretrial Services is to advise defendants that the results of the test are typically used to determine conditions of release. There is about a 30-minute turnaround time to obtain drug test results. After the drug test, the arrestees are to be transferred into the interview room of the U.S. Marshals Service cellblock. Pretrial Services is to interview all arrestees charged with U.S. offenses who are detained prior to the initial court appearance. The interview may occur at the district station where the arrest occurred, or in the U.S. Marshals Service cellblock. At the U.S. 
Marshals Service interview room, Pretrial Services officers are to interview arrestees whom they did not interview at the district stations, and any newly arrested individuals. Interviewers should read a statement indicating how the interview information will be used before the arrestee answers any questions. Pretrial Services interviewers ask questions about the arrestee's current residence, family ties, employment, health, criminal history, substance abuse, and other court cases. Criminal records are to be investigated through WALES, NCIC, the Interstate Identification Index (III) system, and the National Law Enforcement Telecommunications System. Demographic and personal information is to be verified through personal references, probation, parole, and other pretrial services officers, and an arrestee's criminal history is confirmed through other criminal justice and law enforcement agencies. The results of the Pretrial Services interview are ultimately compiled in a bail report, containing a release recommendation, which is distributed at arraignment/presentment, and used by the court to determine conditions of release. CJA representatives are to interview arrestees on the lockup list at the U.S. Marshals Service cellblock to determine if they are eligible for court-appointed counsel. The interviewers are to ask the arrestees about their employment, income, marital status, and number of dependents. Based on the arrestee's answers, and referring to the Department of Labor poverty guidelines, the interviewer can determine whether the arrestee is eligible for court-appointed counsel and whether the arrestee can contribute a portion of the attorney's fees. Examiners are to conduct interviews three times per day, as each lockup list is generated. The defense attorney should check a copy of the lockup list to see which cases s/he has been assigned, and then should go to the U.S. Marshals Service cellblock to meet with his/her client. The attorney is to review the Pretrial Services bail report and the charges with the defendant, explain what will happen at the initial court appearance, and explain what will likely happen with respect to the release decision. After being interviewed, arrestees are to be moved to one of the six holding cells located directly behind the Arraignment Courtroom, courtroom C-10, to wait for their case to be called. If an arrestee's attorney did not meet with the arrestee in the interview room of the U.S. Marshals Service cellblock, s/he may meet with the arrestee in the cellblock behind courtroom C-10. After arrest and booking, USAO must determine whether to charge the arrestee with a crime in Superior Court. USAO requires a police officer who is knowledgeable about the facts of the arrest to report in person to USAO for "papering." Papering is the stage of the charging process during which officers present their arrest reports to a prosecutor and explain the circumstances of the arrest. USAO decides for each arrest whether the case should be prosecuted (papered) or not (no-papered). The following describes the charging process for lockup arrests. The officer is required to check in at Court Liaison Division by 7:30 a.m., the court day after an arrest that resulted in a lockup. Under certain circumstances, an officer may obtain permission for "late papering," in which case the officer is required to arrive in the USAO Intake office by 11:00 a.m.
Officers normally paper cases on the same court day if the arrest occurs during the day and the arrestee is processed before the cutoff time. At Court Liaison Division, the officer is to complete a Court Appearance Worksheet (PD 140), which lists information about all of the court appearances the officer must make on a given day, and then wait in line for a Court Liaison Division clerk. The Court Liaison Division clerk is to time-stamp the PD 140, initial it, return one copy to the officer, check the officer into the Time and Attendance Court Information System (TACIS), and then give the officer the original arrest paperwork (PD 163). The officer is then to go to USAO to paper the case. After checking in with Court Liaison Division, the arresting officer should go to the USAO Intake office, which is typically open from 7:30 a.m. to 5:30 p.m., and is located in the basement of the Superior Court Building in room C-195. At USAO, the arresting officer is to photocopy the arrest paperwork and assemble the USAO case jacket that contains all of the relevant police paperwork. This paperwork typically includes (1) a PD 163, (2) a green fingerprint card, (3) a copy of the Court Appointment Notification System notice, and (4) various other reports depending on the specifics of the arrest. USAO requires that officers make five copies of all paperwork prepared for a case. Officers are to drop off the PD 163 and accompanying paperwork through a window chute into the Intake office, and then wait to meet with a USAO attorney. USAO employs two to three criminal history analysts who complete a criminal records check on each arrestee on the lockup list. As soon as the arrestees are added to the lockup list, the analysts can begin to conduct criminal history record checks using federal criminal records databases, such as WALES, NCIC, and the III system. The analysts should determine if the arrestee has a criminal record, identify all of the arrestee’s outstanding charges, and search for any local outstanding warrants. USAO typically does not begin papering a case until an arrestee has been positively identified and all background checking has been completed. When the background check is complete, the officer is to meet with a screener, who is a supervising or senior attorney. There are typically three Grand Jury/Intake screeners who review violence misdemeanor and felony cases other than those involving domestic violence, and one Sex Offense/Domestic Violence screener to review cases involving these offenses. The screener is to review the paperwork and discuss the case with the officer to determine whether to paper the case. If the screener decides to paper the case, s/he should also complete a “screener sheet” and determine the following: The lead charge - The screener may choose to charge the arrestee with a more or less serious crime than the arresting charge, depending on the circumstances of the arrest. The bond recommendation - The screener can make a decision regarding what position USAO is to take on conditions of release pending trial. The USAO prosecution section that should handle the case - There are several USAO sections that could handle a case: Misdemeanor Trial, Sex Offenses/Domestic Violence, Grand Jury/Rapid Indictment Program (RIP), Community Prosecution, Homicide, or U.S. District Court section. 
The type of case and Police Service Area designation - The screener should indicate on the USAO case jacket the type of case (e.g., felony, misdemeanor, or domestic violence), and label the case with a “Lead Charge Police Service Area” sticker that indicates the Police Service Area in which the arrest occurred. Additional papering work required - The screener may make suggestions to the papering attorney about the case. For example, a screener might suggest filing an affidavit or warrant with the court, requesting a jail cell search of the defendant’s clothing, coordinating with other attorneys on related cases, or obtaining laboratory analyses. If the screener determines to no-paper the case, the officer continues to the next step of the process. According to USAO, about 30 percent of cases are not papered. Regardless of whether the case is papered, the screener should enter the case information and papering outcome into CJIS. At this point, other involved agencies with access to CJIS may know which cases USAO intends to prosecute. The screener also designates if the case is a “priority” case. Priority cases are those that are screened prior to 10:00 a.m., and should therefore be ready for the initial hearing (i.e., papered by USAO, and the offender interviewed by Pretrial Services and appointed counsel). This designation alerts all of the involved agencies and defense counsel that these cases should be called in the morning court session, and that Pretrial Services, CJA, and defense counsel interviews should be conducted expeditiously. After meeting with the screener, the arresting officer is to go to the representative from Superior Court, who is located in the USAO Intake office, to pick up a court jacket for the case. The court representative should issue either a felony or misdemeanor court jacket with a pregenerated court docket number, beginning with an “F” in felony cases or an “M” in misdemeanor cases to each arrest. The court representative manually matches the court docket number with the arrestee’s lockup number. Next, the officer is to meet with an Assistant U.S. Attorney (AUSA) who is responsible for papering the case. The papering attorney is to complete the USAO case jacket, interview the officer to find out more information about the case, enter information about the case into the USAO computer system, and prepare the charging document and other documents needed to prosecute the case. If the screener determined to no-paper the case, the papering attorney is to complete a “no-paper” slip containing information about the case and the reason for no-papering the case. The officer is able to move to the next step of the process. There are separate forms, or screens, in the USAO computer system for different offenses that the AUSA papering the case uses to guide the interview with the officer. Information entered about the case includes information about chain of custody, the circumstances of the arrest, the specific involvement of the officers on the scene, and the arrestee and his/her actions. In misdemeanor cases, charges against an arrestee are brought by way of an “information,” a one-page document that states the nature of the alleged crime and the corresponding statutory code section for the charged offense, the date of occurrence, victim information, if any, and information about the defendant. The information is the only document that needs to be filed to bring a misdemeanor case to trial. 
In felony cases, charges against an arrestee are initially brought by way of a “complaint, “ which is similar to an information, except that a police officer must swear to the allegations in a complaint. The complaint is only the initial charging document that permits a felony case to go forward for the initial court appearance. The AUSA is to manually fill out the charging document, and then forward it to a legal technician in the USAO Intake office. The legal technician is to type a duplicate version of the charging document and generate labels for the court and case jackets. Typically, there are three to five AUSAs who paper misdemeanor cases. During papering, the AUSA in the misdemeanor trial section typically prepares the discovery packet and the plea offer. The AUSA should also delete any witness information from the PD163 and include the PD 163 in the discovery packet. AUSAs in the Sex Offense/Domestic Violence section paper cases involving domestic violence and sex offenses. Domestic violence cases may be complicated because of issues such as protecting the safety of the victim and witnesses and obtaining stay-away orders. In addition, these cases can also involve children and/or allegations of child abuse and neglect. The Sex Offense/Domestic Violence section coordinates with the Domestic Violence Intake Center throughout the papering process. The Center meets with the victim and coordinates the civil and criminal sides of the case. It typically takes 30 to 45 minutes to paper a misdemeanor domestic violence case, and 10 minutes to no-paper a case. Felony domestic violence cases can take over 60 minutes to paper. Domestic violence cases are called in court after 3:00 p.m. to afford the victim adequate time to obtain a civil protection order in appropriate cases. After meeting with the papering attorney, the officer is to return to a screener who reviews the information in the jacket and signs the charging document. The screener also should review and sign the officer’s PD 140. A series of time intervals, from 7:30 a.m. to 6:00 p.m., are preprinted on the PD 140; the screener uses these to indicate how long the officer was papering the case. Next, the officer is to return to the court representative to drop off both of the case jackets and to swear to the Gerstein statement and the charging document. Gerstein refers to a Supreme Court decision that outlines the requirements for a judicial determination of probable cause prior to the imposition of any “significant restraint of pretrial liberty.” This judicial determination requires a sworn statement by the arresting officer of the facts offered to establish probable cause to believe that an offense occurred and that the defendant is the person who committed it. The officer is to swear to the Gerstein statement and sign and date three copies of the statement. The court representative then is to seal and sign each of the copies. When the USAO case jacket is given to the court representative, s/he should screen the paperwork to ensure that everything is complete and should note the time that the case was actually papered and delivered to the court. If the paperwork is not complete the case is to be returned to USAO staff for correction. The case is then transported to the Court Intake office where court staff should download identification information and charges from the CJIS system to the court’s database. 
It is also at this time that the case number is to be entered into the court computer, a judicial calendar is to be assigned, and the CJA interview sheet is to be inserted into the court case jacket. After the officer has completed the papering process, s/he should return to the Court Liaison Division to check out. The officer is to return the PD 140 to the Court Liaison Division clerk, who is to verify that the form has been completely filled out. The Court Liaison Division clerk is to time-stamp and return a copy of the PD 140 to the officer, and then check the officer out in TACIS. Arrests in which the arrestee is released on citation are also papered. While the papering process is the same, because the citation has been issued 4 to 6 weeks before the first court appearance, MPDC officers should appear sometime prior to the court date to paper the case. Once a citation case is papered, it is to be immediately forwarded to the court. The court should enter all of the information into Superior Court's database in order to have the cases prepared to go into court on the date of the citation return. An arrestee has a right to an initial hearing before a court or magistrate "without unnecessary delay," typically within 48 hours of his/her arrest. Depending on whether the offense is a felony or a misdemeanor, the initial hearing is called a presentment or an arraignment, respectively. Because a defendant cannot be formally arraigned, or charged, with a felony offense until after a grand jury returns an indictment, the initial hearing for felony offenses is called a presentment. The government presents the defendant with the charges on which it intends to seek an indictment. For misdemeanors, the hearing is called an arraignment and the defendant is formally charged and is called upon to answer the charges, almost always by entering a plea of not guilty and requesting a trial. Arrestees charged with U.S. offenses are initially charged with an offense in Arraignment Court in courtroom C-10. Typically, the judicial officer in the presentment court is a Superior Court Commissioner with powers similar to a federal court magistrate. There are 15 commissioners who rotate through arraignment court for 1 week at a time. The United States is represented in arraignment court by AUSAs assigned to the Grand Jury Section, except in cases handled by specialized sections, such as Homicide or Sex Offenses. Cases are typically called in Superior Court in the order in which they are ready. A case is considered ready for presentment/arraignment only after the following events have been completed. The case has been papered by USAO, and there are completed USAO and court jackets ready at courtroom C-10 for the initial court appearance. After a representative from Pretrial Services interviews the arrestee in the U.S. Marshals Service cellblock (prior to the initial court appearance), Pretrial Services is to assemble a case folder and generate a bail report for each arrestee. The bail report is used to determine conditions of release and it outlines current information from the arrestee's case; demographic, health, and substance abuse information; pending cases and compliance with release conditions, if any; probation and parole status, if any; and any convictions. According to Pretrial Services officials, Pretrial Services recommends the least restrictive, nonfinancial conditions that should ensure the arrestee's reappearance in court and the safety of the community.
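As an illustration only, the minimal sketch below shows one hypothetical way the bail report fields just described could be represented as a data record. The report itself is a paper product; no software system is described in this appendix, and all class, field, and example names here are assumptions made for illustration.

```python
# Illustrative sketch only: a hypothetical record mirroring the bail report
# contents described above (current case information; demographic, health, and
# substance abuse information; pending cases and release-condition compliance;
# probation and parole status; convictions; and a release recommendation).
# Names are invented for illustration, not drawn from any actual system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BailReport:
    case_information: str
    demographic_information: str = ""
    health_and_substance_abuse: str = ""
    pending_cases: List[str] = field(default_factory=list)
    release_condition_compliance: str = ""
    probation_parole_status: str = ""
    convictions: List[str] = field(default_factory=list)
    release_recommendation: str = "least restrictive, nonfinancial conditions"

# Example: a hypothetical report with no pending cases or prior convictions.
example = BailReport(case_information="hypothetical misdemeanor arrest")
print(example.release_recommendation)
```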
Defendants who are eligible for court-appointed counsel may be represented by attorneys from Defender Service or by attorneys appointed under the Criminal Justice Act (CJA attorneys). It is estimated that about 95 percent of defendants receive court-appointed counsel by either the Defender Service or CJA attorneys. In general, Defender Service attorneys are assigned cases that are high profile (in the media, important to the community), very serious (murder), or that involve multiple defendants. Defender Service attorneys are eligible for appointment based on whether they are “picking up” cases. Defender Service provides several attorneys each day to pick up cases based on each attorney’s level of experience. Thus, they also handle more routine felonies and a few misdemeanors. Effective July 31, 2000, the only attorneys who are eligible for CJA appointment are those on an approved list issued by the Court. This list consists of 250 attorneys authorized to handle U.S. cases and 85 attorneys authorized to handle D.C. cases. About 30 percent of the cases that CJA attorneys are assigned to are serious matters and 70 percent are assigned to minor offenses. Also effective July 31, 2000, the Chief of the CJA program no longer makes any recommendations to the judges about whom to appoint as counsel. Instead, judges assume full responsibility for the appointment of counsel. Currently, the Deputy Presiding Judge of the Criminal Division and other judges with particular knowledge about the operation of the CJA program are making the CJA counsel appointments. The U.S. Marshals Service typically moves three arrestees at a time from the holding cells located directly behind C-10 into the courtroom as their cases are about to be called. At the hearing, the court is to determine whether the defendant should be released, and/or what the conditions of release should be. After the court clerk reads the charges in the case, the commissioner is to review the charging document and the Gerstein statement and ask the defendant if s/he understands the charges against her/him. Charges should be dropped for cases that USAO no-papers, and the arrestee is released. The presentment/arraignment typically takes a couple of minutes, although the hearing may be longer if the charge is a more complicated offense, such as a first-degree murder charge. If USAO is requesting that the defendant be detained prior to trial, or if any significant restraints are placed on his/her liberty, USAO must make at least a showing of probable cause. This is done by way of a statement of facts, the Gerstein statement, sworn to by the police officer. If the court determines that the Gerstein statement is not sufficient to support a finding of probable cause, USAO may request 24 hours to “perfect” the Gerstein. This should generally result in the defendant being held until the next court business day when USAO is to be given a second opportunity to present a Gerstein statement that can establish probable cause. If the court finds that USAO has presented sufficient evidence to establish probable cause to believe the defendant committed the offense, the court is to then set the appropriate conditions of release for the defendant. If, however, the court finds no probable cause to believe the defendant committed the offense, based on the evidence presented, the defendant, if being held on the charge, is released. The case is not dismissed and evidence may be presented to the grand jury for its consideration. 
There are several release options and conditions of release that a judicial officer may order. A Pretrial Services representative is to complete a release order (signed by the defendant, defense counsel, and the commissioner) that outlines the ordered conditions of release. The following are possible conditions of release that may be ordered by a judicial officer. Release on personal recognizance. In most minor misdemeanor cases, the judicial officer orders that the defendant be released on his/her personal recognizance. The defendant signs a notice, called a “buck slip,” for citation releases that are continued on personal recognizance without any conditions. For lockup cases processed at presentment/arraignment and released on personal recognizance, the defendant is required to sign a release order, which contains the next appearance date. Third-party custody. Third-party custody is an arrangement whereby a defendant is released to the custody of a third party, such as a relative, friend, employer, or an organizational custodian, which is a community- based organization that provides supervision services for released defendants. The third party is to pledge his or her best efforts to see that the defendant complies with the conditions of release and returns for further court appearances. Release on bond (cash/surety). There are basically two types of bonds—cash and surety bonds. A cash bond allows the defendant to post the full amount of the bond, in cash, to the court to guarantee his/her appearance on the next scheduled court date. The court may also order that only 10 percent of the total amount of the bond be posted, in cash, with the court to secure the release of the defendant. This is done to ensure that indigent individuals can be released in certain cases. If the defendant does not appear the bond will be forfeited in the full amount of the set bond. If only 10 percent has been posted, the 10 percent will be immediately forfeited, and the court can require the defendant or the person who posted the bond to pay the remaining amount to the court. The other type of bond is a surety bond. A surety bond can only be posted by a bondsman who has been previously approved by the court to act in that capacity. The surety does not produce cash, but signs an agreement with the court that they will take custody of the defendant and that they will pay the full value of the bond if the defendant fails to appear in court. If the defendant does not appear, the bondsman will have to pay the full face value of the bond to the court. Bonds may be issued as “cash/surety” to give defendants the greatest flexibility on how to secure their release. The judicial officer may order that Pretrial Services supervise the defendant and ensure that the defendant follow certain release conditions, which may include: (1) returning to Pretrial Services within 24 hours with verification of his/her address, (2) abiding by any stay-away orders, (3) reporting to Pretrial Services by telephone or in person with some designated frequency, (4) abiding by a curfew, and/or (5) refraining from committing any violation or criminal offense. Most of the defendants charged with drug offenses or nonviolent felony offenses are released into the community while awaiting trial. Depending on the results of the drug test taken in the U.S. Marshals Service cellblock, interview findings, or prior drug history available to Pretrial Services, the judicial officer may order Pretrial Services drug monitoring and/or treatment. 
Pretrial Services should recommend an evaluation (drug test) if the defendant declines to take a drug test at lockup, has no history of substance use (i.e., no record of use in the past 30 days), or denies substance use at the Pretrial Services interview. If the defendant is released and the judicial officer orders a drug test, after presentment/arraignment the defendant reports to the Adult Drug Unit to provide a urine sample. The defendant is given a telephone number to call the next day to receive the test results. If the defendant tests positive (either at lockup or after arraignment/presentment), admits use, or has tested positive in the past 30 days, the judicial officer typically orders the defendant enrolled in “Program Placement,” which is weekly drug testing and Pretrial Services monitoring. The defendant is to be tested weekly until s/he has 12 consecutive negative tests. Continued use or failure to comply with the drug testing condition may result in referral to a drug treatment program. If the defendant is in drug treatment at the time of arrest, Pretrial Services generally recommends that the judicial officer order maintenance of treatment. If the treatment is with another agency (not Pretrial Services), the defendant may be ordered to weekly testing with Pretrial Services in addition to maintaining the other agency’s program. The Heightened Supervision Program is administered by Pretrial Services to supervise high-risk defendants. In the Heightened Supervision Program, defendants are required to submit to and/or enroll in drug testing, report to their case manager once per week, and observe a specified curfew. The program has a specific schedule of graduated sanctions for increased supervision that are instantly and automatically imposed for certain violations, leading up to a request for a show-cause hearing before a judicial officer for repeated or serious violations. The Intensive Supervision Program supervises high-risk defendants placed in a Department of Corrections (DOC) halfway house for a specified period of time with eventual release to the community. Once in the community, the defendant has weekly contact with a case manager, submits to drug testing and treatment when appropriate, and has a curfew. Violation of release conditions should result in the defendant returning to the halfway house for 2 weeks, leading to a request for a show-cause hearing before a judicial officer for repeated or serious violations. The Restrictive Community Supervision Program supervises pretrial defendants in the DOC work release program. Defendants reside in DOC and city-contracted halfway houses. Pretrial Services provides defendants with case management and drug testing and treatment when appropriate, and shares compliance and supervision information with DOC. Detention is the highest level of supervision. If the defendant is detained, s/he is remanded into the custody of the U.S. Marshals Service and is returned to the U.S. Marshals Service cellblock to await transport to the Central Detention Facility (D.C. Jail). A defendant can be detained because the charged offense is a crime of violence, or because the defendant was on probation or parole. There is a review of any release conditions at each court appearance, based on the behavior of the defendant with respect to his/her conditions of release. 
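As an illustration only, the following minimal sketch restates the weekly drug testing rule described earlier in this appendix: a defendant in Program Placement is tested weekly until 12 consecutive negative tests are recorded, and continued use or noncompliance may result in referral to a drug treatment program. The function name, return strings, and thresholds-as-code are assumptions for illustration; the report does not describe any such software.

```python
# Illustrative sketch only: the weekly "Program Placement" testing rule
# described above. True represents a negative weekly test result and False
# represents a positive result.
from typing import List

REQUIRED_CONSECUTIVE_NEGATIVES = 12

def weekly_testing_status(results: List[bool]) -> str:
    """Return a status string based on the sequence of weekly test results."""
    consecutive = 0
    for negative in results:
        consecutive = consecutive + 1 if negative else 0
        if consecutive >= REQUIRED_CONSECUTIVE_NEGATIVES:
            return "testing condition satisfied"
    if results and not results[-1]:
        return "continued use -- possible referral to drug treatment"
    return "continue weekly testing"

# Example: eleven negative tests followed by a positive test resets the count.
print(weekly_testing_status([True] * 11 + [False]))
```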
In the post-release interview, which occurs immediately after arraignment, a Pretrial Services representative is to review the conditions of release with the defendant and penalties for noncompliance, rearrest, and failure to appear. The defense attorney typically stays with the defendant while release conditions are explained, after which point the attorney is to return to court for his/her next hearing. In a misdemeanor case, the presentment serves as the formal arraignment, at which the defendant is notified of the charges, called upon to make a plea, and may make a jury demand (if applicable). The defendant typically pleads not guilty, and conditions of release pending trial are established. In the vast majority of misdemeanor cases, defendants are released at arraignment. The defendant signs a form pledging to return for trial, and a trial date is set, typically 30 to 45 days after arraignment. For felony cases, the defendant is notified of the charges against him/her, a preliminary hearing date is set, the judge who will handle the remainder of the proceedings in the case is assigned, and conditions of release until the preliminary hearing are established. There are basically two different categories of offenses adjudicated in D.C. Superior Court— felony and misdemeanor. For calendar and case management purposes, the Court has established three felony calendars: (1) Felony I, (2) Accelerated Felony, and (3) Felony II calendars, and manages the types of cases differently. Misdemeanor cases can be distinguished as U.S. Misdemeanors, D.C. Misdemeanors, and Traffic cases. All are technically misdemeanors; however, they are prosecuted by different offices and calendared in different ways. D.C. Misdemeanors and Traffic Cases are discussed in appendix V. The typical court case flow for felonies is discussed in this section, and the typical court case flow for U.S. misdemeanors is discussed later in the Misdemeanor Cases section in this appendix. The most serious offenses (first-degree murder and serious sexual assaults) are on the Felony I calendars. Felony I calendars were established because the court realized that management of the most serious cases requires more intense judicial activity and greater flexibility to conduct lengthy trials. These calendars were specifically designed so that each calendar would carry a small number of cases. These cases carry the maximum penalty under D.C. law, which is up to life imprisonment without parole. The Accelerated Felony Trial Calendar (AFTC) is for cases, other than first-degree murder and serious sexual assaults, in which the accused is held without bond. These offenses include those designated as “Dangerous Crimes” or “Crimes of Violence” such as assault with intent to kill, armed robbery, burglary, aggravated assault, kidnapping, and armed carjacking. The AFTC calendars were designed primarily to deal with preventive detention cases, which would have to be tried within 100 days. Defendants placed in pretrial detention have a statutory right to have an “accelerated” trial date (100 days from the date they were first detained). Again because these were very serious crimes, which might involve long trials, the calendars were designed to carry a limited number of cases. Felony I and Accelerated Felony cases are assigned to a specific judge, who handles all subsequent matters in the case, including the preliminary hearing. Accelerated Felony cases are assigned to the U.S. Attorney’s Community Prosecution Major Crimes Section. 
The remaining felony cases fall in the Felony II category. The offenses that fall in this category consist mostly of drug distribution, assaults with weapons resulting in moderately serious injury, and firearms and property offenses. Felony II calendars were originally designed to carry the less serious cases where defendants were not being preventively detained. According to Superior Court officials, changes in the law have resulted in a large number of cases in which the defendant is preventively detained, primarily drug distribution cases, being set on the Felony II calendars. Those defendants who are detained are typically held because they are on parole or probation. These cases generally require shorter proceedings and are usually resolved by a plea of guilty, thus avoiding a trial. Felony II cases are assigned to individual calendars for all purposes except for the preliminary hearing/preventive detention hearing. These hearings are conducted by a commissioner sitting in the preliminary hearing courtroom.

Next, the preliminary hearing is to be held anywhere from 3 to 20 days after presentment, depending on the type of offense and the release condition ordered at the presentment/arraignment hearing. The preliminary hearing is an evidentiary hearing where the court determines whether there is probable cause to believe that an offense was committed and that the defendant committed it. At the preliminary hearing, USAO calls police officers to testify and presents evidence to establish probable cause. If the judge finds probable cause that the accused committed a crime, the case is sent to a grand jury. If the judge does not find probable cause or if USAO is not ready to present evidence, the complaint is dismissed and the defendant is released. In addition, the court is to review the level of supervision required. If the defendant is not detained at D.C. Jail, the court may release the defendant into Pretrial Services’ Heightened, Intensive, or Restrictive Supervision Programs. If the defendant is detained as a result of the presentment/arraignment, the preliminary hearing is called a preventive detention hearing. Depending on the statutory basis for the detention request, preventive detention hearings are scheduled in either 3 or 5 days.

SCDIP is a collaborative effort between Pretrial Services and Superior Court. Treatment for nonviolent defendants is conducted in-house by Pretrial Services and includes graduated sanctions imposed by the Court for drug testing violations. SCDIP graduates who plead or are found guilty of felony offenses will very likely receive probation.

In D.C., unless a defendant waives grand jury consideration, a felony charge can only be brought pursuant to a grand jury indictment. Prosecutors are to conduct witness conferences and present evidence to a grand jury to assist the grand jurors in determining whether there is probable cause to believe that a defendant has committed a crime and should be brought to trial. If so, the grand jury issues an indictment (a written statement of the charges against the defendant) to the court. The grand jury process occurs in the period immediately following papering of a felony case, and may last for weeks and even months as the investigation of the case requires. If the defendant can be indicted before the preliminary hearing, s/he will be, and the preliminary hearing is no longer necessary because the grand jury has already made a determination of probable cause.
The preliminary hearing date is converted to an arraignment hearing and the defendant is arraigned on the indictment. It is also possible in some cases for the grand jury to conduct an investigation and initiate criminal proceedings on its own. It then issues what is called a “grand jury original” indictment. If the subject of the indictment is not already in custody, the Court may issue an arrest warrant. RIP cases are standard cases with similar basic fact patterns, such as a standard distribution of cocaine case or an unauthorized use of a vehicle, in which a single officer can testify before the grand jury about the facts of the case. RIP cases typically do not require investigation or civilian witnesses. The goal of the RIP program is to indict cases before the preliminary hearing. During the papering process, at screening, RIP cases are scheduled for grand jury presentment, which usually occurs within 10 calendar days. After the charges are formally filed by indictment, there is an arraignment hearing where the defendant is formally charged with the felony offense(s) found by the grand jury, advised of his/her constitutional rights, and asked to enter a plea to the charges. After the defendant is arraigned, a status hearing typically occurs within a few weeks. At the status hearing, the defendant may agree to a plea bargain, the case may be dismissed, or the judge will set a trial date. If the defendant enters a guilty plea, the judge may accept or reject the plea. If accepted, no trial is held and the defendant is sentenced at this hearing or at a later date. If the defendant enters a not guilty plea, the trial date would be set for 2 to 3 months away, depending on the judge’s calendar and whether the defendant is detained. For preventive detention cases, the trial date is to be set for 100 days from when the defendant was originally locked up. If the defendant pleads not guilty, a trial is to take place and a judge or jury decides if the defendant is guilty or not guilty. Felony cases are typically jury trials, the length of which varies depending on the type of offense. For example, Felony I trials can last from 2 weeks to 2 months. A case can be settled through a plea agreement, dismissed, continued to a second trial date, or adjudicated with the defendant being found guilty or not guilty. If a case is continued, the second trial date will typically be scheduled for 30 to 60 days after the first trial. After a guilty verdict, the court is to order a presentence investigation report from Court Services and Offender Supervision Agency (Court Services), and the presumption is that the defendant should be detained pending sentence. The court also sets a date for the sentencing hearing. After a guilty verdict or plea, the judge will sentence the defendant. The Pre-Sentencing Investigation report gives a judge complete information about the defendant that will be necessary in determining an appropriate sentence. The court can impose incarceration, restitution, community service, fines, or probation. An assessment under the D.C. Victims of Violent Crime Compensation Act is also required for all offenses of which the accused is adjudicated guilty. For offenses committed after August 5, 2000, the sentence must include a period of supervised release. Offenders who are sentenced to incarceration are sent to D.C. Jail pending transfer to a facility to serve their sentences. By 2001, all felons sentenced to incarceration under the D.C. 
Code are to be designated to facilities operated or contracted by the Federal Bureau of Prisons. If a defendant is not sentenced to incarceration, s/he is to be released and supervised by Court Services, which is involved with determining a defendant’s conditions of probation after sentencing. If the offender violates a condition of his/her probation, the sentencing judge may hold a revocation hearing. The general result of a probation revocation is that the defendant may be sentenced to a period of jail time because s/he was not successful on probation.

Misdemeanor cases prosecuted by USAO constitute over half of all criminal cases in D.C. Superior Court. Defendants are rarely detained prior to trial, and the court proceedings for misdemeanor cases are typically much simpler than those for felony cases. There is not a status hearing for misdemeanor offenses, and USAO has a long-standing agreement with MPDC that USAO will not normally conduct witness conferences for misdemeanor cases prior to the day of trial. The court typically sets trial dates for U.S. misdemeanors 30 to 45 days from arraignment. Misdemeanor trials typically do not have juries and last 1 to 2 hours. While misdemeanors are almost always resolved by dismissal or plea agreement, several events can happen at the trial:

1. The case can be continued and a new trial date set.

2. The case can be dismissed for want of prosecution because a government witness is not present, or if for some other reason the government is not prepared to try the case.

3. The defendant can also be placed in a diversion program, successfully complete the program, and have his/her case dismissed. If the defendant fails to successfully complete the program, the case would be set for trial.

4. The case may be placed on the stet docket, an informal diversion program typically offered to defendants charged with minor crimes such as unlawful entry. In these cases, the government may agree to suspend the prosecution of the case for a period of time. If, during that set period of time, the defendant refrains from certain actions (e.g., contact with the complaining witness or being rearrested) or takes certain actions (e.g., paying restitution), the government agrees to dismiss the case completely. Stet docket cases are typically set for a status hearing 3 to 9 months from the date of the agreement to place the case on the stet docket.

5. SCDIP (Drug Court) – Eligible pretrial defendants charged with nonviolent misdemeanor offenses may have their charges dismissed as part of the Superior Court Drug Court Misdemeanor Diversion Program if they complete the SCDIP program in 4 to 9 months.

6. The defendant can agree to a plea agreement. USAO estimates that 85 to 90 percent of misdemeanor cases generate guilty pleas.

7. The judge can hear the case on its merits.

8. If the defendant fails to appear, the court can issue a bench warrant. It is estimated that about 10 to 15 percent of defendants fail to appear at misdemeanor trials.

Defendants convicted of misdemeanors are rarely incarcerated upon conviction, although those who are sentenced to incarceration would serve their sentences in D.C. Jail. Figures 1 and 2 depict the typical case flow processes for (1) adult felonies and (2) misdemeanors prosecuted by the USAO. Appendix V describes the typical case flow for offenses prosecuted in the Superior Court of the District of Columbia (Superior Court) by the Office of the Corporation Counsel for the District of Columbia (Corporation Counsel).
Corporation Counsel is responsible for prosecuting “minor” misdemeanor violations (D.C. offenses), criminal traffic offenses, and offenses committed by children. Examples of misdemeanor offenses within the prosecutive jurisdiction of Corporation Counsel include “quality of life” misdemeanors, such as drinking in public or possession of an open container of alcohol. Criminal traffic offenses include offenses such as leaving the scene of an accident, driving while intoxicated (DWI), no permit, and speeding 30 miles over the limit. The Office of the United States Attorney for the District of Columbia (USAO) is responsible for prosecuting felony and serious misdemeanor violations committed by adults in D.C. (U.S. offenses). The case flow process for U.S. offenses is described in appendix IV. The case flow process for offenses committed by children is described in appendix VI. This case flow process description reflects process-related information as described to us by relevant agency officials. We did not verify the accuracy of the information provided to us. As such, we did not test to determine if the descriptions of the processes were functioning as was described to us. We recognize that there may be aspects of a specific case that make its processing unique and that there may be exceptions in the normal progression of the stages in the justice system. However, this description will focus on the case flow process for a typical adult case, prosecuted by Corporation Counsel, as it progresses through the basic stages of the criminal justice system. Most cases begin with an incident and subsequent arrest. However, at the incident a police officer has the option of arresting the violator or giving the violator a ticket for certain enumerated offenses. The completed ticket lists the charge and advises of the fine that must be paid at one of the police districts within 15 days. When the violator pays the fine, s/he may elect to forfeit the collateral or stand trial. In the event that the violator fails to pay the fine or elects to stand trial, the officer is notified to report to Corporation Counsel to paper the case. In D.C., over 30 law enforcement agencies other than the Metropolitan Police Department of the District of Columbia (MPDC), such as the U.S. Capitol Police and the U.S. Park Police, may make arrests for crimes committed within D.C. However, MPDC makes a large majority of arrests in D.C. Arrests can be made with or without an arrest warrant. Arrest warrants are issued by Superior Court when MPDC submits an application for a warrant (a complaint that may be supported by an affidavit) showing there is probable cause to believe that an offense has been committed and that the person named in the complaint has committed the offense. With warrantless arrests, the arresting officer must have probable cause that the person to be arrested has committed or is committing an offense. Arrests resulting from an arrest warrant are processed similarly to warrantless arrests. After a person is arrested, physically secured, and searched on the scene, s/he is to be transported to a police district station for processing. The arrest process for D.C. offenses is generally the same as that for U.S. offenses; however, there are some differences in the processing of D.C. and U.S. offenses. Most notably, arrestees who are charged with D.C. offenses typically (1) are not positively identified using LiveScan as are arrestees for U.S. 
offenses; (2) are not given Police Department Identification (PDID) Numbers; and (3) may be eligible to post and forfeit, or pay a fine, after an arrest. MPDC typically does not positively identify arrestees who are charged with D.C. offenses. There are, however, two circumstances in which an individual arrested for a D.C. offense will be positively identified, and thus given a PDID number—first, if an arrestee is arrested for both a D.C. and a U.S. offense, and second, if an arrestee cannot be identified (i.e., “John Doe”). An individual who is arrested for a D.C. offense may already have a PDID number because of a prior arrest. In those cases where there is not a PDID number, MPDC relies on a computer name check to determine if the individual has any previous arrests or outstanding warrants. If a person gives an incorrect name or a minor spelling error occurs, an individual’s prior involvement in the justice system may not be discovered. Depending on the nature of the arrest and the results of the criminal history check, police are to determine if an individual is eligible for one of the following: to post and forfeit, for citation release, or for bond release.

Post and Forfeit. Arrestees are eligible to post and forfeit (i.e., pay a fine) if they are arrested for 1 of about 25 to 30 selected offenses. These offenses are generally considered to be minor offenses, such as panhandling, driving without a permit, altered tags, urinating in public, and disorderly conduct. MPDC and Corporation Counsel require that the arrestee have no prior arrests for the same charge within the preceding 12 months to qualify for post and forfeit. In addition, arrestees do not need to verify their identity in order to post and forfeit. There are two opportunities for an arrestee to elect to post and forfeit: at the police district station and at Superior Court through Corporation Counsel. At the district station, MPDC may offer an arrestee the opportunity to post and forfeit. If the person is eligible and elects to post and forfeit, s/he pays a designated fine for the offense and is released from the district station. An individual can change his or her mind within 90 days and have the case heard in court. A defendant who chooses to do so is not allowed a second opportunity to post and forfeit for the offense. If the defendant chooses to let the post and forfeit stand, the case does not go to court, but the post and forfeit is considered a conviction as the arrest and payment are recorded. At Superior Court, an eligible arrestee who has not previously declined the opportunity to post and forfeit may elect to do so at his or her initial court appearance (i.e., arraignment). When Superior Court is open, Corporation Counsel can offer post and forfeit to arrestees who had been locked up after arrest. If the arrestee accepts Corporation Counsel’s offer, s/he pays the fine and the case is over. All of the cases that Corporation Counsel posts and forfeits are officially considered to be papered.

Citation Release. If the individual is not eligible to post and forfeit, police will typically determine if the individual is eligible for a citation release. For certain misdemeanor offenses, an arrestee may be eligible for citation release, which allows the arrestee to be released on his/her own recognizance.
Arrestees are given a citation appearance date to appear in Superior Court 4 to 6 weeks from the date of arrest. In order to be eligible, the arrestee must live within a 25-mile radius, show means of support, and have three references attesting to his/her identity. The arresting officer is to complete a Citation Release Determination Report (PD 778) after questioning the arrestee. This form is used to determine if the arrestee can be granted a citation release. If the officer determines that the arrestee qualifies for citation release, a Citation to Appear (PD 799) is prepared indicating the date and time the arrestee is to appear in Superior Court, the charge(s) against the arrestee, the penalty for not appearing, and acknowledgement by the arrestee of the citation. If the arrestee has picture identification, s/he can be released from the station. If the arrestee does not have picture identification and there is no one at the station who can positively identify the arrestee, s/he is locked up. Lockup Arrests. Lockup is typically used for individuals charged with a felony offense; however, it is also used for arrestees charged with misdemeanors who cannot prove their identification for purposes of citation release, and who otherwise are not eligible, or do not have the available funds to post and forfeit collateral or pay a bond amount. Arrestees are locked up prior to their first court appearance (i.e., arraignment) at either the district station, MPDC’s Central Cellblock (CCB), or at the U.S. Marshals Service cellblock depending on the time of the arrest. A lockup list, which is a list of individuals who were detained after arrest, is generated in MPDC’s CJIS system. There are typically 15 to 20 arrestees charged with D.C. offenses on the lockup list per day. The time of day when the booking process is completed determines where the arrestee will be transported if s/he is locked up before the initial court appearance. If processing is completed by the cut-off time, then the arrestee can be transported to the U.S. Marshals Service cellblock, where s/he is held until her/his initial court appearance, which should occur the same court workday. Arrestees are to arrive at the U.S. Marshals Service cellblock by the cut-off times of 3:00 p.m. during the week, 2:30 p.m. on Saturdays, and 10:30 a.m. on holidays. If the processing is not completed by the cut-off time, the arrestee can be held overnight at either an MPDC district station or at CCB, and will be transported to the U.S. Marshals Service cellblock to appear in court on the next court workday. For arrestees held overnight, MPDC may begin to transport arrestees to the U.S. Marshals Service cellblock at approximately 6:00 a.m., when the U.S. Marshals Service begins accepting arrestees. While the U.S. Marshals Service processing of arrestees for D.C. offenses does not vary from how arrestees for U.S. offenses are processed, there are differences in the involvement of other criminal justice agencies. As compared with arrestees who are arrested for U.S. offenses, Pretrial Services Agency (Pretrial Services) does not interview or drug test arrestees charged with D.C. offenses. Representatives from the Criminal Justice Act (CJA) office of Defender Service are to interview arrestees on the lockup list at the U.S. Marshals Service cellblock to determine if they are eligible for indigent counsel. The interviewers ask the arrestees about their employment, income, marital status, and number of dependents. 
Based on the arrestee’s answers, and referring to the Department of Labor poverty guidelines, the interviewer can determine whether the arrestee is financially eligible for counsel and whether the arrestee can contribute a portion of the attorney’s fees. After arrest and booking, Corporation Counsel must determine whether to charge the arrestee with a crime in Superior Court. “Papering” is the stage of the charging process at which police officers present their arrest reports to a prosecutor and explain the circumstances of the arrest. For each arrest, a police officer who is knowledgeable of facts of the arrest is required to go to Corporation Counsel to paper the case. Corporation Counsel is to determine whether the case should be prosecuted (paper the case) or not (no-paper the case). According to a Corporation Counsel official, Corporation Counsel papering is typically quicker and requires less paperwork than the USAO papering process. As a result, if an officer has to paper both a Corporation Counsel and a USAO charge, s/he is told to paper the D.C. offense before papering the U.S. offense. The officer is to paper the D.C. offense at Corporation Counsel and the U.S. offense at USAO. The officer is required to check-in at the Court Liaison Division by 8:00 a.m., the court day after an arrest that resulted in a lockup. Officers normally paper cases on the same court day if the arrest occurs during the day and the arrestee is processed before the cutoff time. At the Court Liaison Division, the officer is to complete a Court Appearance Worksheet (PD 140), which lists information about all of the court appearances the officer must make on a given day. Next, the officer is to wait in line for a Court Liaison Division clerk. The Court Liaison Division clerk is to time-stamp the PD 140, initial it, return one copy to the officer, check the officer into the Time and Attendance Court Information System (TACIS), and then give the officer the original arrest paperwork (PD 163). The officer can now go to Corporation Counsel to paper the case. It is currently the police officers’ responsibility to obtain the appropriate reports for arrests that require evidence of driving records. These offenses include, for example, no permit and operation after suspension or revocation cases. Officers should obtain the required records prior to going to Corporation Counsel to paper an arrest. After checking in with the Court Liaison Division, the officer goes to the Corporation Counsel office located in the Judiciary Center on Fourth Street. The office is open to paper cases from 8:00 a.m. until 4:30 p.m., Monday through Friday, Saturday from 7:30 a.m. to 3:00 p.m., and holidays from 7:30 a.m. to 10:30 a.m. At Corporation Counsel, the arresting officer is to photocopy the arrest paperwork and assemble the Corporation Counsel case jacket that contains all of the relevant police paperwork. This paperwork typically includes: (1) the Arrest/Prosecution Report (PD 163); (2) the Miranda waiver (PD 47); (3) a copy of the Court Appointment Notification System (CANS) notice; and (4) various other reports, depending on the specifics of the arrest. Next, the officer is to complete the Corporation Counsel case jacket, including the defendant’s name, charge(s), arrest and papering dates, and release information and court date. Next, the officer is to meet with an ACC to paper the case. Because mornings are the busiest times for papering, there are usually two ACCs papering cases until 11:00 a.m., and one ACC thereafter. 
Like USAO, Corporation Counsel papers arrests that result in lockups, citation releases, and bond releases. If an arrestee is locked up after an arrest, the time of the arrest determines when the case is papered. Typically, cases papered in the morning are from arrests that occurred after 3:00 p.m. the day before. Papering for citation cases is typically scheduled in advance. MPDC is to generate a list of citation release cases, and police officers go to Corporation Counsel on the specified date to paper the case. Once a citation case is papered, it is to be immediately forwarded to Superior Court, where all of the information is to be entered into the Superior Court information system (CIS). This is done in order to have the cases prepared to go into court on the date of the citation return. A daily list of individuals released on bond is also to be drawn up, and officers are required to appear at Corporation Counsel the following day to paper the case.

The ACC is to read the arrest information and the sworn statement of facts, and interview the police officer about the arrest. The ACC then determines whether there was probable cause for the police arrest and whether the elements of a case are present. At that point, the ACC can decide whether or not to paper the case. Currently, Corporation Counsel only requires supervisory review of complicated cases. According to Corporation Counsel officials, the majority of no-papered cases are not papered because Corporation Counsel believes that the elements of a crime have not been met. Corporation Counsel typically handles between 170 and 200 cases per week, and no-papers about 5 to 8 percent of these cases. Cases that are no-papered are concluded with the review by the ACC. The Corporation Counsel’s office is to report the no-papered cases to the clerk of the Superior Court, who assigns the case a number and records this information. Other than this transfer of information, no action is taken.

ACCs use certain guidelines to assist in effectively papering cases. One such guideline requires that ACCs include secondary charges in particular cases. For example, an ACC should include an operating while impaired charge with a DWI arrest, or a failure to exhibit a permit charge with a no permit charge, when appropriate. In these cases, a judge could find guilt in either one or the other of the charges, depending on the available evidence. After interviewing the officer, the ACC could ask the officer to supplement the statement of facts in the PD 163 to make it more complete. The ACC should make sure that the necessary papers and reports are present and in order in the case jacket. The ACC should also staple any additional information, such as a traffic ticket or the Miranda rights form, to the inside of the case jacket. The ACC then is to complete the remaining information on the case jacket, listing comments on the case based on the interview with the officer. For certain charges, the ACC should complete a worksheet that breaks down the elements of the case. Charges that have a separate worksheet are typically those for which Corporation Counsel handles a high volume of cases and include the following offenses: driving while intoxicated, disorderly conduct, hit and run, and reckless driving. There are specific questions for each charge that has its own worksheet.
The ACC is to ask the officer preprinted questions, write the officer’s answers on the worksheet, and include the worksheet in the case jacket for the prosecutor to use during trial. The officer swears to the information on the Gerstein statement, the factual narrative about the arrest, and signs it. Gerstein refers to a Supreme Court decision that outlines the requirements for a judicial determination of probable cause prior to the imposition of any “significant restraint of pretrial liberty.” This judicial determination requires a sworn statement by the arresting officer of the facts that are offered to establish probable cause to believe that an offense occurred and that the defendant is the person who committed it. The ACC, as an officer of the court, is to ask the officer if the statements in the arrest report narrative are true to the best of his/her knowledge. The officer swears to and signs the statement, and then the ACC is to sign the statement. The ACC should write the arrestee’s name and information on a log sheet that Corporation Counsel uses to track the outcomes of arrests, and complete and sign appropriate forms for each charge papered or not papered. Finally, the ACC is to sign the officer out on his or her PD 140. A series of time intervals, from 7:30 a.m. to 6:00 p.m., are preprinted on the PD 140 and the ACC uses these to indicate how long the officer was papering the case. The ACC is to place the case jacket in the corresponding tray for citation release or lock-up. Corporation Counsel employees who are going to Superior Court transport the case jackets to the Court Intake office for preparation for arraignment. After the officer has completed the papering process, s/he should return to the Court Liaison Division to “check-out.” The officer is to return the PD 140 to the Court Liaison Division clerk who verifies that the form has been completely filled out. The Court Liaison Division clerk is to time-stamp and return a copy of the PD 140 to the officer, and then check the officer out in TACIS. An arrestee has a right to an initial hearing before a court or magistrate “without unnecessary delay,” typically within 48 hours of his/her arrest. Arrestees are initially charged with an offense in the Superior Court Arraignment Courtroom 115. At the arraignment, the paperwork is presented to the commissioner, the individual is offered an opportunity to admit or deny the offense, and the commissioner determines the conditions of release. The attorneys may exchange any information or discovery packages, and the commissioner sets a trial date. While the CJA office appoints attorneys to both defendants charged with D.C. offenses and U.S. offenses, CJA attorneys typically represent the former—defendants charged with D.C. offenses. (See app. IV for a description of how attorneys are assigned to cases.) The Defender Service typically does not handle D.C. offenses or traffic violations unless the offense is associated with a more serious charge. Superior Court is permitted to hold an individual for 24 hours for the purpose of allowing Corporation Counsel to perfect the Gerstein statement. If Corporation Counsel does establish probable cause, the case moves forward with the court setting the trial date. If Corporation Counsel does not establish probable cause, the judge can dismiss the case. For citation release cases, at the time of arrest, the officer should give the arrestee a date to appear in Superior Court for arraignment. 
The arrestee is to sign the citation to appear (PD 799), which includes the court date. The PD 799 allows the commissioner to file a bench warrant if the defendant does not appear on the court date. If the PD 799 is misplaced, then the commissioner is to file a judicial summons requiring the individual to appear on another court date, typically 45 days in the future. If the defendant does not appear at the next scheduled date (after the commissioner files a judicial summons), the ACC may request a bench warrant and submit the Gerstein statement to the commissioner, who uses it to ascertain if there is probable cause to issue a bench warrant for the defendant’s arrest.

In most minor misdemeanor cases, the commissioner orders that the defendant be released on his/her personal recognizance. The defendant is to sign a notice, called a “buck slip,” for citation releases that are continued on personal recognizance without any conditions. For lockup cases processed at presentment/arraignment and released on personal recognizance, the defendant must sign a release order, which indicates that s/he is to appear in Superior Court at the next trial date. Third-party custody is an arrangement whereby a defendant is released to the custody of a third party, such as a relative, friend, employer, or an organizational custodian, which is a community-based organization that provides supervision services for released defendants. The third party pledges his or her best efforts to see that the defendant complies with the conditions of release and returns for further court appearances. Cash or surety bonds are very rare in cases prosecuted by Corporation Counsel. Money bonds are most often requested and imposed when an arrestee does not appear for trial. Upon appearance, various conditions of release are usually set to assure the offender’s appearance as required. If the offender is on probation or parole, or has a pending felony matter, the ACC may ask that the arrestee be held for 5 days to determine the arrestee’s probation or parole status, or compliance with conditions of release. If the arrestee presents a substantial risk of flight, the ACC may request the arrestee be held without bond.

Offenders charged with quality of life offenses who have no prior record may be eligible for Corporation Counsel’s quality of life diversion program. If the defendant successfully completes the program, charges are dropped. An individual charged with DWI can participate in the statutory DWI diversion program if the case meets the following criteria: the defendant had a blood alcohol level no greater than 0.2 percent; the defendant had no charges for which s/he cannot post and forfeit; there was no accident with an injury; the defendant had no prior arrests for DWI; and the defendant signed up for diversion within 5 days of the service of the eligibility notice (this time can be extended if a legitimate unforeseen circumstance prevented the defendant from signing up). Superior Court probation runs the Indecent Exposure Diversion Program, which was administratively created. Superior Court decided to implement the program, and all of the parties involved (defense counsel, USAO, and Corporation Counsel) agreed to it. To participate in the program, the defendant must (1) admit involvement and (2) have no prior arrests for indecent exposure. If the defendant successfully completes the program, charges are dropped. If, however, the defendant does not comply with the program, the case goes to trial.
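The statutory DWI diversion criteria above amount to a small set of yes/no checks plus a sign-up deadline. The following is a minimal, hypothetical sketch of that screening logic; the function name, the field names, and the way the extension for unforeseen circumstances is handled are illustrative assumptions for this sketch, not part of any agency system.

```python
# Hypothetical illustration of the statutory DWI diversion screening described
# above. Field and function names are assumptions made for this sketch.

def eligible_for_dwi_diversion(case):
    """case: dict with the facts relevant to the screening criteria."""
    checks = [
        case["blood_alcohol_level"] <= 0.2,           # BAC no greater than 0.2 percent
        not case["has_non_post_and_forfeit_charges"], # no charges ineligible for post and forfeit
        not case["accident_with_injury"],             # no accident involving an injury
        case["prior_dwi_arrests"] == 0,               # no prior DWI arrests
    ]
    signed_up_in_time = (
        case["days_to_sign_up"] <= 5
        or case.get("legitimate_unforeseen_circumstance", False)  # deadline may be extended
    )
    return all(checks) and signed_up_in_time

# Example: a first-time DWI arrestee with a 0.15 BAC who signs up on day 3.
example = {
    "blood_alcohol_level": 0.15,
    "has_non_post_and_forfeit_charges": False,
    "accident_with_injury": False,
    "prior_dwi_arrests": 0,
    "days_to_sign_up": 3,
}
print(eligible_for_dwi_diversion(example))  # True under the criteria as described
```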
There are basically two different categories of offenses adjudicated in Superior Court, felonies and misdemeanors. For calendar and case management purposes, the Court has established three felony calendars: (1) Felony I, (2) Accelerated Felony, and (3) Felony II calendars, and manages the types of cases differently. Misdemeanor cases can be distinguished as U.S. misdemeanors, D.C. misdemeanors, and traffic cases. All are technically misdemeanors; however, they are prosecuted by different offices and calendared in different ways. The typical court case flow for D.C. misdemeanors and traffic cases is discussed in this section, and the typical court case flow for felonies and U.S. misdemeanors is discussed in appendix IV. A status hearing is to occur after the initial court appearance. The following events may occur at a status hearing: A case can be dismissed. In the rare instance where a defendant should have been allowed to post and forfeit but was not, s/he may be given the opportunity to post and forfeit. The status hearing can be used to determine if cases that were diverted have been successfully completed or not. If an individual has successfully completed diversion, the case is dismissed. However, if the individual did not successfully complete diversion, the case is to be set for trial. The judge may set a date for the disposition hearing or a trial. A disposition hearing is similar to a status hearing. A defendant may enter a plea at the disposition hearing, or Superior Court may set a trial date. Trials for D.C. and traffic offenses are typically set for 45 to 60 days after arraignment. At the trial, the defendant may enter a plea, the case may be dismissed, or the court may render a verdict. A guilty verdict or plea changes the status of the offender regarding danger to community and flight risk. After a guilty verdict, sentence may be imposed. If the case is continued for sentencing, the judge may reexamine the conditions of release. The defendant could be detained, sent to a halfway house, released on bond, or released with a notice to appear on a date certain at the sentencing hearing. At the sentencing hearing, the judge is to determine the appropriate sentence based on available information. This sentence could include jail time. Defendants sentenced for D.C. misdemeanor and criminal traffic offenses are transferred to the official custody of the Department of Corrections and the D.C. Jail. Figure 3 illustrates the typical case flow process for adult misdemeanors prosecuted by Corporation Counsel. Appendix VI describes the typical case flow for offenses committed by children and prosecuted in the Superior Court of the District of Columbia (Superior Court) by the Office of the Corporation Counsel for the District of Columbia (Corporation Counsel). This case flow process description reflects process-related information as described to us by relevant agency officials. We did not verify the accuracy of the information provided to us. As such, we did not test to determine if the descriptions of the processes were functioning as was described to us. We recognize that there may be aspects of a specific case that make its processing unique and that there may be exceptions in the normal progression of the stages in the justice system. However, this description will focus on the case flow process for a typical case involving a child, as it progresses through the basic stages of the juvenile justice system. 
There are three primary paths of intake into D.C.’s juvenile justice system: (1) pre-petition custody orders (arrest warrants), (2) police arrests on scene, and (3) Persons in Need of Supervision (PINS) cases. Pre-petition custody orders and arrests typically involve delinquency offenses, and PINS cases involve status offenses. A delinquency offense, or a violation of a law, may come to the attention of police through either a reported crime or through direct observation of a suspected illegal act. The officer must then assess the incident, identify parties involved as witnesses, victims, and potential suspects, and possibly apprehend those suspects for purposes of further investigation. Depending on the incident, officers can either obtain a pre-petition custody order or arrest the suspect on the scene.

Police obtain a pre-petition custody order in some arrests. A police officer presents the case to Corporation Counsel for review, and an affidavit in support of a custody order is drawn up and signed. A custody order may be issued when a judge has determined on the basis of the sworn affidavit that there is probable cause to believe the child has committed an offense. The case is entered into the Washington Area Law Enforcement System (WALES) by court employees, and the police arrest the child based on the custody order issued by the judge.

Police arrest on scene, without a custody order, the majority of the time. When a suspected offense has been committed, the officer will decide the nature of the charge based upon the facts of the case. Before arresting the child, the officer must have probable cause to believe that the specific elements of the crime are met by the facts. Assuming these criteria are met, the officer will place the child under arrest and transport him/her to the Metropolitan Police Department of the District of Columbia (MPDC) Youth Processing Center, located at 6th Street and New York Avenue. The child is remanded to the custody of MPDC’s Youth and Preventive Services Division and booked for the offense committed.

A child may also become involved with D.C.’s juvenile justice system by committing a status offense. Children who commit status offenses are referred to PINS. The police may be involved in a PINS case, for example, if an officer picks up a runaway on the street. Parents and guardians may also file complaints with police alleging that a child is beyond control (ungovernability). Often, the Superior Court Social Services Division (Social Services) is involved with truancy offenses, and assumes the role of a police agency, or referring agency, for prosecution purposes.

All arrested children, except truant youths, are brought to the Youth Processing Center, where it typically takes a police officer about 4 hours to process a child. MPDC processing of a child is similar to that for adults; however, a police officer must remain with the child at all times. Based on the charge and the circumstances of the offense, MPDC may decide to dismiss the arrest with no charges, divert the arrest, release the child to a parent/guardian, or transport the child to intake screening. MPDC retains the authority to dismiss charges made at the time of arrest when there is insufficient evidence to charge. In these instances, the child is released from police custody, no formal charges are brought, and no arrest number is assigned. In cases that result in a dismissal with no charges, a Juvenile Contact Report (PD 379-C) is prepared to document the detention.
When police divert an arrest, the arrest is never referred to Corporation Counsel (and therefore never eligible for prosecution). MPDC’s Early Intervention Program is currently the only program used for police diversion cases. It is only used for children living in D.C.; children from other jurisdictions are released to their parent/guardian pending a court hearing. Children charged with felony, narcotics, and weapons offenses are not eligible to participate. A child may be released to the custody of a parent/guardian with a signed notice to return to the intake unit for screening/preliminary investigation. If released, the child must return for an intake screening typically within 4 days. If MPDC chooses not to release the child, MPDC transports the child to Superior Court for screening by Social Services. Depending on the time of day and date of arrest, the child may be brought to one of two different locations in Superior Court. When court is not in session, MPDC transports the child to a holding facility located in the basement of the Superior Court Building B (Central Processing Facility). If court is in session, MPDC transports the child to the juvenile cellblock at Superior Court. The intake of a child, physically occurring at either the Central Processing Facility or Superior Court, begins with receipt of the police complaint (PD 379) alleging delinquency or PINS behavior. Admission procedures are handled by the Department of Human Services’ Youth Services Administration when Superior Court is closed, or the U.S. Marshals Service if Superior Court is open. Social Services’ Central Processing team conducts an initial screening to determine what action should be taken in a child’s case. During this screening, the team reviews the child’s social and criminal history, family situation, and circumstances pertaining to the charge. When court is not in session, the screening occurs at the Central Processing Facility, and the team also determines whether the child should be released to a parent/guardian or held for an initial hearing. Children who are released to a parent/guardian from the Central Processing Facility are given notice to return in 3 to 5 days for a preliminary investigation interview with Social Services. If the determination is made to release the child, but a parent/guardian cannot be contacted, the child is transported to Harambe House, a nonsecure detention facility. Harambe House will try to contact a parent/guardian or bring the child home. If Harambe House cannot contact a parent/guardian, they will house the child and transport him/her to court for the initial hearing. Children who are held for an initial hearing are transported to Oak Hill Youth Center if there is sufficient time to transport the child back to Superior Court for the initial hearing. If there is not enough time, the child is held at the Central Processing Facility until transported to the Superior Court at approximately 7:00 a.m. Following the initial screening, a probation/intake officer assigned to the case reviews all information, interviews the child and the parent(s)/ guardian(s) when possible, and contacts pertinent members of the community who may provide additional information. The probation/intake officer then delivers a recommendation on whether or not to petition the case to Corporation Counsel and prepares a report to be presented to the court orally at the initial hearing. 
The probation officer’s report provides recommendations for pretrial release status, which may include pretrial detention, shelter care, community-based placement, or release to the custody of parents or guardians pending trial. “Papering” refers to the documents (papers) needed, as well as the process, for filing a petition against a child in Superior Court. Corporation Counsel decides for each case whether to file a petition, (i.e., paper the case). For each papering decision, Corporation Counsel requires MPDC officers who are knowledgeable about the facts of the case to appear with any witnesses. For children arrested when court is not in session, the police officer is required to come to Corporation Counsel at 8:00 a.m. the next morning, excluding Sundays, to paper the case. The Corporation Counsel papering attorney interviews the officer to determine if a crime was committed, if the child committed the crime, and if Corporation Counsel can prove that the child committed the crime. At this time, a decision is made whether to transfer the case to adult court, to no-paper the case (to not file a petition), or to paper (file a petition) and forward the case. Corporation Counsel reviews the Social Services recommendation before deciding whether to bring charges. The decision to transfer an individual to adult criminal court occurs at papering. There are two mechanisms that allow a child to be prosecuted in an adult criminal court: (1) prosecution of certain offenses and (2) judicial transfer. The Office of the U.S. Attorney for the District of Columbia (USAO) has discretion to prosecute, as an adult, individuals who are at least 16 years old and who are charged with certain offenses (i.e., murder, first-degree sexual abuse, first-degree burglary, armed robbery, or assault with intent to commit any of those offenses). Under statute, USAO’s acceptance of jurisdiction means that the individual is to be considered an adult and prosecuted in the adult criminal system. Operationally, after an arrest on scene, police complete both a child and an adult set of paperwork. As a practical timesaving matter, the police officer typically brings the arrest paperwork to Corporation Counsel first for review. According to Corporation Counsel officials, Corporation Counsel generally has a good understanding of which cases USAO will accept. After Corporation Counsel review, USAO decides whether they will accept jurisdiction. Judicial transfer is the second mechanism by which children can be prosecuted in the adult criminal system. If certain conditions are met (i.e., the individual is at least 15 years old and the alleged offense is a felony) Corporation Counsel can initiate procedures to transfer the child to adult criminal court. If Corporation Counsel decides to request a transfer of the case, a transfer hearing occurs. Cases may be “no-papered” for several reasons. Cases may be “no-papered” if Social Services and Corporation Counsel determine that the case is not suitable for prosecution. In these instances, the case is closed and the child is released without further court action. In some circumstances, Corporation Counsel may not be able to “prove” otherwise provable cases because of certain practical issues. For example, Corporation Counsel officials noted that Corporation Counsel does not have power to compel testimony prior to trial (i.e., they do not have pretrial subpoena power). 
In instances where Corporation Counsel needs additional time to investigate their case and to establish probable cause, if Corporation Counsel shows good cause for why they cannot file a petition, they can request up to a “5-day hold” on the filing of the petition. Corporation Counsel officials described that an example of good cause would be if a witness was in the hospital and could not immediately be interviewed. In certain cases, court diversion may result from an agreement between Corporation Counsel, Social Services, and the child. At the initial screening interview, Social Services may determine that the child should be diverted from further judicial processing. Social Services would notify Corporation Counsel to determine if the child met the guidelines for diversion. For example, Corporation Counsel and Social Services might agree that a first-time offender of a minor crime could be successfully rehabilitated without bringing charges. The child would be allowed to enter a diversion program, for which Social Services would determine the program requirements, for 6 to 12 months. Corporation Counsel would proceed to put the case together as if it was going to be papered; however, if the child successfully completes the program, no charges are brought. If the child does not successfully complete the program, Corporation Counsel could then bring the charges. If Corporation Counsel and Social Services agree to file a petition (the case is papered), the case is forwarded for an initial hearing in the new referrals court. The initial hearing is the child’s first appearance in a courtroom following his/her arrest and intake interview. Corporation Counsel and Social Services make their recommendations regarding the pretrial placement of the child. The presiding judge considers their recommendations, in addition to those made by the defense attorney, in determining whether to approve a consent decree, detain the child, or release the child to a parent/guardian. The initial hearing usually occurs within 3 days of the preliminary investigation interview with Social Services for children released from the Central Processing Facility. For children held after the initial intake, the initial hearing usually occurs after papering (later the same day, or early the next morning). If the child is detained, the initial hearing must occur within 24 hours, except if the child is arrested after the cut-off time on Saturday, then the initial hearing is to occur on Monday. Drug Court qualifications may be addressed at the initial hearing. If the child is found eligible the case will be set for a status hearing within 2 weeks on the Juvenile Drug Court calendar. A consent decree is a court-approved agreement between the involved parties through which the child is placed under court supervision for 6 months. According to a Corporation Counsel official, consent decrees are offered to slightly more needy children or for slightly more serious (but nonviolent) offenses. For a consent decree, Corporation Counsel would file the petition, and the court would postpone the hearing for 6 months. For that time, the consent decree program operates like a diversion program, with Social Services monitoring the child’s progress to see if s/he is in violation of the consent decree. If the supervision is successfully completed, the case can be closed without an adjudication of delinquency. 
If the child does not abide by the conditions, the program time can be extended, or the case can be reinstated and set for a status hearing and/or for trial. For both court diversion and consent decree cases, a child is assigned a probation officer. According to a Corporation Counsel official, Social Services regularly presents Corporation Counsel with requests to reinstate petitions. Also, Corporation Counsel often finds out if a child is not abiding by her/his conditions when s/he violates a consent decree by picking up a new charge. If the judge decides to detain the child instead of releasing him/her to a parent/guardian, the probable cause/detention hearing will occur at the initial hearing. Corporation Counsel must present evidence to support the allegations found within the petition. If based on the evidence presented the judge does not find probable cause to support further detention, the child is immediately released pending trial. If a child is detained, his/her trial must be scheduled within a 30-day period and the child may be placed in either a secure or nonsecure setting (i.e., shelter care or Oak Hill). For those children released into the custody of a parent/guardian, the judge determines to whom the child is to be released and the conditions of release. Prior to being released, the child is given notice to return to court for additional proceedings. Typically, a status hearing occurs a couple of weeks after the initial hearing; however, if the child is detained the status hearing occurs within 5 days. Corporation Counsel has 30 days to try a case; however, the time may be extended for a total of 45 days for serious offenses (such as murder or first-degree sexual assault). A detained child may also be held for additional 30-day periods, if good cause is shown for each extension. At the status hearing, a trial date will be set. Also, any pretrial issues with the case may be resolved. A plea offer may be extended. Motions or discovery questions may be raised; however, motions are usually, but not always, heard the day of the trial. With few exceptions, Corporation Counsel prosecutes cases under the same rules associated with the adult criminal court. Children have the same constitutional guarantees as adults; there are Fourth, Fifth, and Sixth Amendment motions; and the same rules of evidence apply. There are some differences. In adult and delinquency cases, the standard of proof for a conviction/ involvement is proof beyond a reasonable doubt; in PINS cases, the standard of proof is preponderance of evidence. In addition, there are no jury trials for children. In delinquency and PINS cases, the case is presented to a judge. The child can be found not involved (i.e., not guilty); the child can be found involved (i.e., guilty); or the case can be dismissed after adjudication. Pursuant to D.C. Code 16-2317(d) and Super. Ct. Juv. R. 48(b) a case may also be dismissed at the bench trial, or at any other hearing, because a judge finds the child is not in need of care or rehabilitation. The judge determines whether care or rehabilitation will be helpful to the child. The child may not need services because s/he is already receiving services from another agency or due to a prior disposition. Children found to be not involved are released in that case. For children found to be involved, the disposition/sentencing date is set. The court may order preparation of an in-depth social summary report prior to the disposition of the case to aid in sentencing. 
If the child is detained at Oak Hill or in shelter placement past adjudication, the disposition hearing must be held within 15 days. The purpose of the social summary report, which results from a predisposition investigation conducted by Social Services, is to determine the circumstances influencing the child’s behavior in order to arrive at an appropriate disposition. The social summary report also contains a recommendation from Social Services as to whether the child can function in the community, and if so, under what conditions of supervision. The disposition report includes prior involvement, dispositions, status of compliance, family history and status, mental health issues, substance abuse issues, educational status, pretrial summary, and recommendations. Typically, the first disposition hearing is 15 days after the bench trial if the child is securely detained. If the child is not detained, the disposition hearing is typically 5 to 6 weeks later. The disposition hearing is the sentencing hearing, at which a judge determines whether a child found involved in an offense at his/her trial is in need of care and rehabilitation and, if so, what the treatment plan should be. The disposition could be probation for 1 year, which may be extended, or commitment to the Youth Services Administration until the child is 21 years old.

If the disposition is probation, the child remains under the jurisdiction of the court. If applicable, the court would be responsible for referring a child to needed services. The majority of adjudicated children are given probation. Typically, children placed on probation are those deemed by a judge as not posing a threat to the community and who show potential for rehabilitation. Fines, restitution, and community service are possible conditions of probation, but these conditions are rare given the financial status of the child and his/her immediate custodian. In addition to prohibiting the commission of subsequent violations, conditions of probation may also include curfews, carrying an attendance card, individual counseling, family counseling, drug tests and treatment, electronic monitoring, intensive supervision, and reporting to a probation officer.

Revocation petition. If the probation officer alerts Corporation Counsel that the child is suspected of violating his/her conditions of probation, or if the child is arrested on a new charge, Corporation Counsel can file a petition to revoke probation (a revocation petition). A Corporation Counsel revocation petition can, but may not necessarily, result in the child’s disposition/sentence being modified. Once the court receives notice of a probation violation, the court will notify counsel and schedule a probation revocation hearing. If the child is found by the court to have violated probation, the court may extend the period of probation, modify the terms and conditions of the probation order, or enter certain other disposition orders. If the child completes probation successfully, the probation case is closed and the court record is updated to reflect the successful completion of probation.

If the disposition is commitment, the Youth Services Administration assumes jurisdiction of the child. Typically, children who at the time of disposition are judged as posing threats to the community, or as requiring a more structured setting in order to be rehabilitated, are committed to the Youth Services Administration.
A judge may sentence a child to commitment in the Oak Hill Youth Center, a group home, or a residential care setting. Oak Hill Youth Center is a secure facility and has capacity for about 170 children. A majority of the children at Oak Hill are pending trial. Group homes are nonsecure, local facilities with staff supervision that house committed children. Residential care placements are out-of-home placements in facilities located out of the D.C. area. There is a range of facilities to which children are sent, and the physical plant of the facilities can resemble aspects of college campuses, hospitals, or prisons. The child is supposed to receive whatever therapy or services prescribed in the Social Services plan. After the period of commitment is completed, a child may be placed on after care status, which is monitoring comparable to parole in the adult system. Figure 4 illustrates the typical case flow process for offenses committed by children and prosecuted by Corporation Counsel. This appendix provides a detailed description of the method of processing cases from arrest through initial court appearance in the District of Columbia (D.C.), including coordination issues associated with the process. We selected this process as a case study to illustrate the coordination problems that the D.C. criminal justice system faces with its mix of D.C., federal, and federally funded D.C. agencies. This appendix also includes descriptions of case processing in Philadelphia and Boston— two large East Coast cities that recently revised their methods of processing cases from arrest through initial court appearance—and discusses the potential application of these cities’ experience to D.C. Processing a case from arrest to initial court appearance is a multistep process that may involve the Metropolitan Police Department of the District of Columbia (MPDC), the Office of the Corporation Counsel (Corporation Counsel), the U.S. Attorney’s Office for the District of Columbia (USAO), U.S. Marshals Service, District of Columbia Pretrial Services Agency (Pretrial Services), Public Defender Service for the District of Columbia (Defender Service), and Superior Court of the District of Columbia (Superior Court). MPDC is responsible for arresting and booking the arrestee, which includes interviewing the arrestee, completing the standard evidence and arrest paperwork, completing a background check, fingerprinting the arrestee, and determining if the arrestee is eligible for release pending initial court appearance. After arrest and booking, prosecutors, either USAO or Corporation Counsel, depending on the offense, are to determine whether to charge the arrestee with a crime. In D.C., prosecutors require a police officer who is knowledgeable about the facts of the arrest to physically report to the prosecutor’s office for “papering.” In D.C., papering is the stage of the charging process at which officers present their arrest reports to a prosecutor and explain the circumstances of the arrest. For each arrest, prosecutors then determine whether the case should be prosecuted (“paper” the case) or not (“no-paper” the case). Both USAO and Corporation Counsel require officers to appear for a face- to-face meeting with prosecutors to paper cases. However, we focused on USAO cases because they are more numerous, typically more complicated, and require significantly more officer time. In 1998, USAO cases (felony and misdemeanor) constituted 64 percent of cases brought to Superior Court for disposition. 
Additionally, as shown later in table 7, the cases prosecuted by USAO accounted for 87 percent of the police hours recorded for papering—about 20 of the 23 full-time staff years expended during 1999 for papering. The following steps are to occur for each arrest that police officers present to USAO for papering. Some of these steps are clerical in nature and would not necessarily have to be performed by police officers. The officer is to check in at the Court Liaison Division at MPDC Headquarters, where s/he is to complete paperwork documenting her/his required court appearances for the day, check in with a Court Liaison Division clerk, and collect the arrest paperwork necessary to charge an arrestee with a crime. The officer is to go to the USAO Intake Office, which is open Monday through Saturday from about 7:30 a.m. to 5:30 p.m. At the Intake Office the officer is to photocopy the arrest paperwork and assemble the USAO case jacket containing all of the relevant police paperwork, while USAO is to complete a criminal record check on each arrestee. When the criminal record check is complete, the officer is to meet with a screening attorney who is to review the arrest paperwork and determine whether to prosecute the case. The officer is to collect a Superior Court case jacket from the Superior Court representative situated in the USAO Intake Office. The officer is to then meet with a second attorney (papering attorney) who is to complete the papering process. If the screening attorney determined to paper the case, the papering attorney is to complete the USAO case jacket, interview the officer to obtain additional information about the case, and prepare the charging document and other paperwork required for prosecution. If the screening attorney determined to no-paper the case, the papering attorney is to complete a brief form indicating that USAO has declined to prosecute the case. The officer is to return to the screening attorney who is to review the information, sign the charging document, if applicable, and sign the officer’s timesheet to indicate the length of time the officer has been in the Intake Office. The officer is to return to the Superior Court representative to drop off both case jackets and to swear that the statement of facts in the arrest report and the charging document are true. After the officer is finished at the Intake Office, s/he is to return to the Court Liaison Division to submit his/her timesheet to the Court Liaison Division clerk. Several agencies, other than MPDC and USAO, are involved in processing a case prior to the initial court appearance. All arrestees who are locked up after arrest are brought to the U.S. Marshals Service cellblock prior to their initial court appearance. At the U.S. Marshals Service cellblock, Pretrial Services conducts drug tests and interviews arrestees, examiners from the Criminal Justice Act (CJA) office interview arrestees to determine their financial eligibility for indigent defense counsel, and a defense attorney may interview arrestees. The arrestee’s initial court appearance is conducted at Superior Court, which conducts these hearings from about 10:00 a.m. to about 7:00 p.m., Monday through Saturday. There are several problems with D.C.’s current method of processing cases from arrest through initial court appearance. A principal effect of the current method of processing cases is that it reduces the number of officers available for patrol duty. 
For example, an on-duty officer who makes an arrest during the beginning of his or her daytime 8-hour shift could possibly spend the remaining portion of that shift processing the arrest (including meeting with the prosecutor). Problems with the initial stages of case processing include (1) forms that must be completed manually, (2) delays that may occur as a result of processing the paperwork and transporting it among different locations, and (3) prosecutors’ requirement that officers must appear to paper the case. In order to document an arrest, officers are required to complete several forms, including an arrest report, an incident report, a court notification report, a Miranda rights form, property or evidence reports, and any reports specific to the arrest situation. Most forms contain a similar set of basic information about the incident, the arrestee, or the arresting officer (e.g., time of arrest, arrestee’s name, and arresting officer’s name). Because the forms are not automated, officers must either write or type this basic information multiple times to complete the documentation required for a single arrest. For example, to document a drug arrest in which narcotics were seized and property was removed from the arrestee, an officer would have to complete 10 forms, write or type his/her name 8 times, the arrestee’s name and charges 5 times, and the arrestee’s full address and social security number 4 times. In contrast, if the forms were linked in an automated system, an officer would simply have to type such information once and relevant fields in each report would automatically display the required duplicative information. Reentering the same information several times on different forms increases the opportunity for entry error with the result that the same information, such as the arrestee’s social security number, may not be identical on all of the forms. Paperwork problems or the physical movement of arrestees may delay case processing. Arrest paperwork may be misplaced as it is physically transported between agencies, and the initial court appearance may be delayed because some of the required paperwork is missing. Officers are to transport the arrest paperwork for arrestees who are locked up after arrest from the police district station where it originates to MPDC’s Central Cellblock (CCB). The paperwork is then to be transferred from CCB to the Court Liaison Division where officers are to collect it before going to the prosecutor’s office. There is no electronic mechanism, such as a connected automated system, for transferring arrest paperwork from MPDC to the appropriate prosecuting office. The following paperwork, none of which is submitted electronically, is to be presented to the hearing commissioner at the initial court appearance: (1) prosecutor’s case jacket with MPDC arrest paperwork and charging documents, (2) Pretrial Services bail report outlining release recommendations, (3) Superior Court case jacket, and (4) a document specifying the assignment of defense counsel. In addition, the arrestee is to be present at the initial court appearance, which in a majority of cases requires involvement from both MPDC and the U.S. Marshals Service. Arrestees who are locked up after arrest can be detained at one of MPDC’s district stations or at the CCB, before being transported by MPDC to the U.S. Marshals Service cellblock located in the basement of Superior Court. The U.S. Marshals Service then is to bring arrestees from the U.S. 
Marshals Service cellblock to the courtroom for their initial court appearance. The longer the process from arrest to initial court appearance, the longer arrestees who are locked up pending their initial court appearance must remain in detention. Because charging decisions are made only during the day, depending on the time of arrest, arrested citizens may be locked up overnight before the initial court appearance at which probable cause issues are addressed. Additionally, arrestees whom USAO has decided not to charge with a crime must still be presented before a commissioner prior to being released. If, for instance, an individual is arrested for a felony offense at 4:00 p.m. on a Saturday afternoon, s/he would typically not have an initial court appearance until Monday at 10:00 a.m. If USAO declined to paper the case, the arrestee would have been locked up for over 36 hours before being released by a commissioner. Before a charging decision is made, both federal (USAO) and D.C. (Corporation Counsel) prosecutors require that an officer knowledgeable about the facts of the arrest meet with an attorney for papering. The papering requirement means that officers must spend time at the prosecutor’s office, part of which is devoted to clerical duties. On-duty officers who make arrests between 3:00 p.m. and 7:00 a.m. are required to meet with prosecutors the morning after an arrest to paper the case. All off-duty officers who appear for papering receive a minimum of 2 hours of compensatory time. In calendar year 1999, MPDC’s database recorded 146,543 officer appearances in criminal proceedings begun in that year (see table 5). More than half of these total appearances were for trial proceedings, and about 15 percent were for papering appearances. Of the recorded 22,307 papering appearances, 17,651 (about 79 percent) were for meetings with USAO prosecutors. For calendar year 1999, MPDC reported that officers spent 404,217 hours in appearances for criminal proceedings begun in that year, of which 47,810 hours (about 12 percent) were for papering appearances—the equivalent of about 23 full-time officer staff years (at 2,080 hours per staff year). The average reported time per papering appearance was 2.1 hours, and the median was 1.7 hours. Actual hours per papering appearance ranged from 0.02 to 12.4 hours (see table 6). Off-duty officers who were recorded as appearing for less than 2 hours would receive 2 hours of compensatory time for their appearance. Table 7 shows data for MPDC officer appearances separately for cases prosecuted by Corporation Counsel and USAO. As shown in the table, the cases prosecuted by USAO accounted for 87 percent of the police hours recorded for papering—about 20 of the 23 full-time staff years for this activity. Using an MPDC sworn officer’s average salary, 23 officer years is roughly equivalent to about $1,262,000. It would take more than 23 additional officers to replace the duty hours devoted to the meetings with prosecutors. This is because (1) an officer is available for duty only a portion of the entire 2,080-hour work year, which includes vacation and training time, and (2) the data do not take into account that off-duty officers who appeared for less than 2 hours actually received 2 hours of compensatory time for their papering appearance. According to USAO officials, the current papering process is critical for USAO to make an initial charging decision correctly.
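The staff-year and salary figures cited above follow from straightforward arithmetic. The short sketch below simply reproduces that calculation; the average sworn-officer salary it uses (about $54,870) is an assumption backed out of the $1,262,000 figure, not an official MPDC number.

```python
# Illustrative sketch only: reproduces the papering staff-year and cost figures above.
# The average salary is an assumption implied by the ~$1,262,000 figure,
# not an official MPDC value.

PAPERING_HOURS = 47_810        # reported officer hours spent on papering in 1999
HOURS_PER_STAFF_YEAR = 2_080   # full-time work year used in the report
ASSUMED_AVG_SALARY = 54_870    # dollars per sworn officer per year (assumption)

staff_years = PAPERING_HOURS / HOURS_PER_STAFF_YEAR           # about 23
approximate_cost = round(staff_years) * ASSUMED_AVG_SALARY    # about $1,262,000

print(f"Papering time: {staff_years:.1f} staff years")
print(f"Approximate salary equivalent: ${approximate_cost:,}")
```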
Making an initial charging decision correctly benefits (1) USAO by allowing them to more effectively prosecute “good” cases; (2) arrestees by ensuring that individuals are not inappropriately charged with a crime; and (3) the criminal justice system by allowing USAO to screen out “poor” cases, which would languish in the system consuming resources from everyone if they were not weeded out. USAO cannot make an initial charging decision correctly if it does not have all of the information about the arrest. USAO requires that officers physically appear to paper the case in order to gather information about the arrest that may be missing from the arrest reports, inaccurate, or corollary to information recorded in the reports. In the past, MPDC has responded to USAO concerns about the quality of arrest paperwork by conducting report writing training sessions for sergeants, requiring all officers to take annual in-service report writing training, and adopting the use of a new form to document officers involved in an arrest. Prosecutors—federal and nonfederal—generally have considerable discretion in selecting which cases they will prosecute. According to data USAO provided to MPDC, USAO declined to file charges—did not paper— 3,270 cases during the period from November 1999 through June 2000. Of the 3,270 cases in which USAO declined to file charges, USAO listed police-related problems, including paperwork problems, in 8 percent of the cases; problems with witness or victim cooperation or credibility in 20 percent; problems with evidence or proof in 70 percent; and a variety of other reasons in 2 percent of the cases (see table 8). The two most frequently cited reasons for not filing charges—”insubstantial injury or amount” and “insufficient evidence to prove case”—together accounted for about 40 percent of the cases that USAO declined to prosecute. Within its prosecutorial discretion, USAO could decide not to file charges for reasons such as these, even though the police paperwork may have been complete and accurate. Both USAO and MPDC officials said that the paperwork submitted to USAO for charging decisions has been of uneven quality. However, there were no data to compare the quality of the paperwork for misdemeanors and felonies. The purpose of the paperwork that police present to USAO attorneys is to provide evidence that there is probable cause to (1) believe that a crime has been committed and (2) that the person(s) arrested committed the crime. Police documentation could provide evidence that establishes probable cause, but prosecutors may decline to file charges because, for example, they do not believe the evidence would be sufficient to prove the arrestee’s guilt “beyond a reasonable doubt,” the standard required for conviction of a crime in court. The prosecutor’s goal is to prevail in those cases selected for prosecution. As part of their duties, police officers in all jurisdictions generally must make appearances to provide information about cases at a number of criminal justice proceedings, including grand jury testimony, preliminary hearings, pretrial witness conferences, and trials. In addition to these appearances, USAO and Corporation Counsel prosecutors require that MPDC officers personally meet with prosecutors in order to make a charging decision for all cases. This requirement, particularly for misdemeanors, appears to be unusual. In December 1997, Booz-Allen and Hamilton (Booz-Allen) surveyed 51 jurisdictions to determine how they typically processed misdemeanors. 
Booz-Allen staff contacted either the District Attorney’s Office and/or the police department and asked how and to what extent officers were involved from the point of arrest to trial, and whether officers were required to formally or informally meet with prosecutors before appearing for trial. In 30 of the 38 jurisdictions that responded to the survey, officers were not required to meet with prosecutors until court (i.e., trial), and in 3 cities, officers were not required to appear until the preliminary hearing. Four cities required officers to meet with prosecutors on a case-dependent basis, and one city was in the process of changing its charging procedures. Philadelphia and Boston, two large urban jurisdictions we visited, do not typically require face-to-face meetings during the charging process. Philadelphia and Boston are using automation to improve the efficiency of their processing of arrestees. Both cities have developed cooperation and coordination among criminal justice agencies to implement their automated systems. In contrast to D.C., Philadelphia has implemented a highly automated system to process arrestees. During 1996, the Philadelphia Police Department, the Philadelphia County District Attorney’s Charging Unit (Charging Unit), and the Philadelphia Municipal Court (including the Pretrial Services Agency) jointly began implementing two key automated programs—a software system and videoconferencing—in an effort to improve the processing of arrestees. Philadelphia’s automated system for processing arrestees resulted from collaboration of, among others, the Philadelphia Police Department, the District Attorney’s Office, Municipal Court, and Pretrial Services officials. The group was formed in response to lawsuits filed to expedite arrestee processing and was led by the President Judge of Municipal Court. The group’s collaboration resulted in the creation of the Preliminary Arraignment System (PARS). Officials told us that the group continues to meet weekly to review arrestee processing statistics and discuss possible improvements to the system. PARS is a Windows-based computer software program that automates the paperwork and defendant processing required from the point of arrest to initial court appearance. PARS allows the Philadelphia Police Department, the Charging Unit, Municipal Court, and Pretrial Services to send and receive all paper-based information electronically and instantly. The system was also designed to track the defendant’s physical location and length of time in the system. Six interfaces between PARS and other computer systems (e.g., other criminal record repositories) allow for continual data transmission and sharing. PARS was developed in conjunction with Municipal Court’s implementation of videoconferencing for the initial court appearances. The courtroom, which operates 24 hours a day, 365 days a year, uses video cameras, monitors, and software that make it possible to conduct live hearings. Videoconferencing eliminates the need to transport prisoners to a central location—the arraignments are held at one of eight booking stations throughout the city. The defendant speaks through a telephone and can see the judicial officer, the prosecutor, the public defender, and him/herself on a quad-screen monitor. A telephone handset located by the public defender allows counsel to interrupt the audio speaker system to either silence, or have a private consultation with, the defendant. 
Also, a police officer standing near the defendant can hear the process through an external speaker. Unlike D.C., Philadelphia does not require officers to meet face-to-face with charging attorneys to reach a charging decision. Charging attorneys review police paperwork submitted electronically via PARS. If charging attorneys need additional information, they will contact police for clarification or missing information. If the Charging Unit’s decision not to charge the arrestee was based on incomplete or unclear information in the arrest report, police will be notified and have the opportunity to fix the police report and resubmit the case by way of an affidavit. Philadelphia criminal justice officials noted that PARS and video arraignments have improved the processing of arrestees. Despite a significant increase in the number of arrests processed during the past few years, the case processing time from arrest to release on bail or detention in jail has not increased proportionally. PARS has also dramatically reduced duplication because once a case is charged, the system will not allow another person to accidentally recharge the same case. PARS has reduced data entry mistakes, which previously occurred when data were entered multiple times in multiple systems. In addition, videoconferencing has eliminated the need to transport prisoners to a central location, avoiding the problems and costs that the transportation created. As in Philadelphia, Boston has also turned to automation to improve the efficiency of its processing of arrestees. In the spring of 2000, the Boston Police Department, Boston Municipal Court (Court), and the Suffolk County District Attorney’s Office (District Attorney) implemented a pilot project designed to automate the charging process by electronically linking the three agencies. The Electronic Application for Criminal Complaint (EACC) system allows the Boston Police Department to electronically file applications for criminal complaints (i.e., charging documents) with the Court for review and acceptance prior to the initial court appearance. The EACC system contains all of the information that is typically available in paper form, including information about the defendant, the offense, the witness, and the complainant (i.e., the police officer in arrest-generated cases). EACC is also capable of providing information about whether the District Attorney, the clerk magistrate, and probation have reviewed a complaint. After arrest, the application for complaint and police reports are reviewed by a Boston Police Department duty supervisor and may be electronically submitted to the Court. Next, the application for complaint is reviewed by one of several prosecuting police officers stationed at the Court. The prosecuting officers either approve the application for complaint and forward it to the District Attorney or return the application for complaint to the duty supervisor/arresting officer for any needed corrections. Police officers who make the arrests are not required to attend face-to-face meetings to charge the case. In Boston, the clerk magistrate determines whether to charge a case. The District Attorney performs a screening function: it reviews the application for complaint and either (1) forwards the application to the clerk magistrate, (2) makes suggestions about changing the application and returns the application to the police supervisor of cases, or (3) rejects the application entirely.
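The routing just described—supervisor review, electronic submission, review by a prosecuting officer, and District Attorney screening—is essentially a status-tracking problem. The sketch below is a minimal illustration of that idea only; the status names and record fields are hypothetical and are not drawn from EACC's actual design.

```python
# Minimal sketch of status tracking for an electronic application for a
# criminal complaint, loosely following the routing described above.
# Status names and fields are hypothetical, not EACC's actual schema.

from dataclasses import dataclass, field

STATUSES = [
    "drafted_by_arresting_officer",
    "approved_by_duty_supervisor",
    "reviewed_by_prosecuting_officer",
    "screened_by_district_attorney",
    "forwarded_to_clerk_magistrate",
    "returned_for_correction",
    "rejected",
]

@dataclass
class ComplaintApplication:
    defendant: str
    offense: str
    complainant: str  # the arresting officer in arrest-generated cases
    status: str = "drafted_by_arresting_officer"
    history: list = field(default_factory=list)

    def advance(self, new_status: str, reviewer: str) -> None:
        """Record each review step so every agency can see where the application stands."""
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.history.append((self.status, reviewer))
        self.status = new_status

# Example: an application moving from the duty supervisor through District Attorney screening.
app = ComplaintApplication(defendant="J. Doe", offense="larceny", complainant="Officer Smith")
app.advance("approved_by_duty_supervisor", reviewer="duty supervisor")
app.advance("reviewed_by_prosecuting_officer", reviewer="prosecuting officer")
app.advance("screened_by_district_attorney", reviewer="assistant district attorney")
print(app.status)  # screened_by_district_attorney
```

A shared record of this kind is one way each participating agency could see, at any time, which reviews of a complaint have already occurred.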
If a case is forwarded to the clerk magistrate, s/he reviews the case to ascertain whether probable cause exists and ultimately decides whether to generate a criminal complaint. If the clerk magistrate issues a criminal complaint, an initial court appearance is held at the Court, Monday through Friday from 8:30 a.m. to 4:30 p.m. On Saturdays and Sundays, when the Court is closed, a representative from the clerk magistrate’s office is on call to review cases in which an arrestee is not admitted to bail and to make a probable cause and/or bail determination. Although the pilot was ongoing at the time of our visit, Court officials noted that EACC was already providing benefits. All involved agencies were able to see exactly what stage an application for a criminal complaint had reached. The system reduced the possibility of data entry errors because information was only entered once. Efficiency was improved because the complaint arrives at the Court prior to the arrestee, thereby reducing the amount of time the arrestee waits in lockup. The Boston and Philadelphia examples show the potential uses of automation and technology in improving efficiency and information sharing. Of course, any similar efforts in D.C. would need to reflect D.C.’s specific statutory and other case processing requirements. We contacted 15 agencies participating in or supporting the District of Columbia (D.C.) criminal justice system for information on current initiatives to improve the operations of the D.C. criminal justice system. We asked about each initiative’s goals, status, starting date, participating agencies, and results to date, if any, as of November 2000. We did not validate this information. In some cases, more than one agency provided information about a specific initiative, and individual agencies did not always agree on the particulars of a specific initiative. Table 9 details, by agency, those instances in which one agency disagreed with the information provided by another agency concerning an initiative. The information in this appendix provides a picture of the range of initiatives planned or under way. Some of these initiatives, such as those involving the transfer of prisoners from D.C.’s Lorton Prison to BOP custody, were mandated by the National Capital Revitalization and Self-Government Improvement Act of 1997 (D.C. Revitalization Act). Others were initiated by one or more agencies to address a specific aspect of D.C.’s federal criminal justice system. The following are summaries of the 93 current D.C. criminal justice system initiatives, arranged by the subject of the initiative (e.g., corrections, firearms). The lead agency for each initiative is highlighted in bold print. Goal: To help six neighborhoods in six different wards to overcome barriers to achieving the goals identified in the Mayor’s Plan for Building and Sustaining Healthy Neighborhoods. Status: Ongoing. Date started: Summer 1999. Participating agencies: City Administrator’s Office, Department of Consumer and Regulatory Affairs, Department of Employment Services, Department of Housing and Community Development, Department of Public Works, MPDC, Corporation Counsel, Office of the Deputy Mayor for Public Safety and Justice, and USAO. Results reported by participants: The data collected showed significant declines in many Part I crimes within the six neighborhoods. Goal: To provide legal services to indigent people in their own communities and to educate people about their legal rights and responsibilities.
This initiative is to include (1) workshops in neighborhood schools and community centers to educate at-risk youth; (2) workshops to teach community members how to deescalate encounters with law enforcement officials; and (3) efforts to encourage mediation and other alternative dispute resolution techniques to resolve matters within the community. Status: Planning and development. Date started: 1998. Participating agencies: Defender Service. Results reported by participants: The Defender Service has opened the community office and has begun to provide indigent criminal defense services. It has also conducted a number of presentations to inform the community of the services provided by the office. Goal: To combine the resources of Court Services, MPDC, and the community to enhance community supervision of adult offenders on probation or parole in D.C.; improve offender accountability; and develop community networks to solve problems and prevent crime. Status: Ongoing. Date started: November 1998. Participating agencies: MPDC, Court Services, and USAO. Results reported by participants: Court Services and MPDC launched the first partnership in Police Service Area 704 in Southeast D.C. in late November 1998. During calendar year 1999, they compared stated goals and objectives to actual results. Goal: To enhance responsiveness to the needs of the various communities served by USAO and improve the ability to tailor strategies and resource allocations to the particular crime problems in each of those communities. In its effort to achieve these goals, USAO has undertaken a reorganization, which has resulted in a structural overhaul of its Superior Court Division. In November 1999, USAO expanded the Fifth District pilot project to all seven police districts. The Assistant U.S. Attorneys (AUSAs) are responsible for prosecuting serious crimes in designated Police Service Areas within their respective police districts. In addition, AUSAs are expected to build and maintain communications and ties with the police who work in those districts and members of those communities. Status: Ongoing. Date started: November 1999. Participating agencies: MPDC and USAO. Results reported by participants: According to USAO, benefits of the new structure include increased AUSA familiarity with the crime problems in their particular neighborhoods, as well as the persons responsible for those problems and the witnesses and sources with information useful to the prosecution of those persons. This, in turn, allows AUSAs and their MPDC partners to better target resources on the prevention and prosecution of the particular crimes that plague those neighborhoods, and increases community confidence in the responsiveness of law enforcement to their needs. Goal: To plan and execute a comprehensive case realignment process by Police Service Area in order to enable the Community Supervision Officer to make individual neighborhoods, as opposed to an office, the place of supervision. Status: The initial plan is complete, and Court Services is currently making adjustments as necessary. Date started: November 1998. Participating agencies: Court Services. Results reported by participants: Too early to evaluate results. Goal: To identify, inspect, post, abate, and/or raze all nuisance properties within D.C. In addition, the task force has worked to eradicate dangerous and abandoned properties and to facilitate safer and cleaner neighborhoods.
NATF now functions as the Neighborhood Services Program’s primary response arm for critical nuisance properties within D.C. Status: Ongoing. Date started: December 1999. Participating agencies: MPDC, Fire and Emergency Medical Services, Department of Consumer and Regulatory Affairs, Department of Housing and Community Development, Department of Housing and Regulatory Affairs, Department of Public Works, Corporation Counsel, Office of Tax and Revenue, Office of the Deputy Mayor for Public Safety and Justice, and USAO. Results reported by participants: No information provided. Goal: To provide the public with an independent and impartial forum for the review and resolution of complaints against officers in an effective, efficient, and fair manner. OCCR is an independent agency functioning under the Office of the Deputy Mayor for Public Safety and Justice. Status: Ongoing. Date started: July 2000. Participating agencies: Corporation Counsel, MPDC, Office of the Deputy Mayor for Public Safety and Justice, and USAO. Results reported by participants: OCCR is currently in the process of securing staff, office space, physical resources, and creating a strategic plan. Goal: To reintegrate released offenders into the community so that they will become law abiding and productive members of society, thereby reducing crime and enhancing public safety. Status: Ongoing. Date started: July 2000. Participating agencies: D.C. Justice Grant Administration, University of the District of Columbia, Court Services, and Department of Justice (DOJ). Results reported by participants: Too early to determine. Goals: To (1) enhance the educational opportunities of Amidon Elementary School students through mentorship, tutorial, and other programs; (2) provide recreational, positive role-models, one-on-one mentorships, and educational opportunities for children at Garrison Elementary School through the Superior Court’s Elementary Baseball Program; (3) emphasize to youth throughout D.C. the devastating impact of drugs, firearms, and gun violence, and discuss and highlight positive alternatives and role models; (4) teach youths in all D.C. high schools, through the Superior Court Domestic Violence Initiative, about the impact of domestic violence and empower them to take effective steps to eradicate domestic violence from our communities; and (5) build strong partnerships with community-based organizations aimed at improving D.C.’s neighborhoods. Status: Ongoing. Date started: 1993 (Amidon); 2000 (Operations Ceasefire In-School Education Program). Participating agencies: MPDC; Superior Court, Bureau of Alcohol, Tobacco and Firearms (ATF); and USAO. Results reported by participants: Ceasefire Outreach Teams visited schools in the latter part of the 1999-2000 school year. The program will continue during the 2000-2001 school year. This year over 400 Amidon students were served by over 50 volunteers from USAO. Goal: To develop and assist in the implementation of a management reform plan for DOC to meet the requirements of the D.C. Revitalization Act. Status: The plan was completed on November 25, 1997. Date started: September 1997. Participating agencies: BOP; private agencies (e.g., National Council on Crime and Delinquency, Pretrial Services Resource Center); Pulitzer/Board and Associates; and Creative Management Service. Results reported by participants: No information was provided. Goal: To have all D.C.-sentenced felons classified (e.g., by appropriate security level) by October 1, 2000. Status: Ongoing. 
Date started: September 1999. Participating agencies: DOC and BOP. Results reported by participants: As of September 30, 2000, BOP had classified over 6,300 inmate files forwarded by DOC. BOP anticipates completing classification of the remaining files in March 2001. BOP continues to work with DOC to obtain the remaining files, which either need clarification or have not yet been received from DOC. All newly sentenced felons are being classified by BOP. To assist BOP in achieving this goal, DOC adopted and implemented BOP’s classification system, trained all case management staff, reclassified the entire felon inmate population, classified all new intakes, and forwarded the results to BOP. DOC delivered over 6,000 inmate files to BOP for review along with detailed medical information and separations data. DOC also screened all 4,000 inmates classified for pending court cases, significant medical issues, pending parole matters, or approaching release dates. This information has been forwarded to BOP. Goal: To close the Lorton Correctional Complex by December 31, 2001. Status: Ongoing. Date started: September 1997. Participating agencies: DOC, BOP, U.S. Marshals Service, and Corrections Trustee. Results reported by participants: Since the enactment of the D.C. Revitalization Act, DOC has closed six institutions (see table 10). As of March 15, 2001, the total remaining beds at the Lorton Correctional Complex was 2,066. Goal: To efficiently and effectively designate D.C. Code felons to BOP custody. Status: Ongoing. Date started: May 2000. Participating agencies: Corrections Trustee, Superior Court, BOP, Court Services, U.S. Marshals Service, and U.S. Parole Commission. Results reported by participants: In September 2000, BOP began scoring all newly sentenced D.C. felons. At the time of our review, BOP was designating existing DOC inmates who were medium security or below. BOP was primarily focusing on inmates expected to still be in custody as of December 31, 2001. As of April 2001, BOP is planning to begin designating high-security inmates. As of September 2000, Superior Court began requesting BOP designations for all newly sentenced felons. Goal: To receive information needed on a consistent basis to conduct hearings. To ensure that it does not release individuals to the community without knowing exactly who they are and what they have done, the U.S. Parole Commission requires each case to have a presentence report or another official document, which describes the confining offense behavior. The U.S. Parole Commission also requires a comprehensive arrest and conviction history as well as descriptions and dispositions of all cases involving serious assaults, whether they resulted in convictions or not. Status: Ongoing. Date started: October 1998. Participating agencies: U.S. Parole Commission, DOC, Court Services, Superior Court, MPDC, and Corrections Trustee. Results reported by participants: The U. S. Parole Commission’s initiative has resulted in enhanced cooperation among other criminal justice entities including MPDC, DOC, USAO, Court Services, Superior Court, and state and federal law enforcement agencies. This has enabled the U.S. Parole Commission to docket an increased number of parole cases in accordance with their parole eligibility dates and make informed release decisions. 
Goal: To have an effective parole monitoring and revocation mechanism that ensures public safety through (1) effective supervision and (2) the establishment of a swift but fair revocation process that imprisons all parolees who threaten public safety. Status: Implemented. Date started: August 1998. Participating agencies: U.S. Parole Commission. Results reported by participants: The revocation process and its speed have recently been the subject of some controversy. As of February 2001, a federal court class action lawsuit challenging certain aspects of the process was pending. Goal: To procure and award contracts to house sentenced D.C. felons in privately operated prisons in accordance with the D.C. Revitalization Act. Status: Ongoing. Date started: November 1997. Participating agencies: DOC and BOP. Results reported by participants: Contracts for two privately operated prisons have been awarded. The first contract is currently delayed pending the resolution of environmental and legal issues. Goal: To evaluate all sentenced felons to determine their impact on medical referral centers and to plan for an orderly transfer to BOP; provide DOC medical staff with parameters for processing general program inmates and information regarding tuberculosis criteria; and minimize the impact of infectious diseases upon BOP inmates and staff. Status: Ongoing. Date started: May 1998. Participating agencies: BOP and Corrections Trustee. Results reported by participants: BOP is continuing to work with DOC to identify inmates requiring BOP medical center referrals. BOP also worked with DOC in providing medically related information relevant to the transfer of DOC inmates into BOP custody. At the time of our review, over 700 charts had been reviewed. To assist BOP in evaluating the acuity of sentenced felons for placement in medical referral centers, DOC supplied the Corrections Trustee with a listing of acute and chronic classifications of various categories of medical problems, such as dialysis, HIV/AIDS, and tuberculosis. Goal: To transfer all DOC-sentenced felons to BOP custody by December 31, 2001, as required by the D.C. Revitalization Act, and to designate D.C. Code felons to BOP custody efficiently and effectively. Status: Ongoing. Date started: November 1999. Participating agencies: BOP, DOC, Corrections Trustee, Court Services, Superior Court, U.S. Marshals Service, and U.S. Parole Commission. Results reported by participants: Through February 13, 2001, BOP had accepted 4,291 D.C. inmates. Of these, 3,072 remained in BOP custody, including 276 inmates who were in BOP custody prior to passage of the D.C. Revitalization Act. Beginning on May 8, 2000, DOC and/or the U.S. Marshals Service prepared referral packages for BOP on newly sentenced D.C. Code Youth Rehabilitation Act, female, male, minimum-security, and low-security felons for designation and transfer to BOP. As of September 11, 2000, the U.S. Marshal’s Office for D.C. Superior Court began preparing referral packages on all newly sentenced D.C. Code felons. DOC provides jail credit, medical, and separation data to D.C. Superior Court and the U.S. Marshals Service on each of these referrals. Goal: To employ eligible, interested, and qualified DOC staff within BOP. Status: Ongoing. DOC was participating in the Priority Placement Consideration Task Force, which meets monthly to discuss processing/progress concerns, status updates, and planning for upcoming events.
These events include (1) career transition center open house, (2) job fairs, (3) on-site recruitment and interview sessions, and (4) workshops and seminars. Employees are provided with resume and employment application preparation, interview techniques training, and job search assistance to include on-line submissions. Date started: July 1997. Participating agencies: Corrections Trustee, DOC, Office of D.C. Personnel, Metropolitan Area Reemployment Services Center for D.C., Department of Employment Services, BOP, and Office of Personnel Management (OPM). Results reported by participants: BOP expanded the program to encompass all existing BOP correctional facilities. BOP has received a total of 33 applications, and had hired 4 persons affected by reductions in force and 16 others. Goal: To identify inmates with safety or security concerns and place them in BOP custody upon completion of their court matters. Status: Ongoing. Date started: April 1997. Participating agencies: BOP and DOC. Results reported by participants: BOP has accepted 71 cases referred by DOC. Goal: To educate D.C. and federal judicial branch personnel on BOP’s designation process, drug treatment and education programs, treatment and placement of Youth Rehabilitation Act offenders, and significance of pre-sentencing investigation reports and judicial recommendations. Status: Ongoing. Date started: June 2000. Participating agencies: BOP, Court Services, Federal Judicial Center, and U.S. Parole Commission. Results reported by participants: Results include: (1) a presentation to Superior Court judges regarding the transition and BOP policies and programs; (2) a tour of the Federal Correctional Institution in Fairton, NJ, for Superior Court and federal court judges; (3) a tour of the Federal Correctional Institution in Petersburg, VA, for Superior Court courtroom clerks; and (4) meetings with staff of the Federal Judicial Center to develop an agenda for training of D.C. and federal judicial branch personnel. Goal: To designate and transfer up to five high-security inmates per month from DOC to BOP custody. Status: Ongoing. DOC continues to submit detailed referral packages for BOP review, which include legal, behavioral, medical, treatment program, and separation information. Date started: March 2000. Participating agencies: BOP, Court Services, and DOC. Results reported by participants: According to BOP, it has accepted 32 high-security inmates, of which 21 have been accepted for transfer to the Administrative Maximum Security Facility in Florence, CO, or the U.S. Penitentiary in Marion, IL. DOC stated it had referred over 60 cases to BOP, of which 18 had been designated and transferred to either the Florence or the Marion facilities. Goal: To procure 400 additional community corrections center beds for use by D.C. inmates released from BOP custody. Status: Ongoing. Date started: September 1999. Participating agencies: BOP and the Interagency Detention Task Force. Results reported by participants: Since September 1999, BOP has contracted for 205 additional beds, and there are currently solicitations for 325 beds. Goal: To develop and implement a system to continuously assess program performance, internal controls quality, and mission accomplishment for all disciplines and offices in DOC. To achieve and maintain program accountability and prioritize program improvements by identifying areas that appear most vulnerable to waste, fraud, and abuse. 
Status: The policy revision, audit guideline development, and cross-training phases are ongoing. BOP continues to provide DOC staff with hands-on exposure to audit practices in a correctional environment. Approximately 140 staff had participated. Twenty to thirty additional staff will participate between now and the close of fiscal year 2001. In the summer of 1999, scheduled program-specific audits were canceled by DOC. In lieu of program-specific audits, DOC completed audits of the entire operations at the Central Detention Facility, the Central Treatment Facility, the Maximum Security Facility, and the Community Correctional Center. Date started: September 1998. Participating agencies: DOC, Corrections Trustee, U.S. Department of Agriculture Graduate School, Government Auditing Institute (training purposes only), and BOP. Results reported by participants: Although an official evaluation has not been conducted, Corrections Trustee staff try to identify areas needing improvement and communicate this information to the Office of Internal Controls Manager. One of the recurring deficiencies communicated by the Corrections Trustee was DOC’s failure to provide copies of audit reports for fiscal year 2000. Goal: To establish a multiagency committee to improve the coordination and logistical planning for various detention-related processes, such as dual jurisdiction cases, Superior Court’s felon designation and transfer process, and parole and halfway house inmate processing. Status: Ongoing. Date started: January 2000. Participating agencies: CJCC, DOC, Pretrial Services, Defender Service, Executive Office of the Mayor, Superior Court, Court Services, BOP, Corrections Trustee, U.S. District Court, Federal Public Defender Office, USAO, U.S. Marshals Service, and U.S. Probation Office. Results reported by participants: The backlog of inmate designations from Federal Court to BOP facilities has largely been eliminated. A new written policy on dual code cases was implemented in March 2000 by representatives of the U.S. Marshals Service, BOP, and the U.S. Probation Office. Also, in February 2000, USAO developed new procedures that eliminated duplicative and confusing case numbers for cases transferred from Superior Court to Federal Court. Goal: To provide a comprehensive written assessment of DOC case management practices. Status: The assessment has been completed; responses to recommendations are ongoing. Date started: March 1999. Participating agencies: Corrections Trustee, DOC, and BOP. Results reported by participants: An assessment report was submitted in April 1999 containing 33 recommendations. DOC has implemented 23 recommendations; 5 others are in progress; 2 were not implemented because the Lorton closure plan made the recommendations moot; and 3 were not adopted because DOC did not agree with them. Goal: To (1) establish a Community Corrections Office in D.C. to process all requests for designations and provide oversight for contract community corrections centers located in D.C., and (2) facilitate liaison with Superior Court, U.S. District Court for D.C., the U.S. Marshals Service, U.S. Probation Office for D.C., and Court Services. Status: Ongoing. Date started: March 2000. Participating agencies: BOP. Results reported by participants: Permanent offices were expected to be open by April 2001. BOP was continuing to work with other components to ensure open lines of communication. Goal: To assist DOC in developing an oversight program for its detention facilities. Status: Ongoing.
Date started: February 1999. Participating agencies: BOP, Corrections Trustee, and DOC. Results reported by participants: To gain practical application knowledge of BOP’s oversight process, selected DOC staff have actively participated in program reviews at various BOP institutions and Community Corrections offices. Goal: To reach a consensus on what is required in presentencing investigations to properly classify inmates. Status: Ongoing. Date started: May 2000. Participating agencies: Superior Court, Court Services, and BOP. Results reported by participants: BOP and Court Services have not yet reached a consensus on the requirements for the presentence investigation report. BOP and Court Services plan to continue working closely together to refine this document. Goal: To review and reform halfway house operations and related procedures to improve public safety and strengthen coordination, cooperation, and management among relevant criminal justice agencies. Status: DOC is responsible for notifying Superior Court of work release infractions of defendants placed in the pretrial/sentenced misdemeanants work release program at halfway houses. Pretrial Services is taking the lead to ensure that defendants are drug tested on a regular basis and referred to drug treatment when necessary. Pretrial Services also assists with communications between DOC and Superior Court, keeping Superior Court apprised of halfway house infractions, escapes, and noncompliance with conditions of release. Date started: March 1999. Participating agencies: CJCC, Corporation Counsel, Superior Court, DOC, MPDC, Corrections Trustee, Court Services, Defender Service, Pretrial Services, BOP, USAO, and U.S. Marshals Service. Results reported by participants: According to agency officials, the initiative has reduced the time to obtain a warrant for halfway house walkaways from 7 business days to 1 business day. During the fourth quarter of fiscal year 2000, 39 defendants in the pretrial work release program were found to be in need of treatment, and 33 of those defendants were placed in outpatient, inpatient, or detoxification treatment programs. By the close of the quarter, six defendants were still awaiting placement. Goal: To provide vital parole information by telephone to the general public, attorneys, caseworkers, and staff from other D.C. and federal agencies. Status: Ongoing. Date started: April 2000. Participating agencies: U.S. Parole Commission and Lucent Technologies. Results reported by participants: The system was developed to manage an immediate problem within the U.S. Parole Commission. A greater problem still exists with the management of calls among D.C. agencies and the U.S. Parole Commission. Goal: To develop a pilot videoconferencing system to allow victims and witnesses in D.C. criminal cases to provide information and participate in parole hearings without having to travel to the prison. Status: Ongoing. Date started: Fiscal year 1999. Participating agencies: Defender Service, DOC, BOP, and U.S. Parole Commission. Results reported by participants: The U.S. Parole Commission acquired the videoconferencing equipment during fiscal year 1999. The system was expected to be operational by January 2001. Goal: To permit the U.S. Parole Commission to identify and keep potentially violent offenders in prison on a systematic basis, while maintaining a rate of parole grants (35 percent at initial hearings) that avoids prison overcrowding. Status: Implemented. Date started: December 1997.
Participating agencies: D.C. Board of Parole, Connecticut Board of Pardons and Paroles, contract researcher, and U.S. Parole Commission. Results reported by participants: The new system has withstood court challenges, and executive hearing examiners have found a low rate of prisoners whose risk factors were not adequately accounted for by the guidelines. Although it is too early to draw conclusions, the U.S. Parole Commission expects the rate of violent recidivism by paroled offenders to decline significantly over the next decade as a result of this and other initiatives. Goal: To reduce police overtime costs and increase available “street time” for officers; reduce unnecessary burdens on other justice agencies, victims, and witnesses; and decrease disposition time of cases. Status: Corporation Counsel was conducting a pilot electronic papering program to eliminate or reduce the practice of having police officers appear in person before an Assistant Corporation Counsel in order for a case to be papered. Superior Court has worked with the Council for Court Excellence and the Justice Management Institute to develop a plan to improve case flow management in the Criminal Division of Superior Court. Date started: January 2000. Participating agencies: CJCC, D.C. Trial Lawyers Association, MPDC, Office of the Deputy Mayor for Public Safety and Justice, Corrections Trustee, Superior Court, Court Services, Defender Service, Pretrial Services, and USAO. Results reported by participants: Pilot was scheduled to commence on March 19, 2001. Goal: To examine the way Superior Court utilizes CJA funds in order to provide a more efficient and cost-effective use of resources while at the same time providing competent legal services to indigent defendants. Status: Ongoing. Date started: 1999. Participating agencies: Defender Service and Superior Court. Results reported by participants: Superior Court established an eligibility list of 250 CJA attorneys who may receive CJA appointments in cases brought by the United States, and a CJA eligibility list of 85 attorneys for cases brought by D.C. At the same time, more cases are being referred to the Defender Service in light of its increased funding and staffing levels. Defender Service received support for a fiscal year 2001 funding initiative to improve eligibility investigation determination process. This is likely to increase the dollar value of defendant contribution orders, as additional potential sources of income available to defendants may be revealed through the investigation. Goal: To ensure that the quality of legal representation received by indigent defendants is not compromised and assist Superior Court in resolving the recurring budgetary crisis with indigent defense services. Status: Ongoing. Date started: July 2000. Participating agencies: Defender Service, D.C. Court of Appeals, Superior Court, their governing body, the Joint Committee on Judicial Administration, Association of Criminal Defense Lawyers, and the Superior Court Trial Lawyers Association. Results reported by participants: While various aspects of the agency’s efforts on this initiative were still being developed, the agency developed and conducted a summer training series during which members of the CJA bar received training provided by Defender Service staff and guest lecturers. A variety of sessions were videotaped and the tapes were available for review. 
Goal: To alleviate the already strained financial resources of the Superior Court’s CJA budget, Defender Service has agreed to provide legal representation to nearly all parolees facing revocation before the U.S. Parole Commission. Status: Ongoing. Date started: August 2000. Participating agencies: Defender Service, Court Services, U.S. Parole Commission, BOP, and other agencies. Results reported by participants: Defender Service was in the process of completing a major research and education project for Defender Service attorneys, area law school clinical programs handling revocation matters, and the CJA bar. Defender Service was also compiling a comprehensive “how to” manual of relevant case law, regulations, and other materials. Goal: To identify and implement means to reduce drug use by people under criminal justice supervision by 10 percent annually over the next 4 years. Status: Court Services began their pilot program in June 2000, to track the rate of drug use, treatment placement, and rearrests among parolees living in three Patrol Service Areas. Pretrial Services launched their pilot program in September 2000, tracking persons arrested in the same Patrol Service Areas. Some data tracking ceased as of September 30, 2000, due to termination of CJCC’s contract funds. Date started: February 2000. Participating agencies: CJCC, Pretrial Services, Court Services, Superior Court, DOC, Department of Health, Corrections Trustee, MPDC, Defender Service, and USAO. Results reported by participants: Preliminary findings by Court Services and by Pretrial Services indicate that a significant proportion of both populations engaged in some illegal drug use. As part of the process, offenders and defendants were placed in sanctions programs, including drug testing and steps to assess and place them into an appropriate treatment modality. The project’s short time span prevented the report of any definitive data on the rate of rearrests among the targeted population. Goal: To significantly reduce drug trafficking and related violent crime on the borders of D.C., including the criminal misuse of firearms in targeted neighborhoods and sustain reductions for a minimum of 1 year. Status: Ongoing. Date started: June 1999. Participating agencies: MPDC, DEA, and Prince George’s County Police Department. Results reported by participants: Eighty-seven arrests; assets seized worth $148,000. Drugs seized: crack cocaine (448 grams); heroin (4 grams); marijuana (54,048 grams); LSD (1 gram); MDMA (10 grams); PCP (56 grams); and Khat (73,100 grams). Goal: To combat crime and other problems associated with nuisance properties through the use of criminal, forfeiture, civil, and administrative remedies, with the objective of eliminating these properties as locations for drug dealing, prostitution, trash, dumping, and other activities that degrade the quality of life in the surrounding neighborhoods. Status: Ongoing. Date started: October 1999. Participating agencies: D.C. Department of Public Works, D.C. Department of Consumer & Regulatory Affairs, D.C. Housing Authority, MPDC, Department of Housing and Urban Development (HUD), and USAO. Results reported by participants: According to DOJ, nuisance conditions had been abated at eight properties. At six locations, settlement agreements or consent decrees were in place to ensure compliance with applicable laws. 
Goal: To reduce the availability and abuse of illegal drugs in D.C., increase prosecutions of asset forfeiture and money laundering offenses, and increase prosecutions of interstate drug traffickers. Status: Ongoing. Date started: November 1999. Participating agencies: MPDC, ATF, DEA, FBI, USAO, and other federal law enforcement agencies. Results reported by participants: USAO says it has successfully initiated a number of ongoing investigations of narcotics suppliers. Goal: To identify the highest level of drug traffickers in D.C. and collect information regarding the drug trade in D.C. Within DEA’s Washington Division Office Strategic Intelligence Group, the Cash Flow Task Force uses financial leads and drug intelligence to identify new drug trafficking targets. Status: Ongoing. Date started: July 1998. Participating agencies: DEA, Internal Revenue Service (IRS), and USAO. Results reported by participants: The Strategic Intelligence Group has (1) collected intelligence information on drug trafficking activities in D.C.; (2) identified new and emerging drug trafficking targets; and (3) shared intelligence information, which has led to the initiation of investigations. Goal: To reduce measured drug use by people under criminal justice supervision by 10 percent each year for the next 5 years by drug testing and offering drug treatment to persons in need in the justice system. Status: Implementation of a pilot program has begun for selected populations for whom resources are currently available. Analysis was under way to determine the feasibility of expanding the program to additional populations with existing resources. Date started: October 1999. Participating agencies: CJCC, Superior Court, MPDC, Corrections Trustee, Office of the Mayor for Public Safety and D.C. Department of Health-Addiction Prevention and Recovery Administration, Court Services, Pretrial Services, and USAO. Results reported by participants: Too early to assess efficacy. Goal: To increase the ability of Pretrial Services and Court Services to institute drug testing as a release condition and supervision tool for individuals under criminal justice supervision. Status: Ongoing. Date started: Fall 1999. Participating agencies: Court Services and Pretrial Services. Results reported by participants: Between fiscal year 1999 and fiscal year 2000, the number of post-conviction offenders tested increased by 41 percent, and the number of drug tests per probationer increased from 4 to 6. During the same time period, the number of samples from pretrial defendants and juvenile respondents processed by the forensic toxicology drug-testing laboratory increased by approximately 4 percent. Goal: To provide drug treatment to pretrial defendants using sanctions for immediate corrective action for testing and treatment noncompliance to reduce substance abuse and resultant criminal behavior. Status: Ongoing. Date started: Fall 1999. Participating agencies: Superior Court, Court Services, and Pretrial Services. Results reported by participants: During fiscal year 1999, 1,106 pretrial defendants in sanction-based treatment programs submitted drug tests. That number increased to 1,394 defendants in fiscal year 2000. Goal: To harness the power of the judiciary to coerce addicted defendants to address their addiction by participating in drug treatment and provide a structured environment where the defendant is held accountable for his or her own actions and the consequences for violations are swift and certain. Status: Ongoing. 
Date started: Demonstration project commenced January 1994. Current SCDIP commenced February 1997. Participating agencies: PSA, Superior Court, USAO, and Defender Service. Results reported by participants: During fiscal year 2000, 117 of the 279 defendants placed in the drug court program successfully graduated. Goal: To investigate the feasibility of fingerprinting all arrestees for identification purposes and developing a criminal history repository. Status: Conducting analysis of fingerprinting and record-keeping processes for the D.C. criminal justice system. Date started: May 2000. Participating agencies: Superior Court, DOC, Corporation Counsel, Corrections Trustee, CJCC, Court Services, Defender Service, Pretrial Services, USAO, and Youth Services Administration. Results reported by participants: MPDC has begun fingerprinting certain Corporation Counsel charges. In addition, CJCC’s Technology Committee was working to develop a criminal history tracking system, including examining the implications of such a system for those arrested for misdemeanors prosecuted by OCC. Goal: To increase federal prosecution of firearm offenses, establish an ATF regional crime gun center, modify the defendant debriefing program, heighten enforcement of probation and parole conditions, and build a comprehensive media outreach and education program. Status: Ongoing. Date started: November 1999. Participating agencies: MPDC, ATF, USAO, and other law enforcement agencies. Results reported by participants: From January to March 2000, USAO filed approximately 50 cases in U.S. District Court involving firearms charges. Approximately 105 defendants charged with weapons and/or drug offenses were debriefed pursuant to the Operation Ceasefire project. USAO said that over 10 defendants had signed the new cooperation agreement and were actively assisting law enforcement. Based on firearms trafficking intelligence gathered by ATF and MPDC, USAO initiated a number of ongoing investigations of firearms traffickers for prosecution. Goal: To provide a state-of-the-art forensic laboratory for D.C. Status: Ongoing. Date started: June 1998. Participating agencies: MPDC, Office of the Chief Medical Examiner, National Institute of Justice (NIJ), and USAO. Results reported by participants: The project is in the cost-assessment phase. Goal: To provide honest-broker baseline data about the current forensic capabilities and capacities in D.C. Status: The report has been completed. Date started: Spring 1999. Participating agencies: NIJ, Office of Law Enforcement Standards/NIST, OPD, and USAO. Results reported by participants: The report found numerous deficiencies and made both short- and long-term recommendations for addressing these deficiencies. There are no current plans to evaluate the recommendations in the report.
Goals: The initiative includes the following: develop strategies and procedures to identify, investigate, and prosecute the most significant violent gangs in D.C.; increase federal law enforcement participation in joint investigations to identify, target, and prosecute violent gangs in D.C.; coordinate intra-office information sharing and investigations in order to identify and target violent gangs in D.C.; coordinate and share information among law enforcement agencies (MPDC, FBI, DEA, and ATF) in order to identify and target violent gangs for investigation and prosecution; develop and implement an intelligence database; develop and implement geographical data concerning criminal activity in D.C.; develop and implement increased access to data regarding subjects and targets of criminal investigations; increase and improve access to court computers; increase and improve USAO access to law enforcement intelligence databases; and coordinate joint intelligence gathering efforts among local and federal law enforcement agencies in D.C., Maryland, and Virginia. Status: The Gang Prosecution and Intelligence Section is fully staffed except for two paralegal specialist positions. Ten AUSAs had been assigned to the section. Date started: Spring 1999. Participating agencies: USAO, MPDC, ATF, FBI, DEA, and High Intensity Drug Trafficking Area (HIDTA) staff. Results reported by participants: Working with the FBI Safe Streets Task Force, USAO prosecuted significant gang cases. USAO’s intelligence operations include (1) establishing an ongoing relationship with HIDTA and arranging for the installation of computers and software to organize intelligence information; (2) organizing information within USAO relating to Operation Ceasefire debriefings, ongoing investigations, and cooperating witnesses; (3) establishing on-line computer access to various databases; and (4) installing software to map criminal activity in D.C. Goal: To provide DOC staff members and the residential communities surrounding the D.C. Jail and the D.C. Correctional Complex at Lorton, VA, with round-the-clock notification of emergency situations at the prison. The CAN computerized network telephones residents living near Lorton to inform them of the emergency and the reason for the sounding of the siren. The system will make three attempts to contact any busy or unanswered phone number. Status: Fully operational. Date started: 1989. Participating agencies: DOC. Results reported by participants: No results provided. Goal: To reduce overtime costs incurred by MPDC officers in connection with court and related appearances. The ACANS system would provide USAO with access to MPDC officers’ work and leave schedules, thus allowing them to coordinate officers’ appearances for witness conferences, grand jury, and court during regular duty hours, thereby reducing the need for overtime compensation. Officers would electronically receive notification of court appearances and cancellations, reducing the amount of time and resources currently needed to accomplish the necessary notifications. Status: Pending implementation of adequate automation by MPDC. Date started: December 1996. Participating agencies: MPDC and USAO. Results reported by participants: No information provided.
Goal: To create a comprehensive automation system to simplify agency operations and functions, including: developing an automated case management and tracking system; developing software to assist the Court with CJA eligibility and attorney appointments; developing a Web site to better communicate with the Court, the CJA bar, and other criminal justice agencies in D.C. and nationally; and creating data links to obtain and share information from other criminal justice agencies. Status: This case management system is 90 percent completed. The initial phase was implemented in the summer of 2000. Defender Service expects to fully implement the system by the end of 2001. Date started: February 2000. Participating agencies: Defender Service. Results reported by participants: The business process portion of this effort has enabled Defender Service to identify data collection and reporting areas that will be strengthened through full implementation of the new system. Goal: To automate the Office of the Chief Medical Examiner by purchasing and customizing a proprietary software system designed specifically for forensic applications. This will affect the entire death investigation process, including reporting, investigation (death scene and other), documentation of evidence and examination (autopsy), and creation of appropriate reports; it will also create an electronic record-keeping system. Other features to be developed will include workflow automation, to allow for tracking of information, documents, and correspondence (and statistical analysis of work completion and efficiency), and digital imaging with archiving. The results of this project will include greatly increased efficiency in the business processes, monitoring and tracking of data and correspondence, better statistical analysis and improved reporting, images that can be used and transmitted more easily, and enhanced quality of autopsy reports. Status: Ongoing. Product and vendor evaluation/selection, contracting, and procurement have been completed. Initial meetings between the vendor and D.C. had been held, and a schedule of milestones and completion dates was expected. Date started: July 2000. Participating agencies: Office of the Chief Medical Examiner. Results reported by participants: No information provided. Goal: To develop a legislative foundation that provides long-term support for the establishment and operation of a D.C. Criminal Justice Information System Central Repository. Status: The initiative is under way. Date started: May 2000. Participating agencies: CJCC, Court Services, MPDC, Corporation Counsel, Corrections Trustee, Superior Court, Defender Service, Pretrial Services, and USAO. Results reported by participants: The Information Technology Advisory Committee established a Criminal Justice Information System (CJIS) Legislation Working Group (CLWG) in May 2000. The CLWG included representatives of all major D.C. criminal justice agencies and a representative from the City Council. The CLWG drafted a CJIS law for consideration of the Advisory Committee and included it in a final report prepared in September 2000. Goal: To develop an information system that would allow Court Services, MPDC, Superior Court, DOC, and USAO to view text on individual paroling decisions. Status: Ongoing. Date started: January 1999. Participating agencies: U.S. Parole Commission. Results reported by participants: The system has been implemented and can be adopted by D.C. justice agencies at no charge.
Goal: To develop a multiagency foundation for privacy and security practices to support day-to-day activities of justice agencies, enhance long- term plans, and meet D.C. and federal laws and regulations. Status: Ongoing. Date started: September 1999. Participating agencies: CJCC, Corporation Counsel, Office of the Chief Technology Officer, Corrections Trustee, MPDC, BOP, Defender Service, Pretrial Services, USAO, and Youth Services Administration. Results reported by participants: In May 2000, participants completed a comprehensive report detailing security policy considerations for justice agency executives in D.C. The subjects outlined in this report will be considered during the development of a District of Columbia Justice Information System (JUSTIS) security plan. Goal: To improve the completeness and accuracy of justice information by implementing a methodology to link all data representing each individual justice processing cycle of an offender. Status: The analysis process has just been initiated. Date started: May 2000. Participating agencies: DOC, Superior Court, MPDC, Corporation Counsel, Court Services, Defender Service, Pretrial Services, and USAO. Results reported by participants: Not yet implemented. Goal: To develop and implement JUSTIS to support interagency data access, data sharing, and automated notification without disrupting the existing systems of the individual federal and local criminal and juvenile justice agencies. Status: KPMG delivered a draft blueprint for JUSTIS on August 31, 2000. Plans are to pilot the program in several agencies, evaluate the results, and make any needed changes prior to systemwide implementation. Date started: April 1999. Participating agencies: CJCC, Youth Services Administration, MPDC, Corrections Trustee, Office of the Chief Technology Officer, Superior Court, BOP, Court Services, Pretrial Services, U.S. Parole Commission, Defender Service, and USAO. Results reported by participants: JUSTIS remains on schedule and within budget. The statement of work for JUSTIS Phase 2 has been completed and reviewed. Goal: To provide the Central Detention Facility (D.C. Jail) with an on-line, fully automated jail information management system that will substantially increase efficiency and reduce the potential for errors in jail operations, such as during inmate release processing. The system will automate virtually every aspect of an inmate’s stay at the jail. It will also have the capability to interface with other law enforcement networks to share information, pinpoint the locations of offenders, and possibly ensure against erroneous releases. Status: Ongoing. DOC began implementing the system in October 2000. DOC user groups are meeting weekly to identify and resolve issues. Date started: May 8, 2000. Participating agencies: DOC, Corrections Trustee, and MPDC. Results reported by participants: In October 2000, DOC successfully introduced the Jail and Community Corrections System (JACCS). Goal: To create an integrated database to replace the current 18 separate legacy systems now operating in Superior Court. The integrated database should eliminate redundant data collection practices, improve identification of related cases across branch and divisional lines, enhance case-flow management and public service, provide more efficient case processing, and enhance data sharing and coordination with other justice system partners. Status: Ongoing. An analysis has been completed with technical assistance by a National Courts advisory group. 
A request for proposal (RFP) was being prepared, and the organizational framework for an integration project, including an advisory committee, a number of working groups, and a dedicated project manager, was under way. Date started: 1998. Participating agencies: Superior Court. Results reported by participants: In September 2000, the National Center for State Courts (NCSC) completed a requirement analysis for the IJIS project. The court was striving to implement NCSC’s recommendations and was working with NCSC to complete an RFP. After obtaining Congressional approval of the design plan, the court will publish the RFP. The target date for vendor selection is June 2001. Goal: To develop and implement an integrated criminal justice information system that meets the goals and objectives of the Brady Bill, the National Child Protection Act, and other associated and related requirements. Status: Mitretek Corporation has been awarded a contract to develop the system. Date started: January 1998. Participating agencies: D.C. Courts, Court Services, MPDC, Pretrial Services, and USAO. Results reported by participants: Initial design of records management system is finished. Goal: To develop and implement a sex offender registry that meets the needs of law enforcement and the public, particularly victims of sexual assault and other sexually deviant crimes. Status: Ongoing. In July 2000, D.C. enacted the Sex Offender Registration Act of 1999, which placed the sex offender registration function with Court Services and the community notification function with MPDC. Date started: July 1998. Participating agencies: D.C. Courts, MPDC, Corrections Trustee, Court Services, Pretrial Services, and USAO. Results reported by participants: No information provided. Goal: To develop computer systems, such as the Community Action Proactive Prosecution System (CAPPS), to enhance coordination and collaboration within the D.C. criminal justice arena. To develop and deploy information systems to improve management of citizen contacts, including nuisance property, other complaints, requests for information, and community meetings. Status: All projects are ongoing. Date started: January 2000. Participating agencies: Executive Office for USAO, Office of Chief Technology Officer, and USAO. Results reported by participants: CAPPS was implemented in August 2000. Goal: To convert Superior Court’s existing youth court from an isolated demonstration program to the cornerstone of a community system. This will be achieved by increasing the size of the jury pool and the number of hearings, and by expanding the number and range of community placements. In addition, Superior Court will transfer current oversight of the youth court from the Time Dollar Institute to a community structure, and develop a governance system with shared ownership and responsibility for implementation of the youth court. Status: The grantee is working to complete the delivery of services to clients recently placed into the Youth Court Program. Date started: January 1999. Participating agencies: Superior Court. Results reported by participants: Not yet implemented. Goal: To serve 300 to 500 youth, develop a drug testing collection site, and establish a family services component and a management information system. Status: The juvenile drug court is fully operational. As of the time of our review, 46 participants had received medical evaluation.
Forty-four were enrolled in the Alliance for Concerned Men’s mentor program, and youth were involved in cultural enrichment and therapeutic recreational activities. The Management Information System was in the testing phase. Date started: September 1998. Participating agencies: D.C. Public Schools’ Comprehensive School Health Program, Superior Court, and Pretrial Services. Results reported by participants: On February 4, 2000, the first graduation was held for 27 youths. Goal: To more fully address the needs of children and get them out of the delinquency system before they follow the all-too-common path into the adult system. Status: When the 1999 Omnibus Appropriations Act limited the amount of fees that could be collected by private attorneys representing children with special education needs under the Individuals With Disabilities Education Act (20 U.S.C. §1400 et seq.), many children were left without representation. With the additional fiscal year 2000 funding, the Defender Service now represents nearly 200 juveniles in special education matters. Date started: 1999. Participating agencies: Defender Service. Results reported by participants: Defender Service reported an increase in delivery of services provided to children with special education needs. Goal: To achieve the following by June 22, 2001: Reduce the number of youth-on-youth homicides by 10 percent. Reduce the number of shootings in which the victim(s) are members of the population targeted by this effort in Wards 7 and 8 by 10 percent. Reduce the number of youth within the targeted population who recidivate for serious and violent crime while under Court Services, Youth Services Administration or Court Social Services, Juvenile Probation custody in Wards 7 and 8 by 10 percent. Reduce the number of open, unsolved violent crimes in Wards 7 and 8 by 10 percent. Status: Ongoing. Date started: June 2000. Participating agencies: Court Social Services, Superior Court, Juvenile Probation, MPDC, Youth Services Administration, Juvenile Parole Unit, Department of Parks and Recreation Roving Leaders/Urban Rangers, Corporation Counsel, USAO, D.C. Public Schools Security Unit, Faith/Community-based entities, Office of the Deputy Mayor for Public Safety and Justice, and Court Services. Results reported by participants: Too early to evaluate. Goal: To prevent the small percentage of youth who commit serious and violent crime from victimizing the larger segment of youth in D.C., reduce the number of juvenile homicide victims, and provide effective intervention services to youth who have traditionally not responded well to such efforts. Status: Implemented. Date started: June 2000. Participating agencies: MPDC; Court Social Services, Juvenile Probation; Youth Services Administration’s Juvenile Parole Unit; Corporation Counsel; D.C. Parks and Recreation’s Roving Leaders/Urban Rangers program; D.C. Public Schools Security Unit; D.C. Boys and Girls Clubs; Court Services; USAO; other government, religious, grassroots, and nonprofit organizations. Results reported by participants: Too early to evaluate. Goal: To review and investigate all HUD programs, with a view towards ensuring that they are being operated efficiently and free from fraud, waste, and abuse. The initiative will ensure that the government is getting its money’s worth from the HUD programs; propose administrative changes in programs where weaknesses are identified; and file civil and criminal cases where appropriate. Status: Ongoing. Date started: Fall 1999.
Participating agencies: HUD, USAO, FBI, U.S. Postal Inspectors, D.C. Inspector General’s Office, and Corporation Counsel. Results reported by participants: No information provided. Goal: To propose—and work with the Council to enact—legislation that will improve the criminal justice system and thereby better serve the residents and citizens of D.C. Also respond to requests from the Council to comment on pending legislation. Status: Ongoing. Date started: March 1998. Participating agencies: Council of the District of Columbia and USAO. Results reported by participants: No specific examples provided. Goal: To increase community awareness of the Bias-Related Crimes Act of 1989 and the protection it provides for victims of hate crimes, available victim resources, the need to report hate crimes, and the procedures for making such reports; deter perpetrators of hate crimes; and strengthen the partnership between police, prosecutors, and community organizations in responding to hate crimes. Status: Ongoing. The Task Force meets monthly. Date started: February 1996. Participating agencies: Corporation Counsel, Anti-Defamation League, the Asian Pacific American Bar Association, Gay Men and Lesbians Opposing Violence, the Greater Washington Urban League, Ayuda (a Washington Latino organization), MPDC, FBI, USAO, U.S. Park Police, U.S. Capitol Police, U.S. Secret Service, INS, and D.C. Public Schools. Results reported by participants: At the request of members of the D.C. Bias Crimes Task Force, a D.C. public relations firm created a hate crime public information campaign, which included public service announcements, a hate crime brochure, and campaign posters for placement on billboards, buses, subway stations, etc. The Task Force has also sponsored several conferences to educate law enforcement about ways to combat hate crimes. Goal: To do the following: jointly pursue a comprehensive, vigorous, and proactive approach to investigating and prosecuting allegations of police criminal misconduct; review the status of pending investigations of police misconduct and identify means to ensure that they are handled expeditiously and with highest priority; create a database to track ongoing investigations and allegations of police misconduct; develop investigative plans and approaches to attack criminal misconduct; work with MPDC to develop an ongoing system of integrity checks for MPDC officers; and identify a regular forum for meeting and sharing intelligence information about police criminal misconduct cases and investigations among the principal law enforcement offices engaged in investigating such cases. Status: Ongoing. Date started: March 1998. Participating agencies: D.C. Office of Inspector General, FBI, MPDC’s Office of Internal Affairs, and USAO. Results reported by participants: The number of police officer convictions for criminal misconduct increased from 5 in 1997 to 9 in 1998 and 13 in 1999. There is now systematic tracking and reporting of all police misconduct cases with USAO. Interaction and intelligence sharing among the various law enforcement agencies have greatly improved through regular meetings and discussions, and joint investigations by participating agencies have been carried out or are ongoing. Goals: (1) To develop curricula and materials to train police officers, community volunteers, and agency representatives throughout D.C.
in the methods and tools for neighborhood problem solving; and (2) establish at least one active problem-solving group in each Police Service Area that includes community volunteers, police officers, and agency representatives. Status: PPS is an integral part of the Policing for Prevention (PFP). It provides structure for partnership among city agency, community, police, and other community stakeholders. It introduces people to the five-step problem-solving process: (1) target a problem, (2) understand the problem, (3) create a plan, (4) take action and review progress, and (5) celebrate and create a lasting community presence. MPDC continues to provide training opportunities to its members and the communities it serves. Date started: A pilot PPS training program that focused on the finalized police-community problem-solving model was instituted in July 1999. Participating agencies: MPDC, D.C. community and agency representatives, and the Mid-Atlantic Police Training Institute. Results reported by participants: MPDC’s PPS training program has reached a total of 34 Police Service Areas, 700 police officers, and 300 community volunteers since its inception. In addition, 1,900 police officers have been introduced to the problem-solving model as part of their in- service training at MPDC’s Institute of Police Science, and 36 individuals have been provided with Training of Trainers in 25 Police Service Areas. Goal: To enhance crime prevention and law enforcement activity by allowing for cooperative agreements between federal agencies and MPDC. These agreements may span all areas of law enforcement, from sharing of equipment and radio frequencies to sending federal personnel to patrol in areas immediately surrounding their agencies’ jurisdictions. Status: In October 2000, a cooperation agreement between MPDC and Amtrak was signed. Discussions with other agencies about agreements are ongoing. Date started: May 2000. Participating agencies: USAO. Results reported by participants: The federal agencies and MPDC are satisfied with the level of their cooperation regarding sharing of equipment, joint radio frequencies, and prisoner processing. Goal: To (1) focus law enforcement to disrupt or terminate chronic crime, reduce fear, and build community confidence in MPDC; (2) establish neighborhood problem solving to facilitate the active involvement of the community and government services to stabilize the neighborhoods; and (3) provide long-term intervention by focusing on underlying social and economic conditions that generate crime and disorder. Status: PFP has required an extensive philosophical and training commitment by MPDC members during 2000. In doing so, MPDC has implemented what will be an ongoing initiative in all districts. Date started: Spring 2000. Participating agencies: MPDC; D.C. agencies responsible for social, health, education and economics; and private nonprofit groups. Results reported by participants: In-service training and PFP implementation in the districts have been completed. Goal: To investigate allegations of excessive use of force by the police and prosecute such matters when appropriate. Also, to educate the public regarding hate crime to encourage reporting of such crimes and to effectively investigate and prosecute those crimes. Status: After about 1 year in operation, USAO’s Civil Rights Unit is becoming an efficiently functioning unit. Date started: USAO’s Civil Rights Unit was formed in March 1999 and became staffed with two AUSAs in June 1999. 
In August 1999, a third AUSA was added to the staff. Participating agencies: FBI Civil Rights Enforcement Squad; MPDC Office of Professional Responsibility’s Force Investigation Team; DOJ’s Civil Rights Division, Criminal Section; and USAO. Results reported by participants: Enhanced communication and improved working relationship with DOJ’s Civil Rights Division, Criminal Section, and law enforcement agencies responsible for investigating police excessive force matters; decrease in the amount of time required to resolve police excessive force matters referred for investigation; expanded scope of matters subject to review and investigation by USAO. Goal: To identify the full range of issues and problems with the operation and management of the current pretrial system, particularly with the use of halfway houses for defendants; to develop accurate, detailed information on how the D.C. pretrial system operates; and to develop consensus among subcommittee members regarding specific, desired policy changes that will address their key concerns with the pretrial system. Status: Ongoing; current resources will support development efforts through November 2000. Date started: May 1999. Participating agencies: Superior Court, USAO, Corporation Counsel, Defender Services, CJCC, Pretrial Services, Corrections Trustee, and BOP’s National Institute of Corrections. Results reported by participants: Still in development. Goal: To create a series of diversion programs in D.C. to divert minor, “quality of life” cases from the system in a way that does not result in an excessive accumulation of legal fees and also addresses the minor misdeeds of the defendant and helps improve the community. Unlike many other major metropolitan areas around the country, D.C. has very few diversion programs for people charged with petty, “quality of life” offenses (e.g., loitering, public drinking, and unlawful entry). Status: Defender Service has collected two sets of national standards on diversion programs, and prepared a comprehensive chart of current diversion programs in D.C. Defender Service is in the process of collecting materials from around the country on successful diversion programs. The specific types of diversion programs under consideration include mental health diversion, welfare-to-work diversion, and community service diversion. Corporation Counsel has created a diversion program for defendants who commit “quality of life” offenses. Date started: Not provided; the Corporation Counsel diversion program began in January 2001. Participating agencies: Defender Service, CJCC, and Council for Court Excellence. The planning committee will consist of representatives from USAO, Superior Court, Corporation Counsel, Pretrial Services, the D.C. Commission on Mental Health Services, the D.C. Department of Employment Services, the D.C. Department of Public Works, Court Services, Corrections Trustee, and other local agencies and organizations. Results reported by participants: Still in early stages of development. Too early to determine the effect of the Corporation Counsel diversion program. Goal: To monitor persons who are subject to conditions of release—such as stay-away and curfew orders—and prosecute those who violate those conditions on charges of criminal contempt. Status: Ongoing. Date started: 1998. Participating agencies: MPDC, Superior Court, USAO, U.S. Park Police, U.S. Secret Service, and U.S. Capitol Police.
Results: Since the inception of the CORE Program, USAO has prosecuted approximately 256 contempt cases for violations of pretrial release conditions, of which 95 thus far have resulted in conviction by trial or guilty plea. Goal: Develop and implement a revised risk/needs instrument to facilitate pretrial decisionmaking in 100 percent of pretrial cases. Status: Development of revised instrument by Pretrial Services Risk Assessment Committee in progress. Date started: February 2000. Participating agencies: DOC, Corporation Counsel, USAO, Corrections Trustee, MPDC, Superior Court, Court Services, Defender Service, Pretrial Services, and U.S. Marshals Service. Results: Still in development. Goal: To create a range of graduated/administrative sanctions that are responsive to the risk of the pretrial defendant and ensure the public’s safety. Status: Committee discussions in progress. Date started: June 2000. Participating agencies: Superior Court, DOC, Corrections Trustee, MPDC, Corporation Counsel, Court Services, Defender Service, Pretrial Services, USAO, and U.S. Marshals Service. Results reported by participants: Still in development. Goal: To provide improved services to victims through education, compensation, and notification. Identified detainees participating in the program shall be made responsible for the obligation to their victim(s). The initiative will review and ensure full implementation of the Mayor’s Order 82-155, dated August 26, 1982, and D.C. Law 4-100 regarding the D.C. Victims of Violent Crime Compensation Act of 1981; establish a victimization office and hotline; hire a victimization counselor; develop victim notification categories; and develop a process for notifying victim(s) or victim’s family members. Status: Ongoing. Date started: October 2000. Participating agencies: DOC, DOJ, Corporation Counsel, D.C. Office of Personnel, District Services Administration, Office of the Deputy Mayor for Public Safety and Justice, D.C. Superior Court, and U.S. District Court. Results reported by participants: DOC, in consultation with Corporation Counsel, is reviewing the current Victims Laws and the Mayor’s order to ascertain how quickly they can be fully implemented in DOC. A request for hotline telephone service is to be forwarded to the General Services Administration. Initial contact was made with DOC’s Technology Center. Goal: Short-term goals: ensure that MPDC is meeting its obligation to the Crime Victim Compensation Program (CVCP) by developing a process that supports a renewed effort to inform crime victims of the CVCP; in collaboration with the National Center for Victims of Crime (NCVC), initiate a pilot effort to provide practical emergency assistance to victims of burglary; improve MPDC’s services to victims of crime by developing a comprehensive strategy/program that is aligned with the assumptions and approaches of PFP; ensure that all police officers are trained in basic crisis intervention techniques; and develop a comprehensive training strategy to ensure consistency throughout recruit, in-service, and roll call training venues. Status: Ongoing. Date started: January 2000. Participating agencies: MPDC, National Organization for Victim Assistance (NOVA), NCVC, International Association of Chiefs of Police, USAO, and Superior Court; the effort is funded by a grant from the Office for Victims of Crime through the D.C. Mayor’s Office. Results reported by participants: The CVCP informational effort is ongoing.
A grant proposal has been submitted to establish a dedicated Victim Assistance Unit in MPDC. MPDC has developed training in collaboration with NOVA that focuses on first officer response to victims of crime and includes issues related to the trauma of victimization. Goal: To develop an infrastructure of services to meet the needs of crime victims. The program will support a Victims Services Center that will coordinate and serve as a central referral system for victims of crime in D.C. Status: Plan was completed and submitted to Congress. Date started: May 2000. Participating agencies: CJCC; Corporation Counsel; Office of the Deputy Mayor for Children, Youth, and Families; Office of the Deputy Mayor for Public Safety and Justice; and USAO. Results reported by participants: None reported. Goal: To coordinate the delivery of pretrial mental examination services to defendants facing criminal charges at Superior Court. Monitor bed space at the John Howard Pavilion and D.C. Jail mental health wards to ensure that bed space is used in the most efficient manner. Ensure coordination of all involved agencies to facilitate Superior Court receiving necessary services to prevent trial delays. Status: Committee meets quarterly, with special meetings as required. Date started: 1970s. Participating agencies: Corporation Counsel, DOC, Superior Court, Defender Services, USAO, U.S. Marshals Service, and Commission on Mental Health Services. Results reported by participants: The Committee has been able to eradicate and control prior serious waiting lists for bed space, which resulted in lengthy trial delays. The Committee has formulated new court procedures for pretrial criminal mental evaluations that will help ensure the timely processing of cases and efficient coordination of services by the multiple agencies involved. Goal: To coordinate the civil and criminal justice system for domestic violence victims; create a dedicated violence unit within USAO focused on the aggressive prosecution of all intrafamily offenses; and increase the success of domestic violence prosecutions. Status: USAO’s domestic violence unit now consists of 10 misdemeanor- level AUSAs, each of whom handles exclusively domestic violence criminal cases. The Domestic Violence Court system handles all misdemeanor (and related civil) domestic violence cases. There is a plan to add felony judges to the Court. Date started: April 1996. Participating agencies: Corporation Counsel, Superior Court, MPDC, Court Services, USAO, Emergency Domestic Relations Project, and the D.C. Coalition Against Domestic Violence. Results reported by participants: The preliminary results of the State Justice Institute evaluation indicate a high level of victim satisfaction with the domestic violence intake process. The prosecution rate of domestic violence criminal cases has risen from about 20 percent to 70 percent of all cases presented. Cases are regularly tried without the victim being called as a witness. The overall conviction rate of cases proceeding to trial is approximately 65 percent. Goal: To initiate a referral system for cases of financial exploitation and use of the federal criminal laws creatively to charge those responsible and to recapture as much of the victim’s property as possible. Status: Ongoing. Date started: 1996. Participating agencies: U.S. Secret Service, FBI, MPDC, USAO, D.C. Office on Aging, Legal Counsel for the Elderly, D.C. Adult Protective Services. Other agencies are called upon as needed: IRS, U.S. 
Postal Inspection Service, Corporation Counsel, and Office of D.C. Child and Family Services. Results reported by participants: Enhanced interagency cooperation has resulted in several pending investigations, one of which, for example, involves millions of dollars and multiple victims. Bankers are now also evaluating and sending cases to appropriate agencies. Goal: The Executive Task Force, chaired by USAO and comprised of the heads of all federal and local law enforcement entities located in D.C., is designed to enhance communication between federal agencies and MPDC. Status: No funding had been provided. Date started: Pre-1997. Participating agencies: USAO, Corporation Counsel, MPDC, ATF, DEA, FBI, IRS, Metro Transit Police, Naval Criminal Investigation Service, U.S. Capitol Police, U.S. Customs Service, Food and Drug Administration, DOJ, INS, U.S. Marshals Service, U.S. Park Police, U.S. Postal Inspection Service, and U.S. Secret Service. Results reported by participants: The Executive Task Force served as a forum for discussing issues relevant to interagency law enforcement efforts. Among other matters discussed during 2000 was planning for demonstrations against the meeting of the International Monetary Fund. Goal: To develop a community-based, multi-agency approach to combating violent crime, substance abuse, and gang-related activity in high-crime neighborhoods. On the “Weed” side, local and federal law enforcement work together with designated Weed and Seed prosecutors by engaging in a coordinated strategy aimed at ridding targeted neighborhoods of violent crime, gang activity, drug use, and drug trafficking. The “Seed” strategy focuses on revitalization, which includes prevention, intervention, and treatment services, followed by neighborhood restoration. Status: Ongoing. In fiscal year 2001, the Executive Office for Weed and Seed initiated efforts to improve the law enforcement and community revitalization elements of D.C. Weed and Seed such as working with MPDC to provide training to help improve homicide clearance rates. Date started: The District’s Weed and Seed was established in 1992. There are now over 200 sites nationwide. Participating agencies: Department of Consumer and Regulatory Affairs, Department of Employment Services, Department of Housing and Community Development, D.C. Housing Authority, Department of Public Works, Department of Recreation and Parks, Corporation Counsel, MPDC, Court Services, Health and Human Services, the FBI, Office of Justice Grants Administration, and USAO. Results reported by participants: Safe Summer 1996-2000. Since 1996, over $383,287 has been raised and awarded to over 200 youth serving organizations throughout the city. Value-Based Violence Prevention Initiative (VBVPI). Grantees used funds to provide stipends for youth workers, hire instructors for parenting skills and G.E.D. courses, and provide meals to program participants. Community Prosecution. As a result of the Fifth District Community Prosecution Pilot Project, reported crime dropped from 10,036 in 1994 to 6,535 in 1998. The Fifth District also fell from having the second highest number of reported crimes in D.C. in 1996 to having the fifth highest in 1999. USAO aims to repeat this progress in the other six police districts. Drug Education For Youth (DEFY). Over 90 youth have successfully graduated from the DEFY program since its inception in 1997. 
Goals: Bring together community policing and community prosecution through the creation of a HUD toolbox, which will enable police officers to access HUD information and resources to solve problems in public and assisted housing. Create a strong partnership among the HUD/DOJ Public Housing Task Force, the D.C. Housing Authority Receiver, and the tenants themselves. Coordinate law enforcement intervention with facility upgrades and management improvement so that residents associate law enforcement officers with both crime reduction and enhanced living conditions. Develop management and resident initiatives through a Family Investment Center operated by the housing authority. Establish increased police visibility and presence, including a full-time community police officer. Bring civil actions against property owners and managers for allowing conditions that threaten residents’ health and safety, as well as for abuses such as equity-skimming and false certifications of compliance with housing quality standards. Use HUD and HIDTA resources to geo-code electronically all HUD-assisted properties in D.C., thus making it possible to overlay housing sites with crime patterns from police department databases, and to build a strategic database of information in support of community policing and community prosecution. Status: Ongoing. Date started: 1996. Participating agencies: USAO, HUD and the HUD Inspector General, MPDC, D.C. Housing Authority Police, FBI, DEA, and ATF. Results reported by participants: A significant reduction in drug and violent crime in the first two public housing areas targeted. Goal: Prosecute health care fraud by providers, recipients, or outsiders; coordinate investigations and investigative resources; facilitate information sharing on health care fraud trends, data mining techniques, and priorities; and educate task force members on topics relevant to investigating health care fraud. Status: Ongoing. Date started: 1993. Participating agencies: D.C. Medicaid Fraud Control Unit; Corporation Counsel; FBI; Department of Health and Human Services’ Office of Inspector General (OIG); Office of Personnel Management’s OIG; USAO; U.S. Postal Service OIG; U.S. Postal Inspection Service; Blue Cross/Blue Shield; GEICO; other private insurance companies and OIG offices; and AARP. Results reported by participants: Numerous health care fraud prosecutions, both criminal and civil, have resulted in dozens of convictions and millions of dollars in restitution and civil recoveries. The reaccreditation of the D.C. Medicaid Fraud Control Unit in 2000 and its partnership with the D.C. OIG should significantly increase the number of criminal and civil referrals in coming years.
Appendix IX: Comments From the Executive Office of the Mayor for D.C.
Appendix XII: Comments From the U.S. Attorney for D.C.
Appendix XV: Comments From the Public Defender Service for D.C.

Effective coordination of the many agencies that participate in a criminal justice system is key to overall success. Although any criminal justice system faces coordination challenges, the unique structure and funding of the District of Columbia (D.C.) criminal justice system, in which federal and D.C.
jurisdictional boundaries and dollars are blended, creates additional challenges. Almost every stage of D.C.'s criminal justice process presents such challenges, and participating agencies are sometimes reluctant to coordinate because the costs to implement needed changes may fall on one or more federally funded agencies, while any savings accrue to one or more D.C.-funded agencies, or vice versa. The Criminal Justice Coordinating Council (CJCC) was established and staffed as an independent entity to improve systemwide coordination and cooperation. During its two-and-a-half-year existence, CJCC has served, at a modest cost, as a useful, independent forum for discussing issues that affect multiple agencies. It has had notable success in several areas in which agencies perceived a common interest, such as developing technology that permits greater information sharing. It has been less successful in other areas, such as papering, in which forging consensus on the need for and the parameters of change has been difficult. CJCC's future is uncertain, however, because its funding source, the D.C. Control Board, is scheduled to disband and key CJCC officials have left.
Facial recognition technology is one of several biometric technologies, which identify individuals by measuring and analyzing their physiological or behavioral characteristics. Biometric technologies have been developed to identify people using their faces, fingerprints, hands, eye retinas and irises, voice, and gait, among other things. Unlike conventional identification methods, such as a card to gain building access or a password to log on to a computer system, biometric technologies measure characteristics that are generally distinct to each person and cannot easily be changed. There are generally four basic components to a facial recognition technology system: a camera to capture an image, an algorithm to create a faceprint (sometimes called a facial template), a database of stored images, and an algorithm to compare the captured image to the database of images or a single image in the database. The quality of these components determines the effectiveness of the system. In addition, the more similar the conditions under which the compared images are captured—such as the background, lighting conditions, camera distance, and size and orientation of the head—the better a facial recognition technology system will perform. Facial recognition technologies can perform a number of functions, including (1) detecting a face in an image; (2) estimating personal characteristics, such as an individual’s age, race, or gender; (3) verifying identity by accepting or denying the identity claimed by a person; and (4) identifying an individual by matching an image of an unknown person to a gallery of known people. According to FTC staff, academics, and industry experts, most modern facial recognition systems generally follow the steps shown in figure 1. Facial recognition systems can generate two types of errors—false positives (generating an incorrect match) and false negatives (not generating a match when one exists); a simplified, illustrative sketch of the matching and thresholding step that produces these outcomes appears below. NIST has measured the performance of companies’ facial recognition algorithms since 1993 and has found that the technology has improved over time. Most recently, NIST’s Face Recognition Vendor Test in 2014 found that the error rates continued to decline and algorithms had improved at identifying individuals from images of poor quality or captured under low light. In addition, research supported by the Technical Support Working Group, a federal interagency group, showed that in certain controlled tests, facial recognition algorithms surpassed human accuracy in determining whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. NTIA is the agency principally responsible for advising the President on telecommunications and information policy issues, including those related to privacy. In February 2014, NTIA began a “multistakeholder process” that has convened various stakeholders to discuss protection of consumer privacy in current and emerging commercial uses of facial recognition technology. This process, which is ongoing, was outlined in a 2012 White House privacy framework, which directed NTIA to convene multistakeholder processes that consist of open, transparent forums in which stakeholders work toward consensus on voluntary, legally enforceable codes of conduct for specific markets or business contexts.
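To make the matching step and the error tradeoff described above more concrete, the sketch below shows, in simplified form, how a verification (one-to-one) decision or an identification (one-to-many) decision can be reduced to comparing faceprints and applying a threshold. It is purely illustrative and is not drawn from any system discussed in this report: the function names, the 128-element vectors standing in for faceprints, and the 0.8 threshold are hypothetical choices, and a real system would generate faceprints with a trained face-embedding algorithm rather than the random vectors used here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Score how alike two faceprints are (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, claimed_template, threshold=0.8):
    """1:1 verification: accept or reject the identity a person claims."""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N identification: return the best-matching enrolled identity, or None."""
    scores = {name: cosine_similarity(probe, template) for name, template in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy faceprints standing in for the output of a real face-embedding model.
rng = np.random.default_rng(seed=0)
alice = rng.normal(size=128)
bob = rng.normal(size=128)
gallery = {"alice": alice, "bob": bob}

# A new image of Alice, captured under slightly different conditions.
probe = alice + rng.normal(scale=0.1, size=128)

print(verify(probe, alice))      # True: the claimed identity is accepted
print(verify(probe, bob))        # False: an impostor claim is rejected
print(identify(probe, gallery))  # ('alice', score): the closest gallery entry above the threshold
```

In a sketch like this, raising the threshold makes false positives (incorrect matches) less likely but false negatives (missed matches) more likely, and lowering it does the opposite; the threshold is an operating choice made by the system deployer rather than a property of the algorithm alone.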
The 2012 White House framework also presents privacy principles in the form of a proposed Consumer Privacy Bill of Rights, and states that the codes of conduct developed in NTIA’s multistakeholder processes should specify how those principles would apply to different technologies or markets. FTC is a law enforcement agency that plays a role in enforcing key privacy and consumer protection laws. In December 2011, FTC hosted a workshop—Face Facts: A Forum on Facial Recognition Technology—that explored privacy issues associated with facial recognition technology. It issued a staff report in October 2012 that synthesized those discussions and recommended best practices for the use of the technology in the context of protecting consumer privacy. Facial recognition technology is currently being used in a number of U.S. commercial applications for functions including safety and security, secure access, and marketing and customer service. However, the full extent of its present use is not known. Facial recognition technology offers applications beneficial to both consumers and businesses, according to industry representatives and other stakeholders. The International Biometrics & Identification Association has noted that facial recognition and other biometric technologies have until recently been used most prominently by government and law enforcement agencies, such as to protect borders and ports and to identify criminals. However, FTC staff and industry sources have reported that commercial interest and investment in facial recognition technology have grown as the technology has become more accurate and less costly, with new applications being developed for consumers and businesses. The Direct Marketing Association has said that businesses are finding that facial recognition technology can be used as a way to communicate with consumers and provide new tools, products, and services. Industry trade organizations and companies that use and develop facial recognition software have cited four major types of functions that they say do or will benefit from facial recognition technology: photograph identification and organization; safety and security; secure access; and marketing and customer service. Photograph identification and organization. One of the most well-known current uses of facial recognition technology is photograph identification in social networking applications. For example, some of the top social networking applications use facial recognition technology to identify individuals in photographs. In testimony before Congress, a representative of one such application noted that this allows users to instantaneously link photographs from birthdays, vacations, and other events with people who participated. In addition, several applications use facial recognition technology to help individuals organize personal photographs stored online or on computer drives. For example, several photograph management software programs can detect individuals, such as family members, whom the user has asked to be identified. The programs then automatically add new photographs of these individuals to a photograph album created by the user. Safety and security. Some retailers, casinos, financial institutions, and apartment buildings use facial recognition technology for safety and security purposes. According to the National Retail Federation, some retailers in the United States are testing systems that use facial recognition technology with closed-circuit television for theft prevention.
According to one vendor of such a system with whom we spoke, security cameras in a retail location compare images of individuals who walk into a store against a database of images of known shoplifters, members of organized retail crime syndicates, or other persons of interest. If a match is found, security personnel or management are alerted and provided whatever information is known about the individual. Some casinos in the United States similarly use facial recognition systems to help them identify known or suspected gambling cheaters, members of organized crime networks, or other known persons of concern. Facial recognition technology has also been incorporated into the security systems of some financial institutions to identify robbery suspects or accomplices. According to a vendor of this technology, these systems deter crime and help identify suspects much faster than traditional means, which require staff to spend hours reviewing video recordings. Facial recognition systems have also been used in large apartment buildings to help identify perpetrators of crimes or other known persons of concern who seek to enter the property, according to one software vendor with whom we met. Secure access. Facial recognition technology can be used to provide secure physical access control to buildings or other locked areas. For example, some systems unlock a door after a camera confirms the user’s identity through facial recognition. In addition, applications exist that allow users to unlock personal computers and smartphones, log into video game consoles, or record workplace time and attendance by recognizing their face, in lieu of using a password or personal identification number. Some systems can distinguish whether an image is live to prevent the use of printed photographs to gain access. Industry representatives have noted that these applications have the benefit of not requiring consumers to remember a password and may eventually become an effective voluntary alternative to the use of passwords to access online transactions. Marketing and customer service. Industry trade organizations have said they envision retailers and others using facial recognition technology to target marketing and advertising more effectively and improve customer service. The Direct Marketing Association has stated that facial recognition technology has the potential to help businesses provide more customized and improved products and services, conduct market research and product development, provide more tailored and relevant messaging and advertising, and offer a more secure shopping experience. Facial recognition technology is already used in digital signs—usually televisions or kiosks displaying advertisements in stores— with cameras that recognize characteristics of the viewer, such as gender or age range, and target advertisements accordingly. This allows retailers and advertisers to show relevant products and deals in real time, possibly leading to more sales, according to the Digital Signage Federation. In the future, such signs may be used to identify customers by name and target advertising to them based on past purchases or other personal information available about them, according to FTC staff. Facial recognition systems can also be designed to alert staff when known customers enter the store, according to a software vendor with whom we spoke. 
Representatives of the National Retail Federation told us they could envision retailers using facial recognition systems to track customer movements around the store to provide the customer with a better shopping experience. Other uses. Several other current or potential uses for facial recognition technology have been cited by industry stakeholders: Facial search engines. Internet search engines are being developed to allow users to conduct a search using a facial image, or to enter a name to search for images that match the name. Online dating. Some online dating companies use facial recognition to determine the facial features a user finds most attractive and search their databases for individuals with similar features. Memory support. A memory support application for smartphones assists people with prosopagnosia (face blindness) or other memory-related conditions by confirming the identities and providing the names of family members, friends, caregivers, or others. Hospitality. Facial recognition technology can be used by the hotel and guest services industries to identify guests and enable personalized service without having to ask for a guest name or a room number. NTIA staff told us that the next major expansion for facial recognition technology could be in mobile applications for consumers. Industry representatives and some experts and privacy advocacy organizations have noted that the technology can be deployed in cell phone applications to compare faces captured by the phone to a database of facial images. Some academics have noted that in the future, these types of applications could be integrated into wearable systems, such as eyeglasses. Facial recognition technology is currently being used in a number of commercial applications in the United States, but the full extent of its present use is not known. The International Biometrics & Identification Association, other industry trade organizations, and FTC staff told us they knew of no comprehensive, reliable information on the extent to which U.S. businesses use facial recognition technology. Similarly, our review of literature associated with the technology identified no such data. Representatives of the National Retail Federation and Retail Industry Leaders Association told us that their sense was that retailers are not using the technology broadly. Several large companies contacted on our behalf by trade associations declined to speak with us about their use of the technology. An industry trade organization representative told us that companies may be reluctant to discuss the technology for competitive reasons. Two applications in which facial recognition technology now appears to be widely used are photograph identification and management and secure access. One large social networking service with more than 1 billion monthly users started using facial recognition technology in 2011 to facilitate “tagging” users’ friends in photographs. Other large companies have incorporated the technology into photograph management and social networking applications. Representatives of six other top social networking companies told us that they do not currently use facial recognition technology. Facial recognition technology also has become relatively widely used in providing secure access. For example, versions of one major operating system allow users to unlock devices via facial recognition, as do two of the best-selling home video game systems.
Many other companies also offer hardware, software programs, or mobile phone applications using facial recognition technology for photograph management or secure access. In contrast, our review found that less is known about the current prevalence of facial recognition for marketing and security uses. According to the World Privacy Forum, some companies in Europe and Asia currently use facial recognition technology to enhance marketing and customer service, but such use in the United States is less common. A representative of the National Retail Federation told us that U.S. retailers were exploring facial recognition for such purposes, but were taking a slower approach than their overseas counterparts because of concerns over customer reaction. According to the Digital Signage Federation, some digital signs in the United States, such as video monitors displaying advertisements in stores, are used to detect a face or characteristics, such as age and gender, for targeted marketing. Some safety and security applications using facial recognition technology are marketed to retailers, casinos, financial institutions, and other businesses, but the extent of their use is uncertain. A representative of the National Retail Federation said that many retailers were at least in the early stages of looking into facial recognition systems for security purposes, but knew of no data on their current use. A representative of the American Gaming Association told us that facial recognition does not appear to be widely used in U.S. casinos, but that it did not have comprehensive data on such use. Representatives from the American Bankers Association and Financial Services Roundtable told us that they were unaware of any data on the extent of use of facial recognition technology by financial institutions. The American Bankers Association representative said at least one major U.S. bank uses facial recognition technology to identify robbery suspects, but two other major banks stated the technology was not in broad use by financial institutions because of concerns over its accuracy. The International Biometrics & Identification Association told us that some businesses are reluctant to disclose use of the technology because publicizing their specific security practices can diminish their effectiveness. A number of stakeholders—including federal agencies, privacy and consumer groups, and some industry representatives—have identified privacy issues related to commercial use of facial recognition technology. In particular, concerns have been raised about the technology’s potential to identify and track individuals in public without their knowledge, and about the collection, use, and sharing of personal data associated with the technology. However, some industry stakeholders have argued that the technology does not present new or unusual privacy risks, or that such risks can be mitigated. While acknowledging the potential benefits of commercial use of facial recognition technology, government agencies, privacy advocacy organizations, academics, and others have raised a number of privacy concerns about the technology and its future direction. As noted earlier, facial recognition technology continues to improve rapidly in accuracy. Further, individuals continue to upload billions of pictures to social networking and other Internet sites, creating a vast repository of facial images that are often linked to names or other personal information.
The convergence of these two trends may make it technically feasible one day to identify almost any individual in a wide range of public spaces, according to some privacy advocacy organizations and others. Key privacy concerns related to the commercial application of facial recognition technology have generally centered around (1) its effect on the ability of individuals to remain relatively anonymous in public; (2) the capacity to track individuals across locations; and (3) use of facial recognition without individuals’ knowledge or consent. During the NTIA multistakeholder process, some participants expressed concern that facial recognition technology could affect personal privacy by reducing individuals’ ability to be anonymous when in a public or commercial space, such as a sidewalk or store. The Center for Democracy & Technology has noted that when most individuals are in public, they expect a few people or businesses to recognize their face, but fewer to connect a name to their face, and even fewer to associate their face with Internet behavior, travel patterns, or other profiles. Commercial use of facial recognition technology for identification purposes, the group states, has the potential to change this dynamic by allowing companies or individuals to collect information on any individual captured by a camera. Privacy advocacy organizations and academics have expressed concern that as being remotely identified in commercial settings becomes more common, some individuals may be uncomfortable visiting certain places, shopping at certain establishments, or assembling in public for a cause they support. Further, the Electronic Privacy Information Center has stated that individuals lose some control over their identity if they are not allowed to choose whether or not they want to remain anonymous in public. The organization has also noted that additional privacy concerns would be raised by use of the technology to identify not just who someone is, but whom they are with. Some participants in the NTIA multistakeholder process and others have expressed concern that facial recognition technology could be used to track individuals’ movements in public, which they said could erode personal privacy. Industry sources told us that, at present, facial recognition technology is not used in a commercial context to track consumers on any widespread scale. However, the Center for Democracy & Technology has stated that if use of the technology to identify individuals in public were deployed widely enough in the future, and if businesses shared facial recognition data with one another, the result could be a network of cameras that readily tracked a consumer’s movements from location to location. Further, it noted that unlike other tracking methods, facial recognition does not require an individual to wear a special device or tag, which reduces individuals’ ability to avoid unwanted tracking. A representative of the World Privacy Forum told us that most consumers would find it invasive of their privacy for security cameras to be used to track their movements for marketing purposes. Likewise, FTC staff have reported that the privacy risks for consumers would increase if companies began using images gathered through digital signs to track consumers across stores. These issues are underscored by concerns consumers have expressed about being tracked in other contexts. 
For example, a 2009 consumer survey on marketing found that about two-thirds of respondents did not want online advertising targeted to them if it involved having their offline activity tracked. Likewise, a representative of the National Retail Federation told us that customers of a major department store chain reacted negatively after the store posted signs disclosing that customers’ movements within the store were being tracked via their mobile phones. Another concern that has been raised is the commercial use of facial recognition technology for identification or verification without individuals’ knowledge or consent. Unlike other biometrics, such as fingerprint identification, facial recognition technology can be used to capture a face remotely and without the individual’s knowledge. In addition, even if consumers are notified and given the option to opt out of the technology, that option may become less feasible as the use of this technology grows. Some industry trade organizations have acknowledged these concerns and expressed caution over deploying the technology in certain contexts without consumer notification and consent. For example, the Software & Information Industry Association said that firms may need to obtain consent prior to deploying the technology in digital signs to identify individuals and record their interests and preferences. The Computer and Communications Industry Association stated that firms’ use of facial recognition technology to match individuals’ names and other biographical information with their faces should be transparent and provide individuals the ability to opt out. Privacy advocacy organizations, government agencies, academics, and some industry representatives also have raised privacy and security issues associated with personal data collected in conjunction with commercial use of facial recognition technology. Many of these issues mirror concerns about the collection, use, and sharing of personal data more broadly by commercial entities. Key data privacy issues that have been raised with regard to facial recognition technology in particular include the following: Consumer control over personal information. Some privacy advocacy organizations and others have reported that, like other forms of personal data, information that is collected or associated with facial recognition technology could be used, shared, or sold in ways that consumers do not understand, anticipate, or consent to. Commenters to the FTC Face Facts Forum noted that the proliferation in recent years of information resellers, and of data sharing among third parties, raises questions about whether faceprints and associated personal data may one day be sold or shared. Facial recognition data may be particularly valuable to marketers because they potentially could link a person’s online presence and offline presence, according to two experts we spoke with. One privacy expert told us that risks to consumer privacy would increase if retailers were to develop relationships with social networking sites, which typically possess consumers’ facial images and detailed personal information that could be used for marketing. As we concluded in our September 2013 report on information resellers, consumers generally do not have the right to prevent their personal information from being collected, used, or shared for marketing purposes. Data security. 
Industry trade organizations, government agencies, and privacy advocacy organizations have noted that commercial use of facial recognition technology raises the same security concerns as those associated with any personal data. Participants in the NTIA multistakeholder process noted that facial recognition data could be subject to data breaches that result in sensitive biometric data being revealed to unauthorized entities. Because a person’s face is unique, permanent (absent surgery), and therefore irrevocable, a breach involving data derived from or related to facial recognition technology may have more serious consequences than the breach of other information, such as passwords or credit card numbers, which can be changed. The Electronic Privacy Information Center has stated that the risk of theft of data associated with the technology could increase the possibility of identity theft, harassment, and stalking. The International Biometrics & Identification Association has said that faceprints, when stored with other identity data, should be considered personally identifiable information and provided all the security and privacy protections bestowed upon these personal data. At the same time, industry representatives have noted that security concerns are mitigated, to some extent, because at present faceprint algorithms are specific to a vendor and of little use outside that vendor’s system if obtained through a breach. Misidentification. Industry and other stakeholders have said that facial recognition technology may generate more matching errors than other forms of biometric identification because facial recognition technology systems are currently less accurate than other biometrics. Representatives of the Electronic Privacy Information Center have expressed concern that if someone’s image is captured and misidentified, adverse information—such as an incorrect identification of an individual as a shoplifter—could propagate in the long-term throughout different commercial systems, sometimes without the individual’s knowledge. Disparate treatment. Some stakeholders in the NTIA multistakeholder process expressed concerns about disparate treatment for certain groups based on information derived from facial recognition systems. They also noted that individuals who declined to consent to a retailer’s request for facial recognition could be denied access to certain products or services. The World Privacy Forum expressed the view that digital signage networks have the potential to create a new form of marketing surveillance that it believes raises the possibility of unfairness, discrimination, and abuses of personal information. In addition, the Center for Democracy & Technology has expressed concerns about the use of facial recognition technology for classification purposes, such as to detect gender, race, and age range. The organization expressed the view that this could lead to profiling—the use of personal characteristics or behavior patterns to make generalizations about a person—that could lead to, for example, price discrimination for certain groups. In contrast, some industry representatives have argued that commercial use of facial recognition technology does not present new or unusual privacy risks, that risks that do exist can be mitigated, and that any potential loss of privacy should be weighed against the benefits the technology confers. 
In position papers, other written materials, or in interviews we conducted, some industry stakeholders have expressed the following views: Individuals should not expect complete anonymity in public. The National Retail Federation and the International Biometrics & Identification Association have argued that individuals effectively give up some of their anonymity when they make their faces public. The latter has contended that privacy and anonymity are not the same and that losing complete anonymity is not tantamount to a surrender of privacy. Further, the organization has argued that capturing a facial image or faceprint in public does not necessarily remove an individual’s anonymity because it does not directly reveal a name, Social Security number, or any other personal information. Surveillance is already part of our daily life. The International Biometrics & Identification Association has noted that commercial entities already routinely have security cameras and that facial recognition does not increase their use. Further, it says that privacy advocacy organizations may have overstated the capabilities of facial recognition technology systems, noting that cameras generally are not interconnected and that it is not practical to conceive of a commercial application that would use multiple cameras to track individuals’ movements. Consumers have shown a willingness to give up some privacy for the benefits technology offers. Industry stakeholders have generally noted that there are inherent trade-offs between some loss of privacy and the benefits that new technologies confer to consumers and businesses, and to economic growth in general. As we noted in our September 2013 report on information resellers, representatives of the marketing and information technology industries, among others, have argued that consumers’ expectations and notion of privacy have changed in an era of innovative technologies. For example, they said, consumers have shown they are willing to share private information in public settings—such as by posting to social networking sites in order to gain such benefits as photograph sharing and management. The need for consent should depend on the context. Industry trade organizations including the Software & Information Industry Association, Computer and Communications Industry Association, and National Retail Federation have stated that the need for consumer consent should depend on the context under which facial recognition technology is used. For example, two of the trade organizations say that businesses that use the technology for security may not need to obtain consent before using the technology, as opposed to social networking sites that have repositories of facial images to identify individuals more broadly. Facial recognition technology should not be singled out. Some industry representatives have stated that the privacy issues associated with facial recognition technology are largely the same as those for any biometric technologies—particularly for emerging technologies like voice or gait recognition, which also can identify individuals from afar without their knowledge. Several facial recognition technology companies have said that policymakers should focus on protecting personal information, which would include all biometrics, not just facial recognition technology. Several government, industry, and privacy organizations have proposed or are developing suggested privacy guidelines for commercial use of facial recognition technology. 
Firms may describe how they collect, use, and store data in published privacy policies, and the policies we reviewed varied in whether and how they addressed facial recognition technology. Several different groups, including a government agency, industry trade organizations, and a privacy advocacy organization, have proposed, or are in the process of developing, privacy guidelines or best practices for commercial use of facial recognition technology. Most of these guidelines are based at least to some extent on the Fair Information Practice Principles, a set of internationally recognized principles for balancing the privacy and security of personal information with other interests. Our review found some areas of agreement in the different organizations’ recommended practices—for example, all recommended that users of facial recognition technology publish a privacy policy describing their data collection practices. However, the guidelines differed in other key areas, such as whether and when firms should obtain individuals’ consent before using the technology to identify them. As of June 2015, NTIA’s multistakeholder process for facial recognition was ongoing. As noted previously, the goal of the process is to develop a voluntary, enforceable code of conduct for facial recognition technology that incorporates privacy principles outlined in the 2012 White House privacy framework. Once the process is complete, companies for which the code is relevant may commit to abide by it, and a company’s adherence to the code, once it has made such a commitment, is enforceable by FTC. The process is open to the public and has involved a series of meetings held in Washington, D.C., with remote access via teleconference and webcasting. At the 11 meetings that NTIA had convened as of June 2015, stakeholders discussed the privacy and technical issues related to current and potential uses of facial recognition technology, and how a code of conduct might address those issues. Topics of discussion have included issues to be considered in drafting a code of conduct, who its provisions should apply to, the circumstances under which informed consent should be obtained from consumers, and the impact of a code of conduct. The participants with whom we spoke had mixed views on the process. One industry participant said that the meetings had provided a good forum for discussing concerns about the technology, while other participants expressed concern that the process was moving slowly or that any resulting code of conduct would not be widely adopted or provide real privacy protections. Some participants expressed disappointment that there has not been greater involvement by social networking services, given that they are major users of the technology and possess large numbers of faceprints. In June 2015, nine privacy and consumer groups issued a joint statement announcing that they were withdrawing from the multistakeholder process, stating that the process was unlikely to yield a set of privacy rules that offers adequate protections for the use of facial recognition technology. Other stakeholders decided to continue the discussions toward a code of conduct, and the next meeting was scheduled for July 28, 2015. In August 2014, the International Biometrics & Identification Association released “Privacy Best Practice Recommendations for Commercial Biometric Use,” which it also submitted as part of the NTIA process.
Key elements of these recommendations for businesses include the following: Users of biometric technologies, which include facial recognition technology, should publish privacy policies, which should specify the types and purposes of the biometric data captured, the nonbiometric data that are being associated with the biometric data, and the amount of time that biometric data will be stored. When using biometrics to detect or classify an individual, businesses should post a general notice. When using biometrics to identify an individual, whether to provide notice depends on the context; for example, universal notification may be impractical, the association said, if the technology is used to identify every person entering an office building. Firms should use good cybersecurity practices to protect any information collected or retained; provide a mechanism for consumers to obtain a record of data maintained on them and have that data corrected if necessary or removed; restrict third-party access to biometrics unless that access is disclosed as a purpose of the data collection; and maintain an appropriate audit trail for accountability. The International Biometrics & Identification Association does not currently track implementation of these recommendations and refers to them as “general guidelines.” The association also says that it leaves it to those using the technology to determine what is most appropriate given the application and its purpose, the risk and consequence of abuse, and the nonbiometric data used. The American Civil Liberties Union and the Center for Digital Democracy have been critical of these best practices, objecting, for example, to the statement that it is impractical to obtain consent for use of facial recognition from consumers entering buildings. The American Civil Liberties Union issued “An Ethical Framework for Facial Recognition” in May 2014, and provided it for discussion during the NTIA multistakeholder process. The framework recommends stricter standards for use, notice, and consent than those recommended by the International Biometrics & Identification Association, including that users of facial recognition technology should not use the technology to determine an individual’s race, color, religion, sex, national origin, disability, or age; prominently notify individuals when facial recognition is in operation; obtain specific consent from an individual before storing a photograph or faceprint of that person or sharing any facial recognition data with a third party; allow individuals to access, correct, and delete their faceprint information; and consider what special precautions might be needed when using a facial recognition system with teenagers. In February 2011, the Digital Signage Federation issued privacy standards for its members that it developed in collaboration with the Center for Democracy & Technology. Among other things, the standards state that companies should disclose in privacy policies the data collected by digital signs and the purpose of the data; obtain affirmative consent before using facial recognition to identify an individual; notify consumers at the physical location of the sign when using facial recognition (or other means) to collect other information about an individual, such as age range or gender; not share data for any uses incompatible with those specified in the privacy policy; and allow consumers to submit complaints and request access to their data.
The Digital Signage Federation has a process to certify that member firms are abiding by these standards, and the federation reported that 28 of its 221 member firms had been certified as of May 2015. FTC issued a staff report in October 2012 that included recommended best practices for commercial uses of facial recognition technology to protect consumer privacy. The report synthesized the discussions and comments from FTC’s December 2011 Face Facts Forum to explore advances in facial recognition technologies, current and possible future commercial uses, ways consumers can benefit from these uses, and privacy and security concerns. FTC staff said the best practices are intended to provide guidance to commercial entities that are using or plan to use facial recognition technologies in their products and services, while promoting innovation. The best practices were based on core principles outlined in FTC’s March 2012 report on consumer data privacy. These principles included providing consumers with meaningful choices about use of their data at a relevant time and context, ensuring that practices related to information collection and use are transparent, and building in privacy protections during product development. Among the best practices recommended for companies using facial recognition technology were obtaining individuals’ affirmative consent before identifying them in anonymous images to someone who could not otherwise identify them; providing clear notice to individuals when using facial recognition technology; providing individuals with a choice about whether any data collected with facial recognition technology are shared with third parties; and implementing a specified retention period for personal data and disposing of stored images once they are no longer necessary for the purpose for which they were collected. One FTC commissioner issued a dissenting statement with the staff report, stating there was little evidence that facial recognition technology was likely to cause tangible injuries to consumers in the near future, and that following the staff report’s recommendations would create a burden on businesses in many contexts. Representatives of several privacy advocacy organizations told us they believed the FTC staff report provided a good summary of the issues but had not resulted in noticeable changes in industry practices. In addition to best practices and codes of conduct, some stakeholders have advocated addressing consumer privacy through “privacy by design”—the practice of building in consumer privacy protections at every stage of product development. For example, FTC staff and the Privacy Rights Clearinghouse have both noted that systems can be designed to ensure that data collected by facial recognition technology are not used beyond specified purposes, either by automatically deleting the data after they are used, or ensuring the data cannot be repurposed. The International Biometrics & Identification Association has cited the need for systems that block “web-crawlers,” which seek to surreptitiously gather images and other information from websites containing facial images and other personal data. Firms that develop facial recognition technology also told us that some measures can be taken to build privacy controls into facial recognition systems. These include segregating biometric data from other personal data, as well as encrypting data, which can be especially important for entities that possess large numbers of photographs matched with other identifying information.
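As a rough sketch of the privacy-by-design measures just described (keeping faceprint templates separate from other personal data, encrypting them at rest, and deleting them automatically after a set retention period), the following illustration shows one way such controls might be combined. It assumes the third-party cryptography package for symmetric encryption; the 30-day retention period, identifiers, and class name are hypothetical, and the sketch is not drawn from any vendor's product.

```python
# Hypothetical privacy-by-design sketch: faceprint templates are stored
# encrypted and separately from other personal data, and are purged
# automatically once an assumed 30-day retention period has elapsed.
import time
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

from cryptography.fernet import Fernet  # third-party symmetric encryption

RETENTION_SECONDS = 30 * 24 * 3600  # assumed retention period, not a recommendation


@dataclass
class FaceprintVault:
    """Holds only encrypted templates keyed by an opaque ID; names, purchase
    histories, and other personal data would live in a separate system."""
    key: bytes = field(default_factory=Fernet.generate_key)
    _records: Dict[str, Tuple[bytes, float]] = field(default_factory=dict)

    def enroll(self, opaque_id: str, template: bytes) -> None:
        """Encrypt and store a template along with its creation time."""
        self._records[opaque_id] = (Fernet(self.key).encrypt(template), time.time())

    def get(self, opaque_id: str) -> Optional[bytes]:
        """Decrypt and return a stored template, if it still exists."""
        entry = self._records.get(opaque_id)
        return Fernet(self.key).decrypt(entry[0]) if entry else None

    def purge_expired(self, now: Optional[float] = None) -> int:
        """Delete templates older than the retention period; return how many."""
        now = time.time() if now is None else now
        expired = [oid for oid, (_, created) in self._records.items()
                   if now - created > RETENTION_SECONDS]
        for oid in expired:
            del self._records[oid]
        return len(expired)


vault = FaceprintVault()
vault.enroll("visitor-0001", b"example-template-bytes")
vault.purge_expired()  # would remove the record once the retention period passes
```

Keeping the encrypted template store separate from other customer records, and purging it on a schedule, reflects the kinds of controls described above.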
However, some representatives of industry and privacy advocacy organizations have argued that privacy by design has limitations. Industry representatives noted that facial recognition technology systems generally are built to be flexible and provide users with the option to choose among different levels of privacy protection. This may allow users to bypass privacy protections, such as a data retention time frame, that have been included in the system’s design. We reviewed the written privacy policies, as of May 2015, of selected businesses in three industries—social networking, retail, and gaming—to identify whether and how these policies expressly addressed facial recognition technology. In published privacy policies, firms may describe the data they collect, their uses for these data and circumstances under which they may be shared with third parties, and how these data are stored. Firms that publish a privacy policy may not be required to describe their use of facial recognition technology, if any. Therefore, some firms may be using facial recognition technology even if they do not address the technology in their privacy policies. However, because we were unable to determine whether most companies we selected used facial recognition technology, their failure to mention it in their privacy policies could also mean that they do not use the technology. Two companies operating social networking applications that use facial recognition technology expressly address how they use the technology in their privacy policies and associated documents. One of these companies automatically uses facial recognition technology to facilitate tagging a user’s friends in photographs unless the user opts out. The user can turn off the facial recognition feature, at which time the user’s facial templates are deleted. Users can also “untag” themselves in photographs. Company representatives told us that the firm’s approach to privacy is to provide multiple opportunities for users to exercise control over their data—for example, through privacy settings and tools that allow users to select who can see their personal information and content. Company representatives told us that the firm has no plans at this time to share facial recognition faceprints with third parties and that personal information associated with facial recognition technology would not be shared without user consent. In contrast, the second company requires users to approve the use of its face tagging feature before it is enabled, which company representatives told us is in accordance with the FTC staff report’s suggested best practices. That company told us it does not share and has no plans to share personal information associated with facial recognition technology without user consent, except in very limited circumstances as described in its privacy policy (i.e., with domain administrators, for external processing by company affiliates or trusted parties, or for legal reasons). This company has also developed a digital eyeglass product. In a June 2013 letter to a Member of Congress, the company said it would not provide facial recognition capabilities for that product, or allow third-party developers to do so, until it had strong privacy protections in place. The letter also stated that the company would prohibit developers from disabling a feature that would alert the public when the digital eyeglass product was being used to take a photograph or video. 
The firm also stated that the product was covered by the company’s privacy policy and that no changes were contemplated for its privacy policy to specifically address the product. The five major retail chains and four large casino companies that we selected did not expressly address facial recognition technology in their written privacy policies, although as indicated earlier, we were unable to determine whether these firms use the technology. A representative from the National Retail Federation told us that retailers’ privacy policies are written broadly enough to address facial recognition technology, even if they do not expressly mention it. The privacy policies of three of the five retailers we reviewed did address their stores’ use of video cameras (which can be a component of facial recognition systems). All three policies specified that the cameras are used for security purposes or to measure store traffic, while one policy noted that the cameras were used for collecting information about their customers. None of the five policies made reference to the use of digital signs, which, as noted earlier, can incorporate facial recognition technology. Similarly, none of the privacy policies of the four large U.S.-based casino companies we reviewed made reference to the use of facial recognition technology in the United States. Some federal laws may potentially be applicable to the commercial use of facial recognition technology, but these laws do not fully address the privacy concerns that have been raised about facial recognition technology by some stakeholders. Views differ on the need for additional legislation. The United States does not have a comprehensive privacy law governing the collection, use, and sale of personal information by private-sector companies. In addition, we did not identify any federal laws that expressly regulate commercial uses of facial recognition technology in particular. However, there are three areas in which certain federal laws that address privacy and consumer protection may potentially apply to commercial uses of facial recognition technology: (1) the capture of facial images; (2) the collection, use, and sharing of personal data; and (3) unfair or deceptive acts or practices, such as failure to comply with a company’s stated privacy policies. Generally, individuals can take pictures while on public property and in commercial spaces open to the public unless prohibited by the business or property owner. We did not identify federal laws that generally restrict the capture of facial images. One federal law, the Video Voyeurism Prevention Act of 2004, prohibits the taking of certain types of pictures in limited circumstances. The act makes it a crime to intentionally capture an image of a “private area” of an individual without his or her consent, or to knowingly do so under circumstances in which that individual has a reasonable expectation of privacy. While the definition of a private area does not include a person’s face, the act could affect the specific placement of cameras in certain parts of commercial spaces, such as retail stores’ dressing rooms. Federal laws addressing privacy issues in the private sector are generally tailored to specific purposes, situations, types of information, or sectors or entities. In general, these laws, among other things, limit the disclosure of certain types of information to a third party without an individual’s consent, or prohibit certain types of data collection. 
Some of these laws also set standards for how certain personal data should be stored and disposed of securely. These laws may potentially apply to facial recognition technology in two ways. First, they may potentially limit some firms’ ability to share data collected with facial recognition technology, such as a person’s image and faceprint, or a person’s location at a given time. Second, these laws may potentially limit firms’ access to personal information that could be used in connection with the technology, such as a person’s photograph, name, age, address, or purchase history. As shown in table 1, the general applicability of these laws depends on some combination of the source of the data, the means of collection, the entity collecting the data, the type of data being collected, and the purpose for which the data are being used. For additional details on these federal laws and their potential applicability to facial recognition technology, see appendix II.

Table 1: Federal Laws Specifically Addressing the Collection, Use, and Storage of Personal Information

Driver’s Privacy Protection Act: Addresses the use and disclosure of personal information contained in state motor vehicle records.
Gramm-Leach-Bliley Act: Governs the disclosure of nonpublic information collected by financial institutions, and sets standards for data security.
Health Insurance Portability and Accountability Act: Governs the disclosure of individually identifiable health information collected by covered health care entities, and sets standards for data security.
Fair Credit Reporting Act: Governs the disclosure of personal information collected or used for eligibility determinations for such things as credit, insurance, or employment.
Family Educational Rights and Privacy Act: Governs the disclosure of personally identifiable information from education records.
Children’s Online Privacy Protection Act: Generally prohibits the online collection of personal information from children under 13 without verifiable parental consent.
Electronic Communications Privacy Act: Prohibits the interception and disclosure of electronic communications by third parties unless a specified exception applies.
Computer Fraud and Abuse Act: Prohibits obtaining information from a protected computer through the intentional access of a computer without authorization or exceeding authorized access.

Section 5 of the Federal Trade Commission Act (FTC Act) authorizes FTC to take action against unfair or deceptive acts or practices in or affecting commerce. Although the act does not explicitly grant FTC the authority to protect privacy, FTC has interpreted it to apply to deceptions or violations of written privacy policies. For example, if a retailer has a written privacy policy stating it does not use facial recognition technology to identify customers and later breaches the policy by doing so, FTC staff have stated the agency could take enforcement action against the retailer if it determined such violation constituted a deceptive practice. Likewise, FTC staff told us that the act could apply if firms violate written privacy policies on how they use or share personal information that was collected in conjunction with facial recognition technology. In addition, according to FTC staff, the FTC Act’s “unfairness” authority could apply even in the absence of a privacy policy, in situations where an act or practice causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.
FTC staff told us that as of June 2015, the agency had not taken any enforcement actions on privacy issues specifically with regard to the use of facial recognition technology or other biometrics. However, more generally, the agency had brought more than 40 cases related to privacy, and more than 50 cases related to data security, against companies that had engaged in unfair or deceptive practices. Two states—Texas and Illinois—have adopted privacy laws that expressly address commercial uses of biometric identifiers, including scans of face geometry such as those gathered through facial recognition technology. A report by a committee of the Texas House of Representatives noted that there were concerns that biometric data were increasingly becoming a target of identity theft and needed to be safeguarded. Similarly, the Illinois General Assembly noted that use of biometrics was growing in the business and security screening sectors and that the ramifications of this technology were not fully known. Both the Texas and Illinois laws require that before collecting a biometric identifier of an individual, a private entity must obtain that individual’s consent; prohibit an entity in possession of a biometric identifier from sharing that person’s biometric identifier with a third party, unless the disclosure meets an exception, such as for law enforcement or to complete a financial transaction that the individual requested or authorized; and govern the retention of biometric records, including requirements for protecting biometric information and destroying such information after a certain period of time. In addition, according to the National Conference of State Legislatures, most states have general privacy laws applicable to personal data, which may also potentially apply to information collected with, or otherwise connected to, facial recognition technology. As of January 2015, according to the organization, 47 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands had enacted legislation requiring companies to notify residents if their personal information in the companies’ custody was compromised. Further, at least 32 states and Puerto Rico have enacted laws that require entities to destroy, dispose of, or otherwise make personal information unreadable or undecipherable after it is no longer being used or after a specified amount of time, according to the National Conference of State Legislatures. Federal laws do not fully address the privacy issues that stakeholders have identified with commercial uses of facial recognition technology. As previously discussed, no federal law expressly regulates commercial use of facial recognition technology. This means that federal law does not expressly regulate the circumstances under which facial recognition technology may be used by commercial entities to identify and track someone with whom they have no prior relationship. Further, federal law does not expressly regulate whether and when firms should notify consumers of their use of facial recognition technology, or seek an individual’s consent prior to identifying them, which has been an area of concern to privacy advocacy organizations and others. Certain federal laws do address the collection, use, and sale of personal information by private-sector companies, as discussed earlier. These laws could potentially restrict, in certain circumstances, the collection of facial images, which are used to build a database for use with facial recognition technology.
For example, provisions in the Driver’s Privacy Protection Act restrict state motor vehicle bureaus from selling drivers’ license photographs and associated information to private parties. In addition, the Gramm-Leach-Bliley Act and Health Insurance Portability and Accountability Act potentially could restrict the ability of banks and health care providers to share data collected with facial recognition technology if those data were to fall within the laws’ definitions of protected information. However, the reach of these laws is limited because they generally apply only for specific purposes, in certain situations, to certain sectors, or to certain types of entities. As a result, they do not comprehensively address how commercial entities other than those explicitly covered by the specific laws may collect, use, or share personal data in conjunction with facial recognition technology. Additionally, depending on the context, federal law does not require firms that collect personal information to follow detailed standards for verifying the accuracy of data developed through computer matching. As noted earlier, facial recognition algorithms may misidentify individuals and may not be as accurate as other forms of biometric identifiers because of technological challenges in accurately matching photographs and because users typically can adjust settings for the degree of accuracy required for a match. In a 2012 report, FTC stated that measures to ensure the accuracy of the consumer data that companies collect and maintain should be scaled to the intended use and sensitivity of the information. One privacy expert we met with stated that in some commercial contexts facial recognition technology is used to identify criminals, such as shoplifters. As such, misidentifying an innocent individual could be detrimental. In most contexts, federal law may not provide consumers with the right to correct or delete inaccurate personal information collected by commercial firms. Some privacy advocacy organizations have argued for new legislation or regulation to address privacy issues associated with facial recognition technology. The American Civil Liberties Union has contended that government intervention and statutorily created legal protections are needed to protect against the negative effects of this technology. In testimony before the Senate in 2012, the Electronic Frontier Foundation stated that because of the risk that faceprints will be collected without individuals’ knowledge, rules should define clear notice requirements to alert people that a faceprint has been collected and include information on how to request that data collected on them be removed. One academic has argued specifically for legislation that would require individuals’ explicit consent before a company could capture or use their faceprint, and would require companies to provide individuals with information about the uses of their biometric data. The Center for Digital Democracy has urged FTC to recommend new safeguards for adolescents relating to facial recognition. The Center for Democracy & Technology has stated that the technology poses complex privacy issues that do not fit squarely with present laws because these laws apply only indirectly to facial recognition and offer consumers no real choices with regard to the technology. Views vary on the approach that additional privacy legislation or regulation, if any, should take to address these issues. 
The Center for Democracy & Technology argues that current federal privacy law is a confusing patchwork targeting discrete economic sectors with different rules. Therefore, it contends that Congress should not pass privacy legislation for facial recognition technology alone, but rather as a comprehensive framework that protects all personal information. One industry representative told us he believed that federal privacy legislation could benefit firms by clearly defining acceptable practices, but that any legislation should focus comprehensively on personal data rather than a specific technology. In February 2015, the White House proposed draft legislation to establish baseline protections for individual privacy in the commercial arena. However, one academic privacy expert told us he believed that legal protections specific to facial recognition technology are needed because, as a practical matter, comprehensive privacy legislation is unlikely to be enacted in the foreseeable future. In comments submitted to FTC, several industry groups noted that facial recognition technology should not be regulated in a “one size fits all” manner. One company also noted that the privacy issues that arise from using facial recognition to surreptitiously identify a person in public are very different from the issues involved in using the technology to organize personal photographs, and that privacy regulations should be designed with respect to these different uses. However, most industry representatives have argued that new legislation or regulation is not necessary to address privacy issues associated with facial recognition technology, contending that self-regulation—such as voluntary codes of conduct and best practices—and privacy by design are effective alternatives. Industry trade organizations have urged caution in expanding privacy law in general, arguing that absent any identifiable harm in the marketplace, privacy issues are best addressed through industry self-regulatory programs and best practices. In a 2012 Senate Committee hearing, a representative of the Digital Advertising Alliance said that industry self-regulation is flexible and could adapt to rapid changes in technology and consumer expectations, whereas legislation and government regulation could be inflexible and quickly become outdated in an era of rapidly evolving technologies. Specifically with regard to facial recognition technology, the industry association TechAmerica has contended that self-regulation based on the Fair Information Practice Principles, coupled with privacy by design, has provided consumers with the necessary privacy protections. The National Retail Federation and others have argued that it may be too early to consider additional regulation of facial recognition technology because the technology is still in the early stages of development and thus the privacy issues raised are mostly speculative. The group argued that such regulation could inhibit innovation and deny businesses and consumers beneficial products, while protecting against harms that might never have occurred. In contrast, some consumer and privacy advocacy organizations have argued that self-regulation is not sufficient to fully address privacy concerns because it may be limited in scope, limited in coverage to those entities that choose to participate, and subject to change. 
For example, a study of industry privacy self-regulatory programs from 1997 through 2007 by the World Privacy Forum argued that these programs often lacked a meaningful ability to enforce their own rules or maintain memberships, and covered only a fraction of an industry or an industry subgroup. The Center for Digital Democracy questioned the effectiveness of the NTIA’s multistakeholder process for developing voluntary industry standards, contending that a previous process covering mobile applications did not lead to any significant changes in that industry’s privacy practices. One former industry representative told us that firms using facial recognition technology would not abide by voluntary codes of conduct if those codes damaged their commercial interests. In contrast, he believes that privacy legislation can provide firms with a financial incentive to abide by privacy principles because of the reputation risk caused by a public violation of privacy law. The White House framework for consumer data privacy, while supporting industry-wide codes of conduct, also suggested that Congress enact legislation to provide FTC with the ability to enforce the Consumer Privacy Bill of Rights independently. Similarly, in its 2012 report on protecting consumer privacy, FTC noted that while it has supported self-regulatory efforts, privacy self-regulation had not gone far enough and that Congress should consider enacting baseline privacy legislation. In our September 2013 report on personal information collected for marketing purposes, we suggested that Congress consider strengthening the consumer privacy framework to reflect the effects of changes in technology and the marketplace, a suggestion that is underscored by the privacy issues associated with facial recognition technology. We identified how advances in technology and marketplace practices had resulted in vast changes in the amount and type of personal information collected. We also found that gaps existed in federal privacy law because it had not adapted to new technologies. As of July 2015, Congress has not passed legislation addressing our 2013 suggestion. Facial recognition technology may be employed in a wide range of useful commercial applications, but the future trajectory of the technology raises questions about consumer privacy. Federal law does not expressly address the circumstances under which commercial entities can use facial recognition technology to identify or track individuals, or when consumer knowledge or consent should be required for the technology’s use. Further, in most contexts federal law does not address how personal data derived from the technology may be used or shared. NTIA’s multistakeholder process to develop a voluntary code of conduct is a positive step toward incorporating privacy considerations into the development and use of facial recognition technology. However, views vary on the efficacy of voluntary and self-regulatory approaches versus legislation and regulation to protect privacy. The privacy issues stakeholders have raised about facial recognition technology and other biometric technologies serve as yet another example of the need to adapt federal privacy law to reflect new technologies. As such, we reiterate our 2013 suggestion that Congress strengthen the current consumer privacy framework to reflect the effects of changes in technology and the marketplace. We provided a draft of this report for review and comment to the Department of Commerce and the Federal Trade Commission. 
We received technical comments from them, which we incorporated as appropriate. We also provided relevant excerpts of the draft for technical review to selected private parties cited in our report, and included their technical comments as appropriate. We are sending copies of this report to Commerce, FTC, appropriate congressional committees and members, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This report examines (1) the uses of facial recognition technology for consumers and businesses, (2) privacy issues that have been raised in connection with commercial uses of facial recognition technology, (3) proposed best privacy practices and industry privacy policies related to facial recognition technology, and (4) privacy protections under federal law that may potentially apply to facial recognition technology. The scope of this report includes use of the technology by companies and other private entities and does not include use by federal, state, or local government agencies. This report covers use of the technology in the United States and not in other countries. It focuses primarily on use of the technology to identify or verify an individual and not on facial detection (which simply detects when a face is present). To address the first two objectives, we identified and reviewed relevant studies and reports, congressional testimony, position papers, and other documents from industry stakeholders, privacy and consumer advocacy organizations, federal agencies, and academic and industry experts. These included, among others, the Federal Trade Commission’s (FTC) 2012 staff report on facial recognition technology; transcripts and written comments stemming from FTC’s 2011 facial recognition technology forum; agendas, meeting summaries, and documents submitted by participants from the Department of Commerce’s (Commerce) National Telecommunications and Information Administration’s (NTIA) multistakeholder process on facial recognition technology in 2014 and 2015; and testimony and written statements from a 2012 congressional hearing on facial recognition technology. In addition, we reviewed product information and marketing material from selected companies that develop and sell products using facial recognition technology. We conducted a literature search to identify academic and trade articles on commercial applications of facial recognition technology, as well as privacy issues raised by the technology. We used these articles to corroborate information we obtained from industry stakeholders, privacy and consumer advocacy groups, and federal agencies. In conducting this search, we generally obtained information from various online research sources such as Proquest, Nexis, and Dialogue. We also used Internet search techniques and key word search terms to identify additional sources and types of available information about how facial recognition technology works, how it is used in commercial applications, and what privacy issues are raised by its use. 
To address the third objective, we reviewed the Fair Information Practice Principles and White House Consumer Privacy Bill of Rights, as well as privacy guidelines and best practices with specific application to facial recognition and biometric technologies issued by the American Civil Liberties Union, Digital Signage Federation, FTC staff, and the International Biometrics & Information Association. We also reviewed the privacy policies of selected companies in the social networking, retail, and casino industries, which we obtained from company websites. We chose those industries because they were widely cited among government and industry stakeholders as among the most significant users, or potential users, of facial recognition technology. Specifically, we reviewed the privacy policies of (1) two U.S. companies that currently use facial recognition technology and are ranked number one and number five, respectively, of the most popular social networking websites, based on estimated unique monthly visitors as of December 2014; (2) the five largest retail companies in the United States based on 2013 retail sales; and (3) the four largest U.S. casino companies, based on worldwide revenue in 2013. We reviewed each privacy policy and relevant supporting documents for key words such as “facial recognition technology,” “photos,” “video,” and “surveillance cameras” to determine whether and how they addressed the use of facial recognition technology. While we identified whether and how the privacy policies addressed facial recognition technology specifically, we did not conduct a broad evaluation or assessment of these policies more generally because that was outside the scope of this engagement. We also met with representatives of two companies that provide social networking applications to discuss, among other things, how facial recognition technology was addressed in their privacy policies. We also inquired with trade organization representatives about meeting with major retailers and casinos to discuss facial recognition technology, but the trade organizations told us that these companies declined to speak to us. To examine privacy protections under federal law that may potentially apply to commercial uses of facial recognition technology, we reviewed and analyzed relevant federal laws and regulations to examine their potential applicability to the commercial use of facial recognition or other biometric technologies. We then reviewed laws, as well as relevant agency regulations, in terms of their general purpose and potential applicability to facial recognition or other biometric technologies. We also reviewed prior work we had conducted on federal privacy law as it relates to commercial entities. We also reviewed state laws in Illinois and Texas that expressly addressed privacy issues related to commercial use of facial recognition technology or other biometric identifiers. We selected these laws for review because Illinois and Texas were the two states we identified as having such laws, based on our interviews and a review of relevant law review articles and information from the National Conference of State Legislatures. This review was intended to provide illustrative examples and was not exhaustive, and thus may not have identified all state laws that may exist addressing privacy and biometrics. 
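To illustrate the keyword review of privacy policies described above, the following short Python sketch scans a policy text for the key words we searched for. The sample policy text is invented, and the sketch is illustrative rather than the actual review protocol we followed.

    # Illustrative sketch of scanning a privacy policy for the key words used in
    # our review; the sample policy text is hypothetical.
    KEYWORDS = ["facial recognition technology", "photos", "video", "surveillance cameras"]

    def find_keywords(policy_text):
        """Return the key words that appear in the policy text (case-insensitive)."""
        lowered = policy_text.lower()
        return [keyword for keyword in KEYWORDS if keyword in lowered]

    sample_policy = ("We may use facial recognition technology to suggest tags "
                     "in photos you upload.")
    print(find_keywords(sample_policy))  # ['facial recognition technology', 'photos']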
In addition, we reviewed the previously mentioned congressional testimony, comments submitted to FTC’s 2011 forum, and documents submitted to NTIA’s multistakeholder process that addressed the applicability of federal law to facial recognition technology. To address all four objectives, we conducted interviews with, and obtained documentation from, representatives of federal agencies, including FTC and Commerce’s NTIA and National Institute of Standards and Technology; companies that use facial recognition technology; companies that develop facial recognition products; trade associations, including the American Bankers Association, American Gaming Association, Interactive Advertising Bureau, National Retail Federation, and Retail Industry Leaders Association; privacy or consumer advocacy organizations, including the American Civil Liberties Union, Center for Democracy & Technology, Electronic Frontier Foundation, Electronic Privacy Information Center, and World Privacy Forum; and two academics who have participated in the FTC Face Facts forum or NTIA multistakeholder process and who have studied these issues. We also interviewed the International Biometrics & Information Association and conducted a group interview with seven of its member companies that it had invited to participate. We conducted this performance audit from July 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix describes the federal laws that specifically address the collection, use, and storage of personal information by private entities, and these laws’ potential applicability to commercial uses of facial recognition technology. Driver’s Privacy Protection Act. Enacted in 1994, the Driver’s Privacy Protection Act generally prohibits the use and disclosure of certain personal information contained in state motor vehicle records for commercial purposes, with some exceptions. The act’s definition of personal information includes the driver’s license photograph, as well as any other information that identifies an individual, including Social Security number, driver identification number, name, address (except 5-digit zip code), telephone number, and medical or disability information. Gramm-Leach-Bliley Act (GLBA). Enacted in 1999, the Gramm-Leach-Bliley Act contains provisions that restrict, with some exceptions, the disclosure of nonpublic information by entities that fall under GLBA’s definition of a “financial institution” or that receive nonpublic personal information from such a financial institution. GLBA generally requires financial institutions to provide notice and an opportunity for consumers to opt out before sharing their nonpublic information with nonaffiliated third parties, other than for certain purposes such as processing a financial service authorized by the consumer. Regulations implementing GLBA do not specifically cite facial images or biometric identifiers—which are used by some financial institutions to verify customers—in the definition of nonpublic personal information. 
However, the definition does include personally identifiable financial information, which is defined to include any information that financial institutions obtain about a consumer in connection with providing a financial product or service to that consumer, such as the fact that an individual is a consumer or customer of a particular financial institution. Health Insurance Portability and Accountability Act. Enacted in 1996, the Health Insurance Portability and Accountability Act establishes a set of national standards for the protection and safeguarding of individually identifiable health information. With some exceptions, rules implementing the act require an individual’s written authorization before a covered entity—a health care provider that transmits health information electronically in connection with covered transactions, health care clearinghouse, or health plan—may use or disclose that individual’s individually identifiable health information, including for commercial purposes. The rules also give individuals the right to have a covered entity amend their protected health information if it is not accurate and complete. Additionally, the rules include full-face images and biometric identifiers among the personal identifiers that must be removed before protected health information is no longer considered individually identifiable health information and therefore generally can be disclosed. Fair Credit Reporting Act. The act, which was enacted in 1970, protects the security and confidentiality of personal information collected or used for eligibility determinations for such products as credit, insurance, or employment. The Fair Credit Reporting Act applies to those meeting the definition of “consumer reporting agency” under the act, which includes the three nationwide consumer reporting agencies (commonly called credit bureaus) and other businesses that collect or disclose information for consumer reports for use by others. The act limits the use and distribution of personal data collected for consumer reports to permissible purposes specified in the act and also gives consumers certain rights to opt out of allowing their personal information to be shared for certain marketing purposes. The act also allows individuals to access and dispute the accuracy of personal data held on them and imposes safeguarding requirements for such data. Children’s Online Privacy Protection Act. Enacted in 1998, the Children’s Online Privacy Protection Act requires covered website and online service operators to obtain verifiable parental consent before collecting personal information from children under 13, with certain exceptions. Regulations implementing the act define “personal information” to include a photograph or video containing a child’s image, as well as other information such as full name and e-mail address. In its 2013 final rule amending its regulations implementing the act, FTC expressly addressed facial recognition technology in the discussion of its decision to incorporate photographs of a child under 13 into the definition of “personal information,” noting the inherently personal nature of photographs and the possibility of them being paired with facial recognition technology. Electronic Communications Privacy Act. Enacted in 1986, this act prohibits the interception and disclosure of electronic communications by third parties unless an exception applies, such as one of the parties to the communication having consented to the interception or disclosure. 
For example, unless an exception applies, such as customers having given their consent, the act would prohibit an Internet service provider from selling the content of its customers’ e-mails and text messages—which could include facial images—to a third party. Family Educational Rights and Privacy Act. Enacted in 1974, the act generally prohibits federal funds from being made available to any school or institution that has a policy of releasing students’ education records or personally identifiable information contained in such records without the prior written consent of the parent or eligible student, with certain exceptions. The Department of Education’s regulations implementing the Family Educational Rights and Privacy Act include biometric records—including facial characteristics—in the definition of personally identifiable information. Computer Fraud and Abuse Act. Enacted in 1986, the Computer Fraud and Abuse Act prohibits obtaining information from a protected computer through the intentional access of a computer without authorization or through exceeding authorized access. Some courts have held that using a website for a purpose in violation of the site’s terms of use or terms of service exceeds authorized access and therefore violates the act. Other courts, however, have read the act more narrowly, holding that it prohibits the unauthorized procurement of information rather than its misuse. To the extent that the collection of images online related to facial recognition technology is found to constitute obtaining information from a protected computer through access without authorization or exceeding authorized access, such collection may be a violation of the Computer Fraud and Abuse Act. Alicia Puente Cackley, 202-512-8678 or cackleya@gao.gov. In addition to the contact named above, Jason Bromberg (Assistant Director), José R. Peña (Analyst-in-Charge), William R. Chatlos, Jeremy Conley, Richard Hung, Patricia Moye, and Jennifer Schwartz made key contributions to this report.

Facial recognition technology—which can verify or identify an individual from a facial image—has rapidly improved in performance and now can surpass human performance in some cases. The Department of Commerce has convened stakeholders to review privacy issues related to commercial use of this technology, which GAO was also asked to examine. This report examines (1) uses of facial recognition technology, (2) privacy issues that have been raised, (3) proposed best practices and industry privacy policies, and (4) potentially applicable privacy protections under federal law. The scope of this report includes use of the technology in commercial settings but not by government agencies. To address these objectives, GAO analyzed laws, regulations, and documents; interviewed federal agencies; and interviewed officials and reviewed privacy policies and proposals of companies, trade groups, and privacy groups. Companies were selected because they were among the largest in industries identified as potential major users of the technology, and privacy groups were selected because they had written on this issue. Facial recognition technology can be used in numerous consumer and business applications, but the extent of its current use in commercial settings is not fully known. The technology is commonly used in software that manages personal photographs and in social networking applications to identify friends. 
In addition, several companies use the technology to provide secure access to computers, phones, and gaming systems in lieu of a password. Facial recognition technology can have applications for customer service and marketing, but at present, use in the United States of the technology for such purposes appears to be largely for detecting characteristics (such as age or gender) to tailor digital advertising, rather than identifying unique individuals. Some security systems serving retailers, banks, and casinos incorporate facial recognition technology, but the extent of such use at present is not fully known. Privacy advocacy organizations, government agencies, and others have cited several privacy concerns related to the commercial use of facial recognition technology. They say that if its use became widespread, it could give businesses or individuals the ability to identify almost anyone in public without their knowledge or consent and to track people's locations, movements, and companions. They have also raised concerns that information collected or associated with facial recognition technology could be used, shared, or sold in ways that consumers do not understand, anticipate, or consent to. Some stakeholders disagree that the technology presents new or unusual privacy risks, noting, among other things, that individuals should not expect complete anonymity in public and that some loss of privacy is offset by the benefits the technology offers consumers and businesses. Several government, industry, and privacy organizations have proposed or are developing voluntary privacy guidelines for commercial use of facial recognition technology. Suggested best practices vary, but most call for disclosing the technology's use and obtaining consent before using it to identify someone from anonymous images. The privacy policies of companies GAO reviewed varied in whether and how they addressed facial recognition technology. No federal privacy law expressly regulates commercial uses of facial recognition technology, and laws do not fully address key privacy issues stakeholders have raised, such as the circumstances under which the technology may be used to identify individuals or track their whereabouts and companions. Laws governing the collection, use, and storage of personal information may potentially apply to the commercial use of facial recognition in specific contexts, such as information collected by health care entities and financial institutions. In addition, the Federal Trade Commission Act has been interpreted to require companies to abide by their stated privacy policies. Stakeholder views vary on the efficacy of voluntary and self-regulatory approaches versus legislation and regulation to protect privacy. GAO has previously concluded that gaps exist in the consumer privacy framework, and the privacy issues that have been raised by facial recognition technology serve as yet another example of the need to adapt federal privacy law to reflect new technologies. GAO makes no recommendations in this report. However, GAO suggested in GAO-13-663 that Congress consider strengthening the consumer privacy framework to reflect changes in technology and the marketplace, and facial recognition technology is such a change. GAO maintains that the current privacy framework in commercial settings warrants reconsideration.
The interstate commercial motor carrier industry, primarily the trucking industry, is an important part of the nation’s economy. Trucks transport over 11 billion tons of goods annually, or about 60 percent of the total domestic tonnage shipped. Buses also play an important role, transporting an estimated 631 million passengers annually. There are approximately 711,000 commercial motor carriers registered in MCMIS, about 9 million trucks and buses, and more than 10 million drivers. Most motor carriers are small; about 51 percent operate one vehicle, and another 31 percent operate two to four vehicles. Carrier operations vary widely in size, however, and some of the largest motor carriers operate upwards of 50,000 vehicles. Carriers continually enter and exit the industry. Since 1998, the industry has increased in size by an average of about 29,000 interstate carriers per year. In the United States, commercial motor carriers account for less than 5 percent of all highway crashes, but these crashes result in about 13 percent of all highway deaths, or about 5,500 of the approximately 43,000 highway fatalities that occur nationwide annually. In addition, about 106,000 of the approximately 2.7 million highway injuries per year involve motor carriers. The fatality rate for trucks has generally been decreasing over the past 30 years, but this decrease has leveled off, and the rate has been fairly stable since the mid-1990s. The fatality rate for buses has improved slightly from 1975 to 2005 but has more annual variability than the fatality rate for trucks due to a much smaller total vehicle miles traveled. (See fig. 1.) Congress created FMCSA through the Motor Carrier Safety Improvement Act of 1999 to reduce crashes, injuries, and fatalities involving commercial motor vehicles. To accomplish this mission, FMCSA carries out a number of enforcement, education, and outreach activities. FMCSA uses enforcement as its primary approach for reducing the number of crashes, fatalities, and injuries involving trucks and buses. Some of FMCSA’s enforcement programs include compliance reviews, which are on-site reviews of carriers’ records and operations to determine compliance with regulations; safety audits of new interstate carriers; and roadside inspections of drivers and vehicles. FMCSA’s education and outreach programs are intended to promote motor carrier safety and consumer awareness. One of the programs is the New Entrant program, which is designed to inform newly registered motor carriers about motor carrier safety standards and regulations to help them comply with FMCSA’s requirements. Other programs are designed to identify unregistered carriers and get them to register, promote increased safety belt use among commercial drivers, and inform organizations and individuals that hire buses how to make safe choices. FMCSA plans to make major revisions to its compliance and enforcement approach under an initiative called Comprehensive Safety Analysis 2010. Compliance reviews are an important enforcement tool because they allow FMCSA to take an in-depth look at carriers that have been identified as posing high crash risks because of high crash rates or poor safety performance records. Motor carriers may be identified as high risk from SafeStat or through calls to FMCSA’s complaint hotline. Carriers are given a satisfactory, conditional, or unsatisfactory safety rating. 
A conditional rating means the carrier is allowed to continue operating, but FMCSA may schedule a follow-up compliance review to ensure that problems noted in the first compliance review are addressed. An unsatisfactory rating must be addressed or the carrier is placed out of service, meaning it is no longer allowed to do business, and the carrier may face legal enforcement actions undertaken by FMCSA. Compliance reviews can take several days to complete, depending on the size of the carrier, and may result in enforcement actions being taken against a carrier. FMCSA uses both its own inspectors and state inspectors to carry out its enforcement activities. In total, about 750 staff are available to perform compliance reviews, and more than 10,000 staff do vehicle and driver inspections at weigh stations and other points. Together, FMCSA and its state partners perform about 16,000 compliance reviews a year, which cover about 2 percent of the nation’s 711,000 carriers. Because the number of inspectors is small compared with the size of the motor carrier industry, FMCSA prioritizes carriers for compliance reviews. To do so, it uses SafeStat to identify carriers that pose high crash risks. SafeStat is a model that uses information gathered from crashes, roadside inspections, traffic violations, compliance reviews, and enforcement cases to determine a motor carrier’s safety performance relative to that of other motor carriers that have similar exposure in these areas. A carrier’s score is calculated on the basis of its performance in four safety evaluation areas: Accident safety evaluation area: The accident safety evaluation area reflects a carrier’s crash history relative to other motor carriers’ histories. The safety evaluation area is based on state-reported crash data, vehicle data from MCMIS, and data on reportable crashes and annual vehicle miles traveled from the most recent compliance review. A carrier must have two or more reportable crashes within the last 30 months to have the potential to receive a deficient value and thus be made a priority for a compliance review. Driver safety evaluation area: The driver safety evaluation area reflects a carrier’s driver-related safety performance and compliance relative to other motor carriers. The driver safety evaluation area is based on violations cited in roadside inspections that have been performed within the last 30 months and compliance reviews that have occurred within the last 18 months, together with the number of drivers listed in MCMIS. A carrier must have three or more driver inspections, three or more moving violations, or at least one acute or critical violation of driver regulations from a compliance review to have the potential to receive a deficient value and thus be made a priority for a compliance review. Vehicle safety evaluation area: The vehicle safety evaluation area reflects a carrier’s vehicle-related safety performance and compliance relative to other motor carriers. The vehicle safety evaluation area is based on violations identified during vehicle roadside inspections that have occurred within the last 30 months or vehicle-related acute and critical violations of regulations discovered during compliance reviews that have occurred within the last 18 months. A carrier must have either three or more vehicle inspections or at least one acute or critical violation of vehicle regulations from a compliance review to have the potential to receive a deficient value and thus be made a priority for a compliance review. 
Safety management safety evaluation area: The safety management safety evaluation area reflects a carrier’s safety management relative to other motor carriers. It is based on the results of violations cited in closed enforcement cases in the past 6 years or violations of regulations related to hazardous materials and safety management discovered during a compliance review performed within the last 18 months. A carrier must have had at least one enforcement case initiated and closed or at least two enforcement cases closed within the past 6 years, or at least one acute, critical, or severe violation of hazardous material or safety management regulations identified during a compliance review within the last 18 months to have the potential to receive a deficient value and thus be made a priority for a compliance review. A motor carrier’s score is based on its relative ranking, indicated as a value, in each of the four safety evaluation areas. For example, if a carrier receives a value of 75 in the accident safety evaluation area, then 75 percent of all carriers with sufficient data for evaluation performed better in that safety evaluation area, while 25 percent performed worse. The calculation used to determine a motor carrier’s SafeStat score is as follows: SafeStat Score = (2.0x accident value) + (1.5x driver value) + vehicle value + safety management value As shown in the formula, the accident and driver safety evaluation areas have 2.0 and 1.5 times the weight, respectively, of the vehicle and safety management safety evaluation areas. Safety evaluation area values less than 75 are ignored in the formula used to determine the SafeStat score. For example, a carrier with values of 74 for all four safety evaluation areas has a SafeStat score of 0. FMCSA assigned more weight to these safety evaluation areas because, according to FMCSA, crashes and driver violations correlate relatively better with future crash risk. In addition, more weight is assigned to fatal crashes and to crashes that occurred within the last 18 months. In consultation with state transportation officials, insurance industry representatives, safety advocates, and the motor carrier industry, FMCSA used its expert judgment and professional knowledge to assign these weights, rather than determining them through a statistical approach, such as regression modeling. FMCSA assigns carriers categories ranging from A to H according to their performance in each of the safety evaluation areas. A carrier is considered to be deficient in a safety evaluation area if it receives a value of 75 or higher in that particular safety evaluation area. Although a carrier may receive a value in any of the four safety evaluation areas, the carrier receives a SafeStat score only if it is deficient in one or more safety evaluation areas. Carriers that are deficient in two or more safety evaluation areas and have a SafeStat score of 225 or more are considered to pose high crash risks and are placed in category A or B. (See table 1.) Carriers that are deficient in two safety evaluation areas but have a SafeStat score of less than 225 are placed in category C and receive a medium priority for compliance reviews. Carriers that are deficient in only one of the safety evaluation areas are placed in category D, E, F, or G. Carriers that are not deficient in any of the safety evaluation areas do not receive a SafeStat score and are placed in category H. 
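To illustrate the scoring rules described above, the following Python sketch applies the published weights, the 75-point deficiency threshold, and the 225-point cutoff for categories A and B to a hypothetical carrier. It is a simplified illustration, not FMCSA’s implementation, and it omits the data sufficiency requirements and the distinction between categories A and B.

    # Simplified sketch of the SafeStat scoring rules described above. The weights,
    # the 75-point deficiency threshold, and the 225-point high-risk cutoff come
    # from this report; the carrier values are hypothetical.
    DEFICIENCY_THRESHOLD = 75
    HIGH_RISK_CUTOFF = 225
    WEIGHTS = {"accident": 2.0, "driver": 1.5, "vehicle": 1.0, "safety_management": 1.0}

    def safestat_score(values):
        """Sum the weighted safety evaluation area values, ignoring values below 75."""
        return sum(WEIGHTS[area] * value
                   for area, value in values.items()
                   if value >= DEFICIENCY_THRESHOLD)

    def is_high_risk(values):
        """High crash risk (category A or B): deficient in two or more areas and a
        SafeStat score of 225 or more."""
        deficient = sum(1 for value in values.values() if value >= DEFICIENCY_THRESHOLD)
        return deficient >= 2 and safestat_score(values) >= HIGH_RISK_CUTOFF

    # Hypothetical carrier that is deficient in the accident and driver areas.
    carrier = {"accident": 90, "driver": 80, "vehicle": 60, "safety_management": 74}
    print(safestat_score(carrier))  # (2.0 * 90) + (1.5 * 80) = 300.0
    print(is_high_risk(carrier))    # True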
Of the 622,000 motor carriers listed in MCMIS as having one or more vehicles in June 2004, about 140,000, or 23 percent, received a SafeStat category A through H. There are several reasons why a small proportion of carriers receive a score. First, only approximately 305,900, or about 49 percent, of the carriers have crash, vehicle inspection, driver inspection, or enforcement data of any kind. SafeStat relies on these data to calculate a motor carrier’s score, so carriers without such data are not rated by SafeStat. It is likely that some of the carriers listed in MCMIS are no longer in business, but it is also possible that these carriers had no crashes, inspections, or compliance reviews in the 30-month period prior to June 2004. Second, a carrier must meet the minimum requirements to be assigned a value in a given safety evaluation area. If, for example, a carrier had only one reportable crash within the last 30 months, then the carrier would not be assigned an accident safety evaluation area value. Of the 305,900 carriers that have any safety data in SafeStat, 140,000 met the SafeStat minimum requirements in one or more safety evaluation areas. Of these 140,000 carriers, 45,000 were rated in categories A through G. The other carriers were placed in category H because they were not considered deficient, meaning they did not receive a value of 75 or more in any of the safety evaluation areas. The design of SafeStat and its data sufficiency requirements make larger motor carriers more likely than small carriers to be deficient in one of the safety evaluation areas, that is, to be rated in categories A through G. About 51 percent of all carriers listed in MCMIS operate one vehicle, and about 3 percent of them received a SafeStat rating in categories A through G. (See table 2.) In contrast, fewer than 1 percent of the carriers listed in MCMIS have more than 100 vehicles, and nearly 25 percent of them received a SafeStat rating in categories A through G. We found that FMCSA could improve SafeStat’s ability to identify carriers that pose high crash risks if it applied a statistical approach, called a negative binomial regression model, to the four SafeStat safety evaluation areas instead of its current approach. Through this change, FMCSA could more efficiently target compliance reviews to the set of carriers that pose the greatest crash risk. Applying a negative binomial regression model would improve the identification of high-risk carriers over SafeStat’s performance by about 9 percent, compared with the current approach, which incorporates safety data weighted in accordance with the professional judgment and experience of SafeStat’s designers. Moreover, according to our analysis, this 9 percent improvement would enable FMCSA to identify carriers with almost twice as many crashes in the following 18 months as those carriers identified under its current approach. Targeting these high-risk carriers would result in FMCSA giving compliance reviews to carriers that experienced both a higher crash rate and 9,500 more crashes over an 18-month period than those identified by the SafeStat model. Applying a negative binomial regression model approach to the SafeStat safety evaluation areas would be easy to implement and, in our opinion, would be consistent with other FMCSA uses for SafeStat beyond identifying carriers that pose high risks for crashes. 
In addition, adopting a negative binomial regression model approach would be beneficial even if FMCSA makes major revisions to its compliance and enforcement approach in the coming years under its Comprehensive Safety Analysis 2010 initiative. Overall, other changes to the SafeStat model that we explored, such as modifying decision rules used in the construction of the safety evaluation areas, did not improve the model’s overall performance. Although SafeStat is nearly twice as effective as (83 percent better than) random selection in identifying carriers that pose high crash risks and, therefore, has value for improving safety, we found that FMCSA could improve SafeStat’s ability to identify such carriers by about 9 percent if it applied a negative binomial regression model approach to its analysis of motor carrier safety data. The use of a regression model does not entail assigning the letter categories currently assigned by the SafeStat model. Rather, the model predicts carriers’ crash risks, sorts the carriers according to their risk level, and assigns a high priority for a compliance review to the highest risk carriers. The improvement in identification of high-risk carriers, which we observed with the negative binomial regression model, is consistent with results obtained in an earlier analysis of MCMIS data performed by a team of researchers at Oak Ridge National Laboratory. To compare the effectiveness of regression models and SafeStat in identifying carriers that pose high crash risks, we applied several regression models to the four safety evaluation areas (accident, driver, vehicle, and safety management) used by the SafeStat model. We recalculated SafeStat’s June 2004 accident safety evaluation area values because the data FMCSA provided on the number of crashes for each carrier differed in 2006 from the data used in the model in 2004. Using our accident safety evaluation area value and the original driver, vehicle, and safety management safety evaluation area values from June 2004, we selected the 4,989 carriers that our regression models identified as the highest crash risks, calculated the crash rate per 1,000 vehicles for these carriers over the next 18 months, and compared this rate with the crash rate per 1,000 vehicles for the 4,989 carriers identified by the SafeStat model as posing high crash risks (categories A and B). All of the regression models that we estimated were at least as effective as SafeStat in identifying motor carriers that posed high crash risks. (See app. III for these results.) Of these, the negative binomial regression approach gave the best results and proved 9 percent more effective than SafeStat, as measured by future crashes per 1,000 vehicles. The set of carriers in SafeStat categories A and B had a crash rate of 102 per 1,000 vehicles for the 18 months after June 2004 while the set of high-risk carriers identified by the negative binomial regression model had 111 crashes per 1,000 vehicles. Even though this 9 percent improvement rate seems modest, it translates into nearly twice as many “future crashes” identified. Specifically, the negative binomial regression model identified carriers that had nearly twice as many crashes (from July 2004 to December 2005) as the carriers identified by SafeStat—19,580 crashes compared with 10,076. SafeStat (categories A and B) and our negative binomial regression model identified many of the same carriers—1,924 of the 4,989 (39 percent)—as posing high crash risks. 
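The regression approach described above can be sketched with a standard statistical package. The following Python example is illustrative only and is not the model we estimated: it generates synthetic carrier data (the simulation coefficients are arbitrary), fits a negative binomial regression of subsequent crash counts on the four safety evaluation area values with fleet size as the exposure term, and ranks carriers by their predicted number of crashes.

    # Illustrative sketch of fitting a negative binomial regression of future crash
    # counts on the four safety evaluation area values and ranking carriers by
    # predicted crashes. The data are synthetic and the simulation coefficients are
    # arbitrary; this is not the model estimated for this report.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    carriers = pd.DataFrame({
        "accident_sea": rng.uniform(0, 100, n),
        "driver_sea": rng.uniform(0, 100, n),
        "vehicle_sea": rng.uniform(0, 100, n),
        "safety_mgmt_sea": rng.uniform(0, 100, n),
        "vehicles": rng.integers(1, 200, n),  # fleet size used as the exposure term
    })

    # Simulate crashes in the following 18 months so the example is self-contained.
    crash_rate = np.exp(-6.0 + 0.02 * carriers["accident_sea"] + 0.01 * carriers["driver_sea"])
    carriers["future_crashes"] = rng.poisson(crash_rate * carriers["vehicles"])

    predictors = sm.add_constant(
        carriers[["accident_sea", "driver_sea", "vehicle_sea", "safety_mgmt_sea"]])

    model = sm.GLM(carriers["future_crashes"], predictors,
                   family=sm.families.NegativeBinomial(),
                   exposure=carriers["vehicles"]).fit()

    # Rank carriers by predicted crashes; the highest-ranked carriers would receive
    # priority for compliance reviews.
    carriers["predicted_crashes"] = model.predict(predictors, exposure=carriers["vehicles"])
    print(carriers.sort_values("predicted_crashes", ascending=False).head(10))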
However, our model also identified a number of high-risk carriers that SafeStat did not identify, and vice versa. For example, our model identified 2,244 carriers as posing high crash risks, while SafeStat placed these carriers in category D (the accident area), assigning them a lower priority for compliance reviews. One reason for this difference is the decision rules that SafeStat employs. Under SafeStat, carriers must perform worse than 75 percent of all carriers to be considered deficient in any safety evaluation area. The regression approach identifies the carriers with the highest crash risks regardless of how they compare with their peers in individual areas. For example, we identified as posing high crash risks 482 carriers that SafeStat did not consider at all for compliance reviews because the carriers had not performed worse than 75 percent of their peers in any of the four safety evaluation areas. In the short term, FMCSA could easily implement a regression model approach for SafeStat. All the information required as input for the negative binomial regression model is already entered into SafeStat. In addition, a standard statistical package can be used to apply the negative binomial approach to the four SafeStat safety evaluation areas. Like SafeStat, the negative binomial regression model would be run every month to produce a list of motor carriers that pose high crash risks, and these carriers would then be assigned priorities for a compliance review. As with SafeStat, the results of the negative binomial model would change slightly each month with the addition of new safety data to MCMIS. When we discussed the concept of adopting a negative binomial regression model approach with FMCSA officials, they were interested in understanding how the model’s results could be used to identify and improve the safety of those carriers that pose the greatest crash risks (much as the SafeStat categories of A and B do now) and how the agency could employ the proposed approach for current uses beyond identifying carriers that pose high crash risks. These uses include providing an understandable public display to shippers, insurers, and others who are interested in the safety of carriers; selecting carriers for roadside inspections; and furthering agency efforts to gain carriers’ compliance with driver and vehicle safety rules when these carriers may not have had crashes. Identifying and improving the safety of carriers that pose high crash risks. The negative binomial regression model approach would produce a rank order listing of carriers by crash risk and by the predicted number of crashes. For compliance reviews, FMCSA could choose those carriers with the greatest number of predicted crashes. FMCSA would choose the number of carriers to review based on the resources available to it, much as it currently does. Regarding improving the safety of carriers that pose high crash risks, FMCSA currently enrolls carriers that receive a SafeStat category of A, B, or C in the Motor Carrier Safety Improvement Program. This program aims to improve the safety of high-risk carriers through (1) a repetitive cycle of identification, data gathering, and assessment and (2) progressively harsher treatments applied to carriers that do not improve their safety. The use of a negative binomial regression model would not affect the structure or workings of this program, other than to better identify carriers that pose high crash risks. 
As discussed above, FMCSA would use the regression model’s results to identify the highest risk carriers and then intervene using its existing approaches (such as issuing warning letters, conducting follow-up compliance reviews, or levying civil penalties) as treatment. Providing an understandable display to the public. FMCSA could choose to provide a rank order listing of carriers together with the associated number of predicted crashes, or it could look for natural breaks in the predicted number of crashes and associate a category, such as “category A,” with these carriers. Selecting carriers for roadside inspections. Safety rankings from the SafeStat model are also used in FMCSA’s Inspection Selection System to prioritize carriers for roadside driver and vehicle inspections. The negative binomial regression model optimizes the identification of carriers by crash risk using safety evaluation area information. The negative binomial regression model approach that we describe in this report retains SafeStat’s basic design with four safety evaluation areas (driver, vehicle, accident, and safety management). Therefore, FMCSA could use the negative binomial regression model results to identify carriers that pose a high crash risk, the results from the driver and vehicle safety evaluation areas, or both, to target carriers or vehicles for roadside driver and vehicle inspections. Furthering agency efforts to gain compliance with driver and vehicle safety rules for carriers that do not experience crashes (or a sufficient number of crashes to pose a high risk for crashes). FMCSA was interested in understanding how, if at all, the negative binomial regression model approach would affect efforts to gain compliance from carriers that may routinely violate safety rules (such as drivers’ hours of service requirements) but whose violations do not lead to crashes. As discussed above, the negative binomial regression model approach retains SafeStat’s four safety evaluation areas. Where it differs is that it assigns different weights to those areas based on a statistical procedure, rather than having the weights assigned by expert judgment. As a result, FMCSA would still be able to identify carriers with many driver, vehicle, and safety management violations. Other opportunities also exist for FMCSA to improve the ability of regression models to identify carriers that pose high crash risks. In 2005, an FMCSA compliance review work group reported a positive correlation between driver hours of service violations and crash rates. Because FMCSA can link violations of specific regulatory provisions, including those limiting driver hours of service, to the crash experience of the carriers involved, it has the opportunity to improve the violation severity weighting used in constructing the driver and vehicle safety evaluation areas. FMCSA has detailed violation data from roadside inspections and can statistically analyze these data to find other strong relationships with carriers’ crash risks. Changes made to the safety evaluation area methodology to strengthen the association with crash risk will improve the ability of the negative binomial regression model to identify carriers that pose high crash risks. FMCSA has expressed doubts in the past when analysts have proposed switching to a regression model approach. 
For example, Oak Ridge National Laboratory advocated using a regression model approach in place of SafeStat in 2004, but FMCSA was reluctant to move away from its expert judgment model because it believed that the regression model approach would place undue weight on the accident safety evaluation area in determining priorities for compliance reviews, thereby diminishing the incentive for motor carriers to comply with the many safety regulations that feed into the driver, vehicle, and safety management safety evaluation areas. In FMCSA’s view, carriers would be less likely to comply with these regulations because violations in the driver, vehicle, and safety management areas would be less likely to lead to compliance reviews under a regression model approach that placed a heavy emphasis on crashes. Our view is that adopting a negative binomial regression model approach would better identify carriers that pose high crash risks and would thus further FMCSA’s primary mission of ensuring safe operating practices among commercial interstate motor carriers. Over the longer term, FMCSA is considering a complete overhaul of its safety fitness determinations with its Comprehensive Safety Analysis 2010 initiative. This planned comprehensive review and analysis of the agency’s compliance and enforcement programs may result in a new operational model for identifying drivers and carriers that pose safety problems and for intervening to address those problems. FMCSA expects to deploy the results of this initiative in 2010. In our opinion, given the relative ease of adopting the regression modeling approach discussed in this report, and the immediate benefits that can be achieved, there is no reason to wait for FMCSA to complete its initiative, even if the initiative results in major revisions to the SafeStat model. Besides investigating whether the use of regression models could improve SafeStat’s ability to identify carriers that pose high crash risks, we explored whether the existing model could be improved by changing several of its decision rules. Overall, these changes did not enhance the model’s ability to identify carriers that pose high crash risks. As long as FMCSA continues to estimate the safety evaluation area values with its present methodology, the rules we investigated help make the identification of high-risk motor carriers more efficient for both SafeStat and the negative binomial regression model. Because the SafeStat model is composed of many components, we selected three decision rules for analysis. We chose these three rules because they are important pillars of the SafeStat model’s methodology for constructing the safety evaluation areas and because we could complete our analysis of them during the time we had to perform our work. A fuller exploration of areas with high potential to improve the identification of carriers that pose high crash risks would be a long-term effort, and FMCSA plans to address this work as part of the Comprehensive Safety Analysis 2010 initiative. Removing comparison groups. As part of its methodology for calculating the accident, driver, and vehicle safety evaluation area values, SafeStat divides carriers into comparison groups. For example, in the driver safety evaluation area, SafeStat groups carriers by the number of moving violations they have, placing them in one of four groups (3 to 9, 10 to 28, 29 to 94, and 95 or more). SafeStat uses the comparison groups to control for the size of the carrier. 
We removed all the comparison groups in each of the three safety evaluation areas, recalculated their values, and compared the number of crashes in which the carriers were involved and their crash rates, for each of the SafeStat categories A through H, with the SafeStat results in which comparison groups were retained. Removing minimum event requirements. SafeStat imposes minimum event requirements. For example, as noted, SafeStat does not consider a carrier’s moving violations if, in the aggregate, its drivers had fewer than three moving violations over a 30-month period. FMCSA does not calculate a safety evaluation area value for carriers with fewer than three events in an attempt to control for carriers that have infrequent, rather than possibly systemic, safety problems. We removed the requirement to have a minimum number of events (such as moving violations, crashes, and inspections), recalculated the three safety evaluation values, and compared the number of crashes in which the carriers were involved and their crash rates, for each of the SafeStat categories A through H, with the SafeStat results in which minimum event requirements were retained. Removing time and severity weights. The SafeStat formula weights more recent events and more severe events more heavily than less recent or less severe events in the accident, driver, and vehicle safety evaluation areas. For example, the results of vehicle roadside inspections performed within the latest 6 months receive three times the weight of inspections performed 2 years ago. Similarly, crashes involving deaths or injuries receive twice as much weight as those that resulted in property damage only. We removed the time and severity weights for the three safety evaluation areas, recalculated these values, and compared the number of crashes in which the carriers were involved and their crash rates, for each of the SafeStat categories A through H, with the SafeStat results in which time and severity weights were retained. Simultaneous changes to comparison group, event, and time severity requirements. Finally, we simultaneously removed comparison groups, minimum event requirements, and time and severity weights and compared the number of crashes in which the carriers were involved and their crash rates, for each of the SafeStat categories A through H, with the SafeStat results in which comparison groups, minimum event requirements, and time and severity weights were retained. The results of each of our individual analyses and of making all changes simultaneously produced one of two outcomes, neither of which was considered more desirable. Relaxing the minimum data requirements greatly increased the number of carriers identified as high risk without increasing the overall number of predicted crashes over the subsequent 18 months, thus reducing the effectiveness of the SafeStat model. Removing comparison groups and removing time and severity weights had the effect of reducing the future crashes per 1,000 vehicles among those carriers identified as high risk, also reducing the effectiveness of the SafeStat model. As a result, we are not reporting on these results in detail. Trying to modify the decision rules used in SafeStat did highlight the balance that FMCSA has to strike between maximizing the identification of companies with the largest number of crashes (usually larger carriers) and those carriers with the greatest safety risk (which can be of any size). 
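The time and severity weighting described above can be illustrated with a short sketch. The weights below are taken from the two examples given in this report (events in the latest 6 months weighted three times as heavily as 2-year-old events, and injury or fatal crashes weighted twice as heavily as property-damage-only crashes) and are applied to hypothetical crash records purely for illustration; SafeStat’s actual weighting scheme has more gradations.

    # Hypothetical illustration of time and severity weighting of crash events,
    # using the two example weights cited in this report; SafeStat's actual scheme
    # has additional gradations.
    from dataclasses import dataclass

    @dataclass
    class Crash:
        months_ago: int        # months before the evaluation date
        injury_or_fatal: bool  # True if the crash involved a death or injury

    def time_weight(months_ago):
        """More recent events count more heavily (illustrative two-step scale)."""
        return 3.0 if months_ago <= 6 else 1.0

    def severity_weight(injury_or_fatal):
        """Injury or fatal crashes count twice as much as property-damage-only crashes."""
        return 2.0 if injury_or_fatal else 1.0

    def weighted_crash_total(crashes):
        return sum(time_weight(c.months_ago) * severity_weight(c.injury_or_fatal)
                   for c in crashes)

    # Hypothetical carrier with three crashes in the 30-month evaluation window.
    crashes = [Crash(months_ago=3, injury_or_fatal=True),    # 3.0 * 2.0 = 6.0
               Crash(months_ago=14, injury_or_fatal=False),  # 1.0 * 1.0 = 1.0
               Crash(months_ago=24, injury_or_fatal=True)]   # 1.0 * 2.0 = 2.0
    print(weighted_crash_total(crashes))  # 9.0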
The quality of crash data is a long-standing problem that potentially hindered FMCSA’s ability to accurately identify carriers that pose high crash risks. Despite the problems of late-reported crashes and incomplete and inaccurate data on crashes during the period we studied, we determined that the data were of sufficient quality for our use, which was to assess how the application of regression models might improve the ability to identify high-risk carriers over the current approach—not to determine absolute measures of crash risk. Our reasoning is based on the fact that we used the same data set to compare the results of the SafeStat model and the regression models. Limitations in the data would apply equally to both results. FMCSA has recently undertaken a number of efforts to improve crash data quality. FMCSA’s guidance provides that states report all crashes to MCMIS within 90 days of their occurrence. Late reporting can cause SafeStat to miss some of the carriers that should have received a SafeStat score. Alternatively, since SafeStat’s scoring involves a relative ranking of carriers, a carrier may receive a SafeStat score and have to undergo a compliance review because crash data for a higher risk carrier were reported late and not included in the calculation. Late reporting affected SafeStat’s ability to identify all high-risk carriers to a small degree—about 6 percent—for the period that we studied. Late reporting of crashes by states affected the safety rankings of more than 600 carriers, both positively and negatively. When SafeStat analyzed the 2004 data, which did not include the late-reported crashes, it identified 4,989 motor carriers as highest risk, meaning they received a category A or B ranking. With the addition of late-reported crashes, 481 carriers moved into the highest risk category, and 182 carriers dropped out of the highest risk category, resulting in a net increase of 299 carriers (6 percent) in the highest risk category. After the late-reported crashes were added, 481 carriers that originally received a category C, D, E, F, or G SafeStat rating received an A or B rating. These carriers would not originally have been given a high priority for a compliance review because the SafeStat calculation did not take into account all of their crashes. On the other hand, a small number of carriers would have received a lower priority if the late-reported crashes had been included in their score. Specifically, 182 carriers, or fewer than 4 percent of those ranked, fell from the A or B category into the C, D, E, F, or G category once the late-reported crashes were included. These carriers would not have been considered high priority for a compliance review if all crashes had been reported on time. This does not have a big effect on the overall motor carrier population, however, as only 4 percent of carriers originally identified as highest risk were negatively affected by late reporting. The timeliness of crash reporting has shown steady and marked improvement. The median number of days it took states to report crashes to MCMIS dropped from 225 days in calendar year 2001 to 57 days in 2005 (the latest data available at the time of our analysis). In addition, the percentage of crashes reported by states within 90 days of occurrence has jumped from 32 percent in fiscal year 2000 to 89 percent in fiscal year 2006. (See fig. 2.) 
FMCSA uses a motor carrier identification number, which is unique to each carrier, as the primary means of linking inspections, crashes, and compliance reviews to motor carriers. Approximately 184,000 (76 percent) of the 244,000 crashes reported to MCMIS between December 2001 and June 2004 involved interstate carriers. Of these 184,000 crashes, nearly 24,000 (13 percent) were missing this identification number. As a result, FMCSA could not match these crashes to motor carriers or use them in SafeStat. In addition, the carrier identification number could not be matched to a number listed in MCMIS for 15,000 (8 percent) other crashes that involved interstate carriers. Missing data or being unable to match data for nearly one quarter of the crashes during the period of our review potentially has a large impact on a motor carrier’s SafeStat score because SafeStat treats crashes as the most important source of information for assessing motor carrier crash risk. Theoretically, information exists to match crash records to motor carriers by other means, but such matching would require too much manual work to be practicable. We were not able to quantify the actual effect of either the missing data or the data that could not be matched for MCMIS overall. To do so would have required us to gather crash records at the state level—an effort that was impractical. For the same reason, we cannot quantify the effects of FMCSA’s efforts to improve the completeness of the data (discussed later). However, a series of reports by the University of Michigan Transportation Research Institute sheds some light on the completeness of the data submitted to MCMIS by the states. One of the goals of the research was to determine the states’ crash reporting rates. Reporting rates varied greatly among the 14 states studied, ranging from 9 percent in New Mexico in 2003 to 87 percent in Nebraska in 2005. It is not possible to draw wide-scale conclusions about whether state reporting rates are improving over time because only two of the states—Missouri and Ohio—were studied in multiple years. However, in these two states, the reporting rate did improve. Missouri experienced a large improvement in its reporting rate, with 61 percent of eligible crashes reported in 2001, and 83 percent reported in 2005. Ohio’s improvement was more modest, increasing from 39 percent in 2000 to 43 percent in 2005. The University of Michigan Transportation Research Institute’s reports also identified a number of factors that may affect states’ reporting rates. One of the main factors affecting reporting rates is the reporting officer’s understanding of crash reporting requirements. The studies note that reporting rates are generally lower for less serious crashes and for crashes involving smaller vehicles, which may indicate that there is some confusion about which crashes are reportable. Some states, such as Missouri, aid the officer by explicitly listing reporting criteria on the police accident reporting form, while other states, such as Washington, leave it up to the officer to complete certain sections of the form if the crash is reportable, but the form includes no guidance on reportable crashes. Yet other states, such as North Carolina and Illinois, have taken this task out of officers’ hands and include all reporting elements on the police accident reporting form. Reportable crashes are then selected centrally by the state, and the required data are transmitted to MCMIS. 
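The identification-number matching described at the beginning of this discussion can be sketched as a simple lookup. The records and field names below are invented; the sketch only illustrates why crashes with a missing or unmatched carrier identification number cannot be attributed to a carrier or used by SafeStat.

    # Hypothetical sketch of linking crash records to carriers by identification
    # number; records and field names are invented for illustration.
    carriers_on_file = {"1234567": "Carrier A", "7654321": "Carrier B"}

    crash_records = [
        {"crash_id": 1, "carrier_id": "1234567"},  # matches a carrier on file
        {"crash_id": 2, "carrier_id": None},       # identification number missing
        {"crash_id": 3, "carrier_id": "9999999"},  # number not listed in the carrier file
    ]

    matched, missing_id, unmatched_id = [], [], []
    for crash in crash_records:
        if crash["carrier_id"] is None:
            missing_id.append(crash)
        elif crash["carrier_id"] in carriers_on_file:
            matched.append(crash)
        else:
            unmatched_id.append(crash)

    # Only the matched crashes could feed a carrier's accident safety evaluation area.
    print(len(matched), len(missing_id), len(unmatched_id))  # 1 1 1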
Inaccurate data, such as reporting a nonqualifying crash to FMCSA, potentially has a large impact on a motor carrier’s SafeStat score because SafeStat treats crashes as the most important source of information for assessing motor carrier crash risk. For the same reasons as discussed in the preceding section, we were neither able to quantify these effects nor determine how data accuracy has improved for MCMIS overall. The University of Michigan Transportation Research Institute’s reports on crash reporting show that, among the 14 states studied, incorrect reporting of crash data is widespread. In recent reports, the researchers found that, in 2005, Ohio incorrectly reported 1,094 (22 percent) of the 5,037 cases, and Louisiana incorrectly reported 137 (5 percent) of the 2,699 cases. In Ohio, most of the incorrectly reported crashes did not qualify because they did not meet the crash severity threshold. In contrast, most of the incorrectly reported crashes in Louisiana did not qualify because they did not involve vehicles eligible for reporting. Other states studied by the institute had similar problems with reporting crashes that did not meet the criteria for reporting to MCMIS. These additional crashes could cause some carriers to exceed the minimum number of crashes required to receive a SafeStat rating and result in SafeStat’s mistakenly identifying carriers as posing high crash risks. Because each report focuses on reporting in one state in a particular year, it is not possible to identify the number of cases that have been incorrectly reported nationwide and, therefore, it is not possible to determine the impact of inaccurate reporting on SafeStat’s calculations. As noted in the University of Michigan Transportation Research Institute’s reports, states may be unintentionally submitting incorrect data to MCMIS because of difficulties in determining whether a crash meets the reporting criteria. For example, in Missouri, pickups are systematically excluded from MCMIS crash reporting, which may cause the state to miss reportable crashes. However, some pickups may have vehicle weights above the reporting threshold, making crashes involving them eligible for reporting. There is no way for the state to determine which crashes involving pickups qualify for reporting without examining the characteristics of each vehicle. In this case, the number of omissions is likely to be relatively small, but this example demonstrates the difficulty states may face when identifying reportable crashes. In addition, in some states, the information contained in the police accident report may not be sufficient for the state to determine if a crash meets the accident severity threshold. It is generally straightforward to determine whether a fatality occurred as a result of a crash, but it may be difficult to determine whether an injured person was transported for medical attention or a vehicle was towed because of disabling damage. In some states, such as Illinois and New Jersey, an officer can indicate on the form if a vehicle was towed by checking a box, but there is no way to identify whether the reason for towing was disabling damage. It is likely that such uncertainty results in overreporting because some vehicles may be towed for other reasons. FMCSA has taken steps to try and improve the quality of crash data reporting. As we noted in November 2005, FMCSA has undertaken two major efforts to help states improve the quality of crash data. 
One program, the Safety Data Improvement Program, has provided funding to states to implement or expand activities designed to improve the completeness, timeliness, accuracy, and consistency of their data. FMCSA has also used a data quality rating system to rate states’ crash and inspection data quality and to display the ratings on a publicly available state data quality map. Because the map is public, it serves as an incentive for states to make improvements in their data quality. To further improve these programs, FMCSA has made additional grants available to states and implemented our recommendations to (1) establish specific guidelines for assessing states’ requests for funding to support data improvement in order to better assess and prioritize the requests and (2) increase the usefulness of its state data quality map as a tool for monitoring and measuring commercial motor vehicle crash data by ensuring that the map adequately reflects the condition of the states’ commercial motor vehicle crash data. In February 2004, FMCSA implemented DataQs, an online system that allows for challenging and correcting erroneous crash or inspection data. Users of this system include motor carriers, the general public, state officials, and FMCSA. In addition, in response to a recent recommendation by the Department of Transportation Inspector General, FMCSA is planning to conduct a number of evaluations of the effectiveness of a training course on crash data collection that it will be providing to states by September 2008. While the quality of crash reporting is sufficient for use in identifying motor carriers that pose high crash risks and has started to improve, commercial motor vehicle crash data continue to have some problems with timeliness, completeness, and accuracy. These problems have been well documented in several studies, and FMCSA is taking steps to address the problems through studies of each state’s crash reporting system and grants to states to fund improvements. As a result, we are not making any recommendations in this area. Interstate commerce involving large trucks and buses has been growing substantially, and this growth is expected to continue. While the number of fatalities per million vehicle miles traveled has generally decreased over the last 30 years, the fatality rate has leveled off and remained fairly steady since the mid-1990s. FMCSA could more effectively address fatalities due to crashes involving a commercial motor vehicle if it better targeted compliance reviews to those carriers that pose the greatest crash risks. Using a negative binomial regression model would further FMCSA’s mission of reducing crashes through the more effective targeting of compliance reviews to the set of carriers that pose the greatest crash risks. In light of possible changes to FMCSA’s safety fitness determinations resulting from its Comprehensive Safety Analysis 2010 initiative, we are not suggesting that FMCSA undertake a complete and thorough investigation of SafeStat. Rather, we are advocating that FMCSA apply a statistical approach that employs the negative binomial regression model rather than relying on the current SafeStat formula that was determined through expert judgment. In our view, the substitution of a statistically based approach would likely yield a markedly better ability to identify carriers that pose high crash risks with relatively little time or effort on FMCSA’s part.
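To make the statistical alternative concrete, the sketch below shows one way a negative binomial regression of prior crash counts on the four safety evaluation area values could be fit and used to rank carriers. It is a simplified illustration, not FMCSA's or GAO's actual model: the column names (crashes_prior_30_months, acc_sea, drv_sea, veh_sea, sm_sea) are hypothetical, missing safety evaluation area values are handled more simply than in the analysis described in appendix III, the dispersion parameter is fixed rather than estimated from the data, and the statsmodels package is assumed to be available.

```python
# Minimal sketch: fit a negative binomial regression of prior crash counts
# on safety evaluation area (SEA) values and rank carriers by predicted
# crash risk. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

carriers = pd.read_csv("carrier_sea_values.csv")

sea_cols = ["acc_sea", "drv_sea", "veh_sea", "sm_sea"]
# Add indicator columns for whether a carrier has each SEA value, then fill
# the missing values with zero so every carrier can be scored.
for col in sea_cols:
    carriers[f"has_{col}"] = carriers[col].notna().astype(int)
    carriers[col] = carriers[col].fillna(0)

X = sm.add_constant(carriers[sea_cols + [f"has_{c}" for c in sea_cols]])
y = carriers["crashes_prior_30_months"]

# Negative binomial GLM of crash counts on the SEA values; alpha is fixed
# here for simplicity, whereas a full analysis would estimate it.
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

# Rank carriers by predicted mean crashes; the worst-scoring carriers form
# the candidate high-risk group for compliance reviews (4,989 matches the
# count used in this report's comparison with SafeStat).
carriers["predicted_crashes"] = model.predict(X)
high_risk = carriers.sort_values("predicted_crashes", ascending=False).head(4989)
print(model.summary())
```

Ranking carriers by the model's predicted mean plays the role that the weighted SafeStat score plays today; the weights come from the fitted coefficients rather than from expert judgment.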
We recommend that the Secretary of Transportation direct the Administrator of FMCSA to apply a negative binomial regression model, such as the one discussed in this report, to enhance the current SafeStat methodology. We provided a draft of this report to the Department of Transportation for its review and comment. In response, departmental officials, including FMCSA’s Director of the Office of Enforcement and Compliance and Director of the Office of Research and Analysis, noted that our report provided useful insights and offered a potential avenue for further improving the effectiveness of FMCSA’s efforts to reduce crashes involving motor carriers. The agency indicated that it is already working to improve upon SafeStat as part of its Comprehensive Safety Analysis 2010 initiative. FMCSA agreed that it would be useful for it to consider whether there are both short- and longer-term measures that would incorporate the type of analysis identified in our report, as an adjunct to the SafeStat model, in order to better target compliance reviews so as to make the best use of FMCSA’s resources to reduce crashes. The agency expressed some concerns with the negative binomial regression analysis, noting that its intent is to effectively target its compliance activities based on a broader range of factors than is considered in the negative binomial regression analysis approach described in our draft report, which increases reliance on past crashes as a predictor of future crashes while apparently de-emphasizing known driver, vehicle, or safety management compliance issues. FMCSA told us that it incorporates a broad range of information including driver behavior, vehicle condition, and safety management in an attempt to capture and enable the agency to act on accident precursors in order to reduce crashes. FMCSA is correct in concluding that the use of the negative binomial regression approach could tilt enforcement heavily toward carriers that have experienced crashes and away from other aspects of its problem areas, such as violation of vehicle safety standards, that are intended to prevent crashes. That is because the negative binomial regression approach, unlike the present SafeStat model, assigns weights to the accident, driver, vehicle, and safety management areas statistically, and the accident area is the strongest statistical predictor of future crash risk. However, the negative binomial regression approach still fully considers information on the results of driver and vehicle inspections and on safety management data. We used the same data that FMCSA used, with some adjustments as new information became available. While we found that the driver, vehicle, and safety management evaluation area scores are correlated with the future crash risk of a carrier, the accident evaluation area correlates the most with future crash risk. We recognize that FMCSA selects carriers for compliance reviews for multiple reasons, such as to respond to complaints, and we would expect that it would retain this flexibility if it adopted the negative binomial regression approach. FMCSA also indicated that greater reliance on crash data increases emphasis on the least reliable available data set, and one that is out of the organization’s direct control—crash reporting. While our draft report found that crash reporting has improved, and that late reporting has not significantly impaired FMCSA’s use of the SafeStat model, FMCSA noted that the reliance on previous crashes in the negative binomial regression analysis described in our draft report could result in greater sensitivity to the crash data quality issues.
As FMCSA noted in its comments, our results showed that the effect of late-reported data was minimal. Also, as mentioned in our draft report and in this final report, it was not practical to determine the effect, if any, on SafeStat rankings of correcting inaccurate data or adding incomplete data. Since June 2004, FMCSA has devoted considerable efforts to improving the quality of the crash data it receives from the states. States are now tracked quarterly for the completeness, timeliness, and accuracy of their crash reporting. As FMCSA continues its efforts to have states improve these data, any sensitivity of results to crash data quality issues for the negative binomial regression approach should diminish. We are sending copies of this report to congressional committees and subcommittees with responsibility for surface transportation safety issues; the Secretary of Transportation; the Administrator, FMCSA; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please either contact Sidney H. Schwartz at (202) 512-7387 or Susan A. Fleming at (202) 512-2834. Alternatively, they may be reached at schwartzsh@gao.gov or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are Carl Barden, Elizabeth Eisenstadt, Laurie Hamilton, Lisa Mirel, Stephanie Purcell, and James Ratzenberger. Several studies by the Volpe National Transportation Systems Center (Volpe), the Department of Transportation’s Office of Inspector General, the Oak Ridge National Laboratory (Oak Ridge), and others have assessed the predictive capability of the Motor Carrier Safety Status Measurement System (SafeStat) model and the data used by that model. In general, those studies that assessed the predictive power of SafeStat offered suggestions to increase that power, and those studies that assessed data quality found weaknesses in the data that the Federal Motor Carrier Safety Administration (FMCSA) relies upon. The studies we reviewed covered topics such as comparing SafeStat with random selection to determine which does a better job of selecting carriers that pose high crash risks, assessing whether statistical approaches could improve that selection, and analyzing whether carrier financial positions or driver convictions are associated with crash risk. In studies of the SafeStat model published in 2004 and 1998, Volpe analyzed retrospective data to determine how many crashes the carriers in SafeStat categories A and B experienced over the following 18 months. The 2004 study used the carrier rankings generated by the SafeStat model on March 24, 2001. Volpe then compared the SafeStat carrier safety ratings with state-reported data on crashes that occurred between March 25, 2001, and September 24, 2002, to assess the model’s performance. For each carrier, Volpe calculated a total number of crashes, weighted for time and severity, and then estimated a crash rate per 1,000 vehicles for comparing carriers in SafeStat categories A and B with the carriers in other SafeStat categories. The 1998 Volpe study used a similar methodology. Each study used a constrained subset of carriers rather than the full list contained in the Motor Carrier Management Information System (MCMIS). 
Both studies found that the crash rate for the carriers in SafeStat categories A and B was substantially higher than the other carriers during the 18 months after the respective SafeStat run. On the basis of this finding, Volpe concluded that the SafeStat model worked. In response to a recommendation by the Department of Transportation’s Office of Inspector General, FMCSA contracted with Oak Ridge to independently review the SafeStat model. Oak Ridge assessed the SafeStat model’s performance and used the same data set (for March 24, 2001), provided by Volpe, that Volpe had used in its 2004 evaluation. Perhaps not surprisingly, Oak Ridge obtained a similar result for the weighted crash rate of carriers in SafeStat categories A and B over the 18-month follow-up period. As with the Volpe study, the Oak Ridge study was constrained because it was based on a limited data set rather than the entire MCMIS data set. While SafeStat does better than simple random selection in identifying carriers that pose high crash risks, other methods can also be used to achieve this outcome. Oak Ridge extended Volpe’s analysis by applying regression models to identify carriers that pose high crash risks. Specifically, Oak Ridge applied a Poisson regression model and a negative binomial model using the safety evaluation area values as independent variables to a weighted count of crashes that occurred in the 30 months before March 24, 2001. (For more information on statistical analyses, see app. III.) In addition, Oak Ridge applied the empirical Bayes method to the negative binomial regression model and assessed the variability of carrier crash counts by estimating confidence intervals. Oak Ridge found that the negative binomial model worked well in identifying carriers that pose high crash risks. However, the data set Oak Ridge used did not include any carriers with one reported crash in the 30 months before March 24, 2001. Because data included only carriers with zero or two or more reported crashes, the distribution of crashes was truncated. Since the Oak Ridge regression model analysis did not cover carriers with safety evaluation area data and one reported crash, the findings from the study are limited in their generalizability. However, other analyses of crashes at intersections and on road segments have also found that the negative binomial regression model works well. In addition, our analysis using a more recent and more comprehensive data set supports the finding that the negative binomial regression model performs better than the SafeStat model. The studies carried out by other authors advocate the use of the empirical Bayes method in conjunction with a negative binomial regression model to estimate crash risk. Oak Ridge also applied this model to identify motor carriers that pose high crash risks. We applied this method to the 2004 SafeStat data and found that the empirical Bayes method best identified the carriers with the largest number of crashes in the 18 months after June 25, 2004. However, the crash rate per 1,000 vehicles was much lower than that for carriers in SafeStat categories A and B. We analyzed this result further and found that although the empirical Bayes method best identifies future crashes, it is not as effective as the SafeStat model or the negative binomial regression model in identifying carriers with the highest future crash rates. The carriers identified with the empirical Bayes method were invariably the largest carriers. 
This result is not especially useful from a regulatory perspective. Companies operating a large number of vehicles often have more crashes over a period of time than smaller companies. However, this does not mean that the larger company is necessarily violating more safety regulations or is less safe than the smaller company. For this reason, we do not advocate the use of the empirical Bayes method in conjunction with the negative binomial regression model as long as the method used to calculate the safety evaluation area values remains unchanged. If changes are made in how carriers are rated for safety, this method may in the future offer more promise than the negative binomial regression model alone. Conducted on behalf of FMCSA, a study by Corsi, Barnard, and Gibney in 2002 examined how a carrier’s financial performance data correlate with the carrier’s score on a compliance review. The authors selected those motor carriers from MCMIS in December 2000 that had complete data for the accident, driver, vehicle, and safety management safety evaluation areas. Using these data, the authors then matched a total of 700 carriers to company financial statements in the annual report database of the American Trucking Associations. The authors created a binary response variable for whether the carrier received a satisfactory or an unsatisfactory outcome on the compliance review. The authors then assessed how this result correlated with financial measures derived from the company financial statements. In general, the study found that indicators of poor financial condition correlated with an increased safety risk. Two practical considerations limit the applicability of the findings from this study to SafeStat. First, the 700 carriers in the study sample are not necessarily representative of the motor carriers that FMCSA oversees. Only about 2 percent of the carriers evaluated by the SafeStat model in June 2004 had a value for the safety management safety evaluation area. Of these carriers, not all had complete data for the other three safety evaluation areas. Second, FMCSA does not receive annual financial statements from all motor carriers. For these reasons, we did not consider using carrier financial data in our analysis of the SafeStat data. A series of studies by Lantz and others examined the effect of incorporating conviction data from the state-run commercial driver license data system into the calculation of a driver conviction measure. The studies found that the driver conviction measure is weakly correlated with the crash per vehicle rate. However, the studies did not incorporate the proposed driver conviction measure into one of the existing safety evaluation areas and use the updated measure to estimate new SafeStat scores for carriers. While the use of commercial driver license conviction data may have potential for future incorporation into a model for identifying carriers that pose high crash risks, there is no assessment of its impact at this time. The 2004 Office of Inspector General report, the 2004 Oak Ridge study, and reports by the University of Michigan Transportation Research Institute on state crash reporting all examined the impact of data quality on SafeStat’s ability to identify carriers that pose high crash risks. These studies looked at issues such as late reporting and incomplete or inaccurate reporting of crash data and found weaknesses. 
To determine whether states promptly report SafeStat data, the Office of Inspector General conducted a two-stage statistical sample in which it selected 10 states for review and then selected crash and inspection reports from those states for examination. It sampled 392 crash records and 400 inspection records from July through December 2002. In 2 of the 10 states selected, Pennsylvania and New Mexico, no crash records were available for the sample period, so it selected samples from earlier periods. The Office of Inspector General also discussed reporting issues with state and FMCSA officials and obtained crash records from selected motor carriers. In addition, the Office of Inspector General used the coefficient of variation to analyze data consistency and trends in reporting timeliness across geographic regions. Our review of the study indicates that it was based on sound audit methodology. The study found that, as of November 2002, states submitted crash reports in fiscal year 2002 an average of 103 days after the crash occurred and that states varied widely in the timeliness of their crash data reporting. (FMCSA requires that states report crashes no more than 90 days after they occur.) In addition, the study found that 20 percent of the crashes that occurred in fiscal year 2002 were entered into MCMIS 6 months or more after the crash occurred. On the basis of this information, the Office of Inspector General concluded that the calculation of the accident safety evaluation area value was affected by the location of the carrier’s operations but did not estimate the degree of this effect. We also assessed the extent of late reporting. We measured how many days, on average, it took each state to report crashes to MCMIS in each calendar year and found that the amount of time taken to report crashes declined from 2000 to 2005. Our findings were similar in nature to the Office of Inspector General’s findings. However, our results are broader because they are based on all crash data rather than a sample. In addition, since our work is more recent, it reflects more current conditions. We both came to the conclusion, although to varying degrees, that late reporting of crash data by states negatively affects SafeStat’s identification of carriers that pose high crash risks. Oak Ridge also examined the impact of late reporting. Using data provided by Volpe, Oak Ridge looked at the difference between the date a crash occurred and the date it was entered into MCMIS. The researchers found that after 497 days, 90 percent of the reported crashes were entered into MCMIS. The Oak Ridge study also reran the SafeStat model for March 2001 with the addition of crash data from March 2003 to see how more complete data changed SafeStat scores. The study found that the addition of late- reported data increased the number of carriers in the high-risk group by 18 percent. This late reporting affected the rankings of 8 percent of all the carriers ranked by SafeStat in March 2001. Of these affected carriers, 3 percent moved to a lower SafeStat category, and 5 percent moved to a higher category. Including the late-reported crash data available in March 2003 for the period from September 1998 through March 2001 resulted in a 35 percent increase in the available crash data. We performed the same analysis as the Oak Ridge study and obtained similar results. We used SafeStat data from June 2004, which include carrier safety data from December 2001 through June 2004. 
Using FMCSA’s master crash file from June 2006, we found that, with the addition of late-reported crashes, 481 carriers moved into the highest risk category, and 182 carriers dropped out of the highest risk category resulting in a net increase of 299 carriers (6 percent) being added to the highest risk category. The University of Michigan Transportation Research Institute issued a series of reports examining crash reporting rates in 14 states. These reports looked at late reporting as a potential source of low crash reporting rates but did not specifically examine the extent of late reporting or the impact of late reporting on SafeStat scores. The institute looked at reporting rates in each of the states by month to determine if reporting rates were lower in the latter part of the year because of late reporting. It found that reporting rates were lower in the latter part of the year in 6 of the 14 states studied. This issue was not a focus of our efforts, so we did not conduct a similar analysis. The Office of Inspector General’s study found several instances of incomplete or inaccurate data on crashes and carriers. The study reviewed MCMIS reporting for all states and found that 6 of them did not report any crashes to FMCSA in the 6-month period from July through December 2002. In addition, the study found that MCMIS listed about 11 percent of carriers as having no vehicles, and 15 percent as having no drivers. Finally, from a sample of crash records, the study estimated that 13 percent of the crash reports and 7 percent of the inspection reports in MCMIS contained errors that would affect SafeStat results. In particular, the study concluded that the database identified the wrong motor carrier as having been involved in a crash or as having received a violation in 11 percent of the erroneous records. The University of Michigan Transportation Research Institute also examined the accuracy of states’ crash data reporting. To determine if crashes were reported accurately, the institute compared information contained in the individual states’ police accident reporting files with crash data reported to MCMIS. Some states, such as Ohio, had enough information captured in the police accident file to determine if individual crashes were eligible for reporting, and, therefore, the institute was able to use these data in its analyses. In other states, not enough information was available to make a determination, and the institute had to project results on the basis of other states’ experience. The institute also carried out a number of analyses, such as comparing reporting rates for different reporting jurisdictions, in an attempt to identify reporting trends in the individual states. The institute identified several problems with the accuracy of states’ crash reporting. All 14 states that it studied reported ineligible crashes to MCMIS. These crashes were ineligible because they either involved vehicles not eligible for reporting or they did not meet the crash severity threshold. In total, the 14 states reported nearly 5,800 ineligible crashes to MCMIS out of almost 68,000 crashes reported (9 percent). The states also failed to report a number of eligible crashes: the 14 states studied reported from 9 percent to 87 percent of eligible crashes. Our review of the institute’s methodology indicates that its findings are based on sound methodology and that its analyses were very thorough. However, its studies are limited to the 14 states studied and to the particular year studied. 
(Not all studies covered the same year.) These states’ experience may or may not be representative of the experiences of the entire country, and there is no way to determine if the reporting for this year is representative of the state’s reporting activities over a number of years or if the results were unique to that particular year. The exceptions to this are the studies for Missouri, which covered calendar years 2001 and 2005, and Ohio, which covered calendar years 2000 and 2005. We did not attempt to assess the extent of inaccurate reporting in individual states, but we did find examples of inaccurate data reporting. To analyze the completeness of reporting, we attempted to match all crash records in the MCMIS master crash file for crashes occurring between December 26, 2001, and June 25, 2004, to the list of motor carriers in the MCMIS census file. We found that Department of Transportation numbers were missing for 30 percent of the crashes that were reported, and the number did not match a Department of Transportation number listed in MCMIS for 8 percent of reported crashes. We also compared the number of crashes in MCMIS with the number in the General Estimates System produced by the National Highway Traffic Safety Administration and found evidence of underreporting of crashes to MCMIS. To determine whether statistical approaches could be used to improve FMCSA’s ability to identify carriers that pose high crash risks, we tested a variety of regression models and compared their results with results from the existing SafeStat model. The models we tested, using MCMIS data used by SafeStat in June 2004 to identify carriers that pose high crash risks, include the Poisson, negative binomial, zero-inflated negative binomial, zero-inflated Poisson, and empirical Bayes. We chose these regression models because crash totals for a company represent count outcomes, and these statistical models are appropriate for use with count data. In addition, we explored logistic regression to assess the odds of having a crash. Based on the results of the statistical models, we ranked the predicted means (or predicted probabilities in the logistic regression) to see which carriers would be at risk during the 18-month period after June 2004. We selected June 2004 because this date enabled us to examine MCMIS data on actual crashes that occurred in the 18-month period from July 2004 through December 2005. We used these data to determine the degree to which SafeStat identified carriers that proved to pose high crash risks. We then compared the predictive performance of the regression models with the performance of SafeStat to determine which method best identified carriers that pose high crash risks. Using a series of simple random samples, we also calculated the crash rates of all carriers listed in the main SafeStat summary results table in MCMIS for comparison with the crash rates of carriers identified by SafeStat as high risk. We did this analysis to determine whether the SafeStat model did a better job than random selection of identifying motor carriers that pose high crash risks. In addition, we tested changes to selected portions of the SafeStat model to see whether improvements could be made in the identification of high- risk motor carriers. In one analysis, we modified the calculation of the safety evaluation area values and compared the number of high-risk motor carriers identified with the number identified by the unmodified safety evaluation areas. 
For example, we included carriers with only one crash in the calculation of the accident safety evaluation area whereas the unmodified SafeStat model includes only carriers with two or more crashes. We also investigated the effect of removing the time and severity weights from the indexes used to construct the accident, driver, and vehicle safety evaluation areas. We then compared the result of using the modified and unmodified safety evaluation area values to determine if this modification improved the model’s ability to identify future crash risks. To assess the extent to which the timeliness, completeness, and accuracy of MCMIS and state-reported crash data affect SafeStat’s performance, we carried out a series of analyses with the MCMIS crash master file and MCMIS census file, as well as surveying the literature to assess findings on MCMIS data quality from other studies. To assess the effect of timeliness, we first measured how many days on average it was taking each state to report crashes to FMCSA by year for calendar years 2000 through 2005. We also recalculated SafeStat scores from the model’s June 25, 2004, run to include crashes that had occurred more than 90 days before that date but had not been reported to FMCSA by that date. We compared the number and rankings of carriers from the original SafeStat results with those obtained by adding in data for the late-reported crashes. In addition, we reviewed the University of Michigan Transportation Research Institute’s studies of state crash reporting to MCMIS to identify the impact of late reporting in individual states on MCMIS data quality. To assess the effect of completeness, we attempted to match all crash records in the MCMIS crash file for crashes occurring from December 2001 through June 2004 to the list of motor carriers in the MCMIS census file. In addition, we reviewed the University of Michigan Transportation Research Institute’s studies of state crash reporting to MCMIS to identify the impact of incomplete crash reporting in individual states on MCMIS data quality. To assess the effect of accuracy, we reviewed a report by the Office of Inspector General that tested the accuracy of electronic data by comparing records selected in the sample with source paper documents. In addition, we reviewed the University of Michigan Transportation Research Institute’s studies of state crash reporting to MCMIS to identify the impact of incorrectly reported crashes in individual states on MCMIS data quality. While the limitations in the data adversely affect the ability of any method to identify carriers that pose high crash risks, we determined that the data were of sufficient quality for our use, which was to assess how the application of regression models might improve the ability to identify high- risk carriers over the current approach—not to determine absolute measures of crash risk. Our reasoning is based on the fact that we used the same data set to compare the results of the SafeStat model and the regression models. Limitations in the data would apply equally to both results. Methods to identify carriers that pose high crash risk will perform more efficiently once the known problems with the quality of state- reported crash data are addressed. To understand what other researchers have found about how well SafeStat identifies motor carriers that pose high crash risks, we identified studies through a general literature review and by asking stakeholders and study authors to identify high-quality studies. 
Studies included in our review were (1) the 2004 study of SafeStat done by Oak Ridge National Laboratory, (2) the SafeStat effectiveness studies done by the Department of Transportation Office of Inspector General and Volpe Institute, (3) the University of Michigan Transportation Research Institute’s studies of state crash reporting to FMCSA, and (4) the 2006 Department of Transportation Office of Inspector General’s audit of data for new entrant carriers. We assessed the methodology used in each study and identified which findings are supported by rigorous analysis. We accomplished this by relying on information presented in the studies and, where possible, by discussing the studies with the authors. When the studies’ methodologies and analyses appeared reasonable, we used those findings in our analysis of SafeStat. We discussed with FMCSA and industry and safety stakeholders the SafeStat methodology issues and data quality issues raised by these studies. We also discussed the aptness of the respective methodological approaches with FMCSA. Finally, we reviewed FMCSA documentation on how SafeStat is constructed and assessments of SafeStat conducted by FMCSA. This appendix contains technical descriptions and other information related to our statistical analyses. To study how well statistical methods identify carriers that pose high crash risks, we carried out a series of regression analyses. The safety evaluation area values for the accident, driver, vehicle, and safety management areas served as the independent variables to predict crash risks. We used the state-reported crash data in MCMIS for crashes that occurred during the 30 months preceding June 25, 2004, as the dependent variable in each model. We used the results of the SafeStat model run from June 25, 2004, to benchmark the performance of the regression models with the crash records for the identified high-risk carriers over the succeeding 18 months. We matched the state-reported crashes that occurred from December 26, 2001, through June 25, 2004, to the carriers listed in SafeStat. We checked our match of crashes for carriers with those carriers used by FMCSA in June 2004 and found that the reported numbers had changed for about 10,700 carriers in the intervening 2 years. We found this difference even though we used only crashes that occurred from December 26, 2001, through June 25, 2004, and were reported to FMCSA before June 25, 2004. Because of this difference in matched crashes, we recalculated the accident safety evaluation area using our match of the crashes. This is discussed later in more detail. Using our recalculation of the accident safety evaluation area values and the original driver, vehicle, and safety management safety evaluation area values for the carriers, we fit a Poisson regression model and a negative binomial regression model to the crash counts. Both of these models are statistically appropriate for use when modeling counts that are positive and integer valued. The two models differ in their assumptions about the mean and variance. Whereas the Poisson model assumes that the mean and the variance are equal, the negative binomial model assumes the mean is not equal to the variance. The crash data in MCMIS fit the assumptions of the negative binomial distribution better than those of the Poisson. We also tried to estimate zero-inflated Poisson and zero-inflated negative binomial models with the SafeStat data. 
These models are appropriate when the count values include many zeros, as is the case with the values in this data set (because many carriers do not have crash records). However, we could not estimate the parameters for these models with the MCMIS data. We also considered using logistic regression to model the carrier’s odds of experiencing a crash. However, the parameters for the four safety evaluation area values could not be estimated, so we did not use the results of this model. Finally, we used the results from the negative binomial model to assess the expected carrier crash counts using the empirical Bayes estimate. In safety applications, the empirical Bayes method is used to increase the precision of estimates and correct for the regression-to-mean bias. In this application, the empirical Bayes method calculates a weighted average of the rate of crashes for a carrier from the prior 30 months with the predicted mean number of crashes from the negative binomial regression. This method optimizes the identification of carriers with the highest number of future crashes. This optimization of total crashes, however, resulted in the identification of primarily the largest companies. The crash rate (crashes per 1,000 vehicles per 18 months) was not as high for this group as for the carriers placed by the SafeStat model in its A and B categories. This section provides the technical details for the negative binomial regression model fit to the SafeStat data. This section also explains how we handled incomplete safety evaluation area data for carriers in the regression model analyses. The basic model is $\log(\mu_i) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \beta_4 x_{i4}$, or equivalently $\mu_i = \exp(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \beta_4 x_{i4})$, where $\mu_i$ is the expected mean number of crashes for carrier $i$ and $x_{i1}$ through $x_{i4}$ are the carrier’s accident, driver, vehicle, and safety management safety evaluation area values. This equation models the log of the expected mean number of crashes for each motor carrier using the four safety evaluation area values, but most commercial motor companies listed in MCMIS do not have values for all four safety evaluation areas. To account for this, it is necessary to define four indicator variables. Let $I_{ij} = 1$ if carrier $i$ has a value for safety evaluation area $j$, and $I_{ij} = 0$ otherwise. The model we estimated is then $\log(\mu_i) = \beta_0 + \sum_{j=1}^{4} I_{ij}\,(\alpha_j + \beta_j x_{ij})$, where $\beta_j$ is the coefficient for the safety evaluation area value and $\alpha_j$ is the intercept for the indicator term (the coefficient for the indicator function). A carrier has to have two or more reported crashes in the past 30 months to receive an accident safety evaluation area value. A carrier has to have three or more roadside inspections to receive a driver or vehicle safety evaluation area value. A carrier has to have had a compliance review in the past 18 months to receive a safety management safety evaluation area value. (There are other ways a carrier can receive a value for one of these four safety evaluation areas; refer to the description of each one provided in the Background.) We used a similar parameterization to formulate the Poisson regression model. We estimated regression models using the same data FMCSA used in its application of the SafeStat model on June 25, 2004, with one exception for the accident safety evaluation area. For that area, we used our own match of crashes to carriers for December 26, 2001, through June 25, 2004. The MCMIS data we received in June 2006 produced different totals in the match of crashes to carriers for about 10,700 carriers. MCMIS data change over time because crash data are added, deleted, or changed as more information about these crashes is obtained. The discrepancies in matching arose even though we used the identical time interval and counted crashes only when the record indicated they had been reported to FMCSA before June 25, 2004.
Because of these discrepancies, it was necessary to calculate the accident safety evaluation area values using our match of crashes and then recalculate the SafeStat carrier scores for June 25, 2004, using our accident safety evaluation area values and the original driver, vehicle, and safety management safety evaluation area values. We used our accident safety evaluation area values and the original driver, vehicle, and safety management safety evaluation area values in the regression model analysis. Using the revised accident safety evaluation area values and FMCSA’s original driver, vehicle, and safety management safety evaluation area values, the SafeStat model identified 4,989 carriers that pose high crash risks. For each regression model, we input the safety evaluation area data for the carriers in our analysis data set and used the regression model to calculate the predicted mean number of crashes. We then sorted the predicted scores and selected the 4,989 carriers with the worst predicted values as the set of high-risk carriers identified by the regression model. Next, we used MCMIS to determine the crash history of these 4,989 carriers between June 26, 2004, and December 25, 2005, and compared the aggregate crash history with the aggregate crash history of the carriers identified by the SafeStat model during the same period of time. The regression models do not categorize carriers by letter; the regression models produce a predicted crash risk for each carrier. The regression models make use of the safety evaluation area values, but they differ from the SafeStat model in this respect. The results show that a negative binomial regression model estimated with the safety evaluation area values outperforms the current SafeStat model in terms of predicting future crashes and the future crash rate among identified carriers that pose high crash risks. (See table 3.) That is, our negative binomial and Poisson models show 111 and 109 crashes per 1,000 vehicles per 18 months, respectively, compared with the 102 crashes per 1,000 vehicles per 18 months estimated by the current SafeStat model. The Poisson model is not as appropriate since the crash counts for carriers have variability that is significantly different from the mean number of crashes. The empirical Bayes method optimizes the selection of future crashes; however, it does so by selecting the largest carriers. The largest carriers have a lower crash rate per 1,000 vehicles per 18 months than the carriers that pose high crash risks identified by the SafeStat model or by the negative binomial regression model. Since the primary use of SafeStat is to identify and prioritize carriers for FMCSA and state compliance reviews, the empirical Bayes method did not identify carriers with the highest safety risk. | The Federal Motor Carrier Safety Administration (FMCSA) has the primary federal responsibility for reducing crashes involving large trucks and buses that operate in interstate commerce. FMCSA decides which motor carriers to review for compliance with its safety regulations primarily by using an automated, data-driven analysis model called SafeStat. SafeStat uses data on crashes and other data to assign carriers priorities for compliance reviews. GAO assessed (1) the extent to which changes to the SafeStat model could improve its ability to identify carriers that pose high crash risks and (2) how the quality of the data used affects SafeStat's performance. 
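As a complement to the model description above, the sketch below shows how the comparison summarized in table 3 could be carried out once both rankings are in hand: take the carriers each method flags as high risk and compute their crashes per 1,000 vehicles over the follow-up period. It is a simplified illustration under assumed data, not GAO's actual code; the column names (power_units for vehicle counts, followup_crashes for crashes in the 18 months after June 25, 2004, predicted_crashes for the regression's predicted mean, and safestat_ab as a Boolean flag for SafeStat categories A and B) are hypothetical.

```python
# Minimal sketch: compare the regression-based ranking with SafeStat's A and B
# list by computing crashes per 1,000 vehicles over the 18-month follow-up
# period. Column names are hypothetical.
import pandas as pd

carriers = pd.read_csv("carriers_with_scores.csv")

def crash_rate_per_1000_vehicles(group: pd.DataFrame) -> float:
    """Crashes per 1,000 vehicles over the 18-month follow-up period."""
    return 1000.0 * group["followup_crashes"].sum() / group["power_units"].sum()

# High-risk set from the regression model: the 4,989 carriers with the
# largest predicted mean crash counts.
regression_set = carriers.nlargest(4989, "predicted_crashes")

# High-risk set from SafeStat: carriers placed in categories A or B.
safestat_set = carriers[carriers["safestat_ab"]]

print("regression model:", round(crash_rate_per_1000_vehicles(regression_set), 1))
print("SafeStat A and B:", round(crash_rate_per_1000_vehicles(safestat_set), 1))
```

In the analysis reported in table 3, the corresponding figures were 111 crashes per 1,000 vehicles per 18 months for the negative binomial model and 102 for the current SafeStat model.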
To carry out its work, GAO analyzed how SafeStat identified high-risk carriers in 2004 and compared these results with crash data through 2005. While SafeStat does a better job of identifying motor carriers that pose high crash risks than does a random selection, regression models GAO applied do an even better job. SafeStat works about twice as well as (about 83 percent better than) selecting carriers randomly. SafeStat is built on a number of expert judgments rather than using statistical approaches, such as a regression model. For example, its designers decided to weight more recent motor carrier crashes twice as much as less recent ones on the premise that more recent crashes were stronger indicators of future crashes. GAO estimates that if FMCSA used a negative binomial regression model, FMCSA could increase its ability to identify high-risk carriers by about 9 percent over SafeStat. Carriers identified by the negative binomial regression model as posing a high crash risk experienced 9,500 more crashes than those identified by the SafeStat model over an 18 month follow-up period. The primary use of SafeStat is to identify and prioritize carriers for FMCSA and state compliance reviews. FMCSA measures the ability of SafeStat to perform this role by comparing the crash rate of carriers identified as posing a high crash risk with the crash rate of other carriers. Using a negative binomial regression model would further FMCSA's mission of reducing crashes through the more effective targeting of compliance reviews to the set of carriers that pose the greatest crash risk. Late-reported, incomplete, and inaccurate data reported to FMCSA by states have been a long-standing problem. However, GAO found that late reported data had a small effect on SafeStat's ability to identify carriers that pose high crash risks in 2004. If states had reported all crash data within 90 days after occurrence, as required by FMCSA, a net increase of 299 carriers (or 6 percent) would have been identified as posing high crash risks of the 4,989 that FMCSA identified. Reporting timeliness has improved, from 32 percent of crashes reported on time in fiscal year 2000, to 89 percent in fiscal year 2006. Regarding completeness, GAO found that data for about 21 percent of the crashes (about 39,000 of 184,000) exhibited problems that hampered linking crashes to motor carriers. Having complete information on crashes is important because SafeStat treats crashes as the most important factor for assessing motor carrier crash risk, and crash information is also the crucial factor in the statistical approaches that we employed. Regarding accuracy, a series of studies by the University of Michigan Transportation Research Institute covering 14 states found incorrect reporting of crash data is widespread. GAO was not able to quantify the effect of the incomplete or inaccurate data on SafeStat's ability to identify carriers that pose high crash risks because it would have required gathering crash records at the state level--an effort that was impractical for GAO. FMCSA has acted to improve crash data quality by completing a comprehensive plan for data quality improvement, implementing an approach to correct inaccurate data, and providing grants to states for improving data quality, among other things. |
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss the General Services Administration’s (GSA) progress in upgrading the security of federal buildings under its operation. As you know, following the April 19, 1995, bombing of the Murrah Federal Building in Oklahoma City, the President directed the Department of Justice (DOJ) to assess the vulnerability of federal office buildings, particularly to acts of terrorism and other forms of violence. Under the direction of DOJ, an interagency working group comprising security professionals from nine federal departments and agencies issued, in June 1995, a report recommending specific minimum security standards for federal buildings. Subsequently, the President directed executive departments and agencies to upgrade the security of their facilities to the extent feasible based on the DOJ report’s recommendations. The President gave GSA this responsibility for the buildings it controls, and in July 1995, GSA initiated a multimillion-dollar security enhancement program for its 8,300 buildings. You requested that we evaluate the GSA building security upgrade program. Specifically, you asked that we determine (1) what criteria GSA used to assess security risks and prioritize security upgrades for its buildings, (2) the implementation and operational status of GSA’s security upgrade program and the costs GSA has incurred by both funding source and type of security upgrade (such as x-ray machines and security guards), and (3) whether any problems have hindered GSA’s implementation of the security upgrade program. In summary, mistakes made in trying to meet the timetables in the DOJ report because of GSA’s sense of urgency to upgrade security in its buildings, reduced staffing due to downsizing, data reliability problems, and uncertain funding sources have hindered GSA’s upgrade program implementation. Because of data reliability problems, neither GSA nor we can specify the exact status or cost of the building security upgrade program, and because GSA has not established program outcome measures, neither GSA nor we know the extent to which completed upgrades have resulted in greater security or reduced vulnerability for federal office buildings. Thus, GSA is not in a good position to manage its program to mitigate security threats. Before presenting specific information on our findings, I would like to provide some information on our scope and methodology. In responding to your request, we interviewed key GSA officials in Washington, D.C.; and in GSA Regional Offices in Atlanta, GA; Ft. Worth, TX; Denver, CO; and the National Capital Region of Washington, D.C.; and obtained and reviewed the DOJ report as well as documents from GSA relating to the planning, implementation, and operation of the security upgrade program. We held discussions and obtained data from representatives of the Office of Management and Budget (OMB), the GSA Office of Inspector General (OIG), and several federal agencies that have organizational units in GSA-owned and -leased buildings. We did not evaluate the appropriateness of the DOJ building security standards or the effectiveness of either GSA’s building security program or security programs administered by other agencies. We did our work from July 1997 to May 1998 in accordance with generally accepted government auditing standards. We have included more details about our scope and methodology and additional details about our findings in appendixes I through IV.
Building security committees (BSCs), made up of representatives of the federal agencies in each building, were responsible for assessing the security needs of a given building and were to be assisted by a GSA regional physical security specialist in identifying and estimating the costs of needed security upgrades. GSA assigned initial risk level designations to its buildings based on information available at the time, pending more definitive determinations by the BSCs. Generally, GSA prioritized its buildings for receiving security upgrades based on risk level, with the higher risk buildings receiving security assessments and needed upgrades first. (For further details, see app. II.) Although GSA has completed many building security assessments, upgrade cost estimates, and upgrades, we weren’t able to reliably determine the total numbers of upgrades completed nationally because GSA’s upgrade tracking system contained incomplete and erroneous data. According to GSA, tracking system data were unreliable because its regional staff did not always appropriately or accurately record upgrade transactions in the tracking system. The Federal Protective Service (FPS)—the physical security and law enforcement arm of GSA’s Public Buildings Service—developed a computerized database system to track the status of all building security committee-requested upgrades. The system, which became fully operational in early 1996, was designed to track by region, and by building, upgrades requested, approved, and completed, as well as upgrade cost estimates. The tracking system was also intended to serve in part as a forerunner to a larger government-wide database of security upgrades in all federal buildings, as required by Executive Order 12977, dated October 19, 1995. The order created the Interagency Security Committee, which was to be chaired by GSA’s administrator or his designee and was to comprise representatives from 17 federal agencies and specific individuals. The Committee was established to enhance the quality and effectiveness of security in buildings and facilities occupied by federal employees. Examples of upgrades made to GSA buildings around the country included (1) concrete bollards constructed around building perimeters, (2) security cameras installed and in use both inside and outside of buildings, and (3) metal detectors and x-ray machines installed at building entrances and operated by GSA or contract security personnel. However, based on our work and that of the OIG, we do not believe that a reliable determination of the building security upgrade program’s status can be made because of errors in the upgrade tracking system. GSA’s upgrade tracking system contained errors related to the number of upgrades approved and the number completed in 24, or 45 percent, of the buildings we reviewed and in 65, or 54 percent, of the buildings reviewed by the OIG. For example, (1) some upgrades that were shown as approved and completed in the tracking system in fact were not completed, and the requests for the upgrades had been cancelled; and (2) some approved upgrades shown in the system as completed weren’t complete, and in fact the related security equipment was boxed and stored. According to GSA, these errors occurred because GSA personnel didn’t always appropriately or accurately record the status of the upgrades in the tracking system. In addition to these errors related to upgrades approved, completed, and cancelled in the tracking system, we have concerns about whether all GSA buildings have been evaluated for security needs.
We found that, as of October 1997, the nationwide upgrade tracking system contained little or no evidence that building security evaluations had been done for 754 GSA buildings, 14 of which were level-IV buildings. We judgmentally selected a sample of 26 of the 754 buildings and attempted to determine whether a security evaluation had been done by contacting a representative from each building’s security committee during December 1997 and January 1998. Representatives from 22 of the 26 buildings responded. Of the 22, representatives of 5 buildings told us that a building evaluation wasn’t done, 6 said they weren’t sure whether one was done, and 7 representatives said that the evaluations were done, but the remaining 4 representatives said that evaluations weren’t applicable for their buildings because (1) the lease for the federal agency tenants in the building had been terminated, (2) the building was leased and used only for storage purposes, (3) the building was a maintenance garage with access limited to agency personnel, and (4) the building was no longer in use. For the 11 building representatives that said a building evaluation was not done or that they weren’t sure, we asked whether they believed that their buildings’ current levels of security met the DOJ minimum standards. Representatives of four buildings said “yes”; five said they didn’t know; and two said that the standards weren’t applicable to their specific buildings because the agencies were moving out of the buildings. Four of the five that said they didn’t know also said that they weren’t aware of the DOJ minimum security standards. Similarly, we found no evidence in GSA’s building files that security evaluations had been done for a number of buildings that had no requests for security upgrades in the tracking system. During the latter part of 1997, we judgmentally selected 50 buildings in two GSA regions that showed no requests for security upgrades in the tracking system, and we found no evaluations on file for 12, or 24 percent, of the buildings. GSA had initially classified 3 of these 12 buildings as level IVs, 8 as level IIIs, and 1 as level II. Ten of the 12 buildings were in one GSA region. FPS officials told us that they weren’t sure whether evaluations had been done for all GSA buildings. They said that although they had attempted to obtain evaluations for all buildings, not all BSCs had provided evaluations. In addition to being unable to reliably determine the program’s operational and implementation status, we also couldn’t reliably determine the actual costs or obligations incurred by GSA for security upgrades because GSA’s accounting system, like its tracking system, contained significant errors. Further, we couldn’t determine the actual costs incurred by type of security upgrade because GSA said that its accounting system was not designed to account for costs by upgrade type. Nevertheless, based on the existing accounting system data, we estimate that from October 1, 1995, through March 31, 1998, GSA obligated roughly $353 million for the building security upgrade program nationally. The source of those funds was the Federal Buildings Fund (FBF). As you know, the Fund consists primarily of rent that GSA charges federal agencies for space and is administered by GSA. It is the primary means of financing the capital and operating costs associated with GSA-controlled federal space. In an effort to determine how much money was available to complete approved upgrades and to reallocate funds among its regions, FPS compared the obligations recorded in GSA’s accounting system with the upgrades recorded in its tracking system to identify unneeded upgrade funding allowances that could be shifted to regions in need of funds to complete upgrades.
In FPS’ analysis, it identified over $5 million in obligations shown in the accounting system for upgrades in 109 buildings in 10 GSA regions for which there were no corresponding approved upgrades shown in the tracking system. FPS found that (1) $0.9 million of the obligations related to other GSA programs rather than the building security upgrade program; (2) $0.6 million in obligations related to upgrades that in fact had been completed but were shown in the tracking system as cancelled and voided by GSA; (3) $1.2 million in obligations related to upgrades completed in other buildings; and (4) $1.6 million were valid obligations, but the corresponding upgrades had inadvertently not been entered into the tracking system. FPS was uncertain about the remaining discrepancies. FPS found similar problems relating to upgrades recorded in the tracking system for which there were no corresponding obligations recorded in the accounting system. Because of these errors or discrepancies in the obligations data, FPS was unable to complete its efforts to reallocate funds among regions for over 2 months. We discuss these discrepancies in more detail in appendix III. In addition to the unreliable nature of the data in the upgrade tracking and accounting systems, several other problems have hindered and slowed GSA’s implementation of the security upgrade program. These included (1) funding source uncertainties; (2) mistakes made to meet deadlines by a downsized staff, as well as a sense of urgency to rapidly complete as many security upgrades as possible; and (3) unreliable upgrade cost estimates. As a result of these problems, GSA was not able to meet several program implementation goals. In addition, GSA lacks information about the benefits of upgrades relative to their costs; has not established specific program effectiveness goals, outcomes, or measures; and doesn’t know whether and to what extent federal office buildings’ vulnerability to acts of terrorism and other forms of violence has been reduced. Funding source uncertainties: Although GSA has so far paid for the security upgrades from the Federal Buildings Fund, uncertainty continues to exist regarding the source of funds for the building security program. While GSA has projected about $260 million in obligations for fiscal year 1998 and budgeted about $251 million in obligations for fiscal year 1999 on building security, GSA and OMB have not yet reached complete agreement on how best to fund all the future costs of the program. Once GSA and OMB agree on how to fund increased security costs, the increased funding would be contingent on congressional approval in the appropriations process. Timetables, staff, and urgency issues: According to GSA officials, GSA wanted to add as much security as possible in federal buildings before the first anniversary of the Oklahoma City bombing—April 19, 1996. The officials said that this sense of urgency, coupled with the program implementation timetables in the DOJ report and limited availability of staff due to downsizing, led to security upgrade decisions being made with the information available, recognizing that planning and implementation adjustments would likely be necessary. They acknowledged that, as a result, some initial efforts suffered and some mistakes were made. GSA and agency staffs at GSA-controlled buildings had about 3.5 months after issuance of the DOJ report to do security assessments and develop upgrade cost estimates for several hundred level-IV buildings, and they had about 7 months to do the same for several thousand lower level buildings.
According to FPS staff and a member of the DOJ report task force from the U.S. Marshals Service, there was little time available to develop the desired level of implementing guidance and training for FPS staff and the thousands of BSCs. Further, they said that the ratio of GSA-operated buildings to FPS physical security specialists added to the difficulties. For example, in one GSA region, we were told that the region had responsibility for about 1,000 buildings but had only 15 FPS physical security specialists available to assist BSCs with the building risk assessments. Nationwide, a total of about 200 FPS physical security specialists were responsible for assisting in the assessment of over 8,000 GSA-operated buildings. In some cases, GSA had to devise alternative security measures, which sometimes required additional funds.

Unreliable cost estimates: A number of the initial cost estimates for upgrades recorded in the tracking system proved unreliable. In an effort to determine how much money was available to complete approved upgrades and reallocate funds among its regions, GSA analyzed upgrade cost estimates versus the actual obligations required to complete the upgrades in many of its buildings, and it found that many of the initial cost estimates were unreliable. For example, the estimated costs in the tracking system of completed upgrades for a group of 98 buildings in 11 GSA regions were about $10.4 million, while the actual obligations to complete the upgrades recorded in the accounting system were about $29 million—that is, obligations to complete the upgrades were over $18 million more than the estimated costs of completing the upgrades. According to GSA, the initial cost estimates were made using the general guidance contained in the DOJ report. Although more accurate cost estimates were made as the upgrade implementation progressed, the upgrade tracking system was not designed to readily capture the revised cost estimates. Without more accurate cost estimates, GSA decisionmakers were not in the best position to judge the cost/benefit of various upgrade options or to reliably estimate funds needed to complete approved upgrades. The unreliable cost estimates, together with the unreliable status and cost data in the upgrade tracking and accounting systems, the funding source uncertainties, the reduced staff levels, the mistakes made to meet program deadlines, and GSA's sense of urgency to complete upgrades as quickly as possible, hindered the implementation of the upgrade program. Thus, GSA was unable to fully meet program timetables established in the DOJ report and several upgrade implementation goals it had established internally. Further, because additional security upgrade requests were received in the last half of fiscal year 1997, and additional funds were needed to complete previously approved upgrades, GSA estimated in October 1997 that it would need about $7.8 million in additional funds in fiscal year 1998 to complete the upgrades approved as of September 26, 1997. Level-V buildings generally were not included in the upgrade program because the DOJ report recommended that agencies (usually those involved in national security issues) in these buildings secure the buildings according to their own requirements. Although the DOJ report did not specify goals for GSA’s completion of the security upgrades, GSA established and subsequently revised internal goals for completing upgrades in all its buildings several times in 1996 and 1997.
GSA has indicated that it met the goals established by the DOJ report for evaluating the security needs and estimating the costs of upgrades for all level-IV buildings: In November 1995, GSA told the Senate Subcommittee on Transportation and Infrastructure that, in accordance with the DOJ report’s recommendation and the President’s directive, it had established 429 level-IV building security committees, and it had received over 2,500 upgrade requests from these committees. Also, later that same month, GSA told OMB that $222.6 million would be needed in fiscal years 1996 and 1997 to pay for the upgrades in these 429 buildings. However, we believe that GSA did not fully meet either goal specified in the DOJ report because (1) security evaluations were not made for some level-IV buildings until after November 1995 and (2) in October 1997, much later than the DOJ report’s target dates of October 15, 1995, and February 1, 1996, we found indications that not all of GSA’s buildings, including some level-IV buildings, had been evaluated for security needs. In addition, GSA reported to us that, by March 1996, the number of level-IV buildings had increased to over 700. GSA stated that the increase was partly caused by DOJ’s request that GSA reclassify certain buildings containing court-related tenants from lower levels to level IV, and partly by additional level-IV BSCs’ decisions to conduct building evaluations and provide GSA with upgrade requests after November 1995. Concerning GSA’s internal goals, GSA initially established a goal to have all security upgrades completed for level-IV buildings by September 30, 1996. When it did not meet the September 30, 1996, goal, GSA established a new goal to have upgrades completed in all buildings, including level IVs, by September 30, 1997. This goal was not met either, and GSA’s current goal is September 30, 1998, for completing all upgrades approved as of September 26, 1997. GSA’s tracking system indicated that GSA had completed about 85 percent of the approved upgrades for all buildings as of October 3, 1997, and reached the 90-percent mark by March 31, 1998. However, because of the errors in the tracking system, GSA was not in a position to reliably determine how many of the approved upgrades had actually been completed, or to know whether upgrades reported as completed were actually complete and operating as planned. GSA needs this information to justify expenditures for security upgrades and to make changes in its security program if and when appropriate. For example, Social Security Administration (SSA) officials expressed concern to GSA about certain security upgrades that GSA initially placed in some SSA-occupied buildings. SSA was concerned about both the need for and the costs of purchasing and operating the upgrade equipment. After negotiations, GSA removed some upgrades from some SSA locations. Further, security-related evaluations, which GSA security staff were doing prior to the Oklahoma City bombing, were curtailed because these staff were needed to help implement the upgrade program, and at the time of our review, these evaluations had not been resumed. In addition, GSA also had not fully implemented a key recommendation, from an internal “lessons learned” study done after the Oklahoma City bombing incident, to evaluate its current risk assessment methodology to ensure that a wider range of risks are addressed, with an increased emphasis on acts of mass violence. The principal conclusion of the October 1995 study was that GSA’s security and law enforcement processes currently in place did not adequately address the threat environment.
In a related issue, the Government Performance and Results Act of 1993 (the Results Act) requires every major federal agency to establish its mission, its goals and how they will be achieved, how its performance toward meeting its goals will be measured, and how performance measures will be used to make improvements. In accordance with the Results Act, GSA established its strategic plan, dated September 30, 1997, covering years 1998 through 2002. GSA’s building security program is specifically addressed in the 1997 plan and in its annual performance plan for fiscal year 1999. However, GSA did not identify in its strategic plan the security program evaluations it plans to do, and the 1999 annual performance plan did not state its goals and indicators for the security program in terms of outcomes or desired results as is called for by OMB in Circular A-11. Finally, although GSA’s data systems for tracking program status and funding had incorrect data, which hampered implementation, GSA has only recently initiated efforts to ensure that security program measurement data would be valid in connection with the security-related performance goal included in its 1999 annual performance plan prepared under the Results Act. Without more specific information on security program goals and results, GSA does not know the extent to which the upgrades have improved security or reduced federal office building vulnerability to acts of terrorism or other forms of violence. (For further details, see app. IV.)

We recommend that the GSA Administrator direct the PBS Commissioner to correct the data in GSA’s upgrade tracking and accounting systems and institute procedures to accurately record approved and completed upgrades in the upgrade tracking system and accurately record obligations incurred for security upgrades in the accounting system; review all GSA buildings to ensure that security evaluations have been done; complete agreements with OMB on the most appropriate means of providing sufficient funding for the security of GSA-operated buildings at the minimum standard levels recommended by the DOJ report; develop outcome-oriented goals and measures for its security program, identify security program evaluations to be done and implement them as appropriate, and identify the means by which FPS will verify and validate measurement data related to security program goals in GSA’s annual performance plan for 2000; and complete the internally recommended review of GSA’s current security risk assessment methodology, and once the appropriate risk assessment methodology is determined, resume GSA’s program of periodic building security inspections by GSA physical security specialists.

Uncertainty has existed since the inception of the program about the source of funds to pay for the capital and operating costs of the upgrades. For example, GSA initially had to use funds to pay for upgrades that had been intended for other purposes. Further, as we have pointed out, early in the program GSA placed on hold proposed costly upgrades, such as the purchase of parking areas adjacent to GSA buildings, because of funding concerns. Thus, while some security upgrades were put on hold due to lack of funds, our major concern is the uncertain funding sources that have confronted the program from its inception. In addition, the GSA officials stated that they have directed GSA regions to resume the periodic building inspection and risk assessment program placed on hold after the Oklahoma City bombing.
They said that the inspections are to resume shortly, and inspections for all level-IV buildings are to be completed by the end of fiscal year 1998. Also, the GSA officials said that they have begun to correct the data in the upgrade tracking system and will consider developing outcome-oriented goals for the security program that will be described in GSA’s Year 2000 annual performance plan. In addition, the GSA officials said that they have made substantial progress in discussions with OMB on adjusting agency rental charges to cover the cost of security, and they expect to reach agreement with OMB in time for the Year 2000 budget cycle. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or Members of the Subcommittee may have.

The objective of our work was to evaluate the GSA building security upgrade program. Specifically, we were to determine (1) what criteria GSA used to assess security risks and prioritize security upgrades for its buildings, (2) the implementation and operational status of GSA’s security upgrade program and the costs GSA has incurred by both funding source and type of security upgrade, and (3) whether any problems have hindered GSA’s implementation of the security upgrade program. To meet our first objective of determining the criteria GSA used to assess building security risks and prioritize its security upgrade implementation, we held discussions with GSA personnel; reviewed relevant correspondence, guidance, and other documentation on program implementation; reviewed building risk assessment files; and obtained and reviewed copies of GSA’s security upgrade tracking system database as of June 27, August 29, October 3, and December 30, 1997. We also had discussions with a member of the DOJ report task force at the U.S. Marshals Service, as well as with security personnel at the Social Security Administration and the Department of Health and Human Services, to obtain more insight into how the minimum standards were developed and how they were being implemented by GSA. To meet our objective of determining the security upgrade program’s implementation and operational status and costs, we reviewed the security upgrade tracking system database; compiled data on security upgrades requested, approved, completed, and voided; and compared our results with those compiled by GSA. We judgmentally selected and reviewed GSA building files for 53 buildings in 4 regions and visited 43 of these buildings to determine whether the upgrades were operational. We selected these files to provide a cross section of buildings at different risk levels with either high or low dollar upgrade cost estimates. We chose not to include level-I buildings in this sample because most upgrades were going into buildings at the higher risk levels. During our review, the GSA OIG’s Office of Audits also began a review of the GSA security upgrade program. We maintained contact with the OIG audit staff and coordinated our work. The GSA OIG audit staff shared with us three alert reports issued to and discussed with GSA management in October 1997, December 1997, and February 1998, concerning problems with erroneous upgrade completion data in the upgrade tracking system and instances of inefficient and ineffective use of security equipment in one or more of the four GSA regions reviewed. We referred to their findings in our report.
Further, we obtained and reviewed GSA budget information and actual obligations data from accounting reports generated from the NEAR system and from data compiled for us by GSA headquarters for fiscal year 1996 through the second quarter of fiscal year 1998. We also reviewed upgrade cost estimates contained in GSA’s security upgrade tracking system as well as documentation on GSA headquarters’ efforts during late August to October 1997 to correlate upgrade cost estimates recorded in the security upgrade tracking system with upgrade obligations data recorded in the accounting system for the purpose of reallocating unneeded upgrade funds among GSA regions. To determine any problems that may have hindered GSA’s implementation of the security upgrade program, we had discussions with GSA headquarters and regional staff in four regions; reviewed GSA correspondence and building files; performed analyses of the security upgrade tracking system databases; and made contacts with 22 of 26 selected building security committees that, according to GSA records, had not requested security upgrades. We also reviewed the results of GSA headquarters’ analyses made during late August to October 1997 of the security upgrade tracking system and accounting system that identified data errors and unreliable upgrade cost estimates. Further, we held discussions with responsible GSA and OMB staff to understand the concerns and ongoing debate related to the future funding of the GSA building security program at the enhanced levels. Finally, we discussed with FPS staff what procedures were in place for monitoring security operations and what efforts had been made to evaluate the security upgrade program, including actions taken on recommendations in an October 1995 internal FPS “lessons learned” report concerning its experiences following the Oklahoma City bombing incident. We also reviewed GSA’s 1997 strategic plan and 1999 annual performance plan required under the Results Act to determine the goals, performance measures, and outcomes that GSA had established for the building security program. We did our work primarily at GSA headquarters in Washington, D.C., and four GSA regional offices in Atlanta, GA—GSA Region 4; Denver, CO—GSA Region 8; Fort Worth, TX—GSA Region 7; and Washington, D.C.—GSA Region 11 (National Capital Region), between July 1997 and May 1998, in accordance with generally accepted government auditing standards. Because the various samples we used in our work were judgmentally selected, the results of the samples cannot be projected to the universes from which they were taken. We also did not evaluate the DOJ security standards or the effectiveness of GSA’s building security upgrade program or any other agency’s building security program.

In July 1995, the Federal Protective Service (FPS) began its process for identifying and prioritizing building security upgrade needs and cost estimates using the criteria, guidance, and timetable recommended by the DOJ report, which was issued on June 28, 1995. The DOJ report established 52 minimum security standards in 4 separate categories, which were to be considered for buildings under GSA’s operation based on their assessed risk level. GSA assigned initial risk level designations to its buildings based on information it had on file.
Building security committee (BSC) and FPS staff were to subsequently assign the buildings a risk level, using the DOJ report’s more definitive criteria, and evaluate them to determine needed security upgrades and the estimated costs for the upgrades. Using DOJ report criteria, BSC and FPS staff were to place buildings under GSA’s operation into risk levels. The DOJ criteria included tenant population, volume of public contact, size, and agency sensitivity, with level V the highest risk level and level I the lowest, as follows: Level V: A building that contains mission functions critical to national security, such as the Pentagon or CIA Headquarters. A Level-V building should be similar to a Level-IV building in terms of number of employees and square footage. It should have at least the security features of a Level-IV building. The missions of Level-V buildings require that tenant agencies secure the site according to their own requirements. Level IV: A building that has 451 or more federal employees; high volume of public contact; more than 150,000 square feet of space; and tenant agencies that may include high-risk law enforcement and intelligence agencies, courts, and judicial offices, and highly sensitive government records. Level III: A building with 151 to 450 federal employees; moderate/high volume of public contact; 80,000 to 150,000 square feet of space; and tenant agencies that may include law enforcement agencies, court/related agencies and functions, and government records and archives. (According to GSA, at the request of the Judiciary, GSA changed the designation of a number of buildings housing agencies with court and court-related functions from Level III to Level IV.) Level II: A building that has 11 to 150 federal employees; moderate volume of public contact; 2,500 to 80,000 square feet of space; and federal activities that are routine in nature, similar to commercial activities. Level I: A building that has 10 or fewer federal employees; low volume of public contact or contact with only a small segment of the population; and 2,500 or less square feet of space, such as a small “store front” type of operation. BSCs were also to prepare facility evaluations based on the DOJ minimum standards. The facility evaluations, containing requested security upgrades, justifications, and estimated costs for each upgrade were to be submitted to the applicable FPS regional offices for review and approval. Security upgrades costing more than $100,000 to acquire or having an annual operating cost greater than $150,000 required final approval at FPS headquarters. FPS regional staff focused their evaluation efforts on level-IV buildings first, followed by levels III through I, consistent with the timetable recommended by the DOJ report and endorsed by the President. Funding of upgrades generally followed this same progression, with FPS focusing first on level-IV buildings and then levels III through I. Each FPS region established its own building security upgrade implementation schedule based on coordination with other involved PBS components and the individual requirements of the various types of security upgrades. For example, some upgrades required design and engineering work before actual installation could proceed, and some required coordination and approvals from local governments and historical building societies before work could proceed. In early 1996, FPS completed a computerized database system to track, by regional office and by building, all BSC-requested security upgrades. 
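The level criteria summarized above lend themselves to a simple illustration. The following sketch encodes only the employee-count and square-footage thresholds from the DOJ criteria described in this statement; it is a simplification, since actual level assignments also weighed volume of public contact, tenant sensitivity, and the judgment of BSC and FPS staff, and the function itself is an illustrative assumption rather than any GSA procedure.

# Simplified sketch of the DOJ report's building risk-level thresholds as summarized above.
def assign_risk_level(employees: int, square_feet: int,
                      national_security_mission: bool = False) -> str:
    if national_security_mission:
        return "V"    # secured by the tenant agency's own requirements
    if employees >= 451 and square_feet > 150_000:
        return "IV"
    if employees >= 151 and square_feet >= 80_000:
        return "III"
    if employees >= 11 and square_feet > 2_500:
        return "II"
    return "I"        # 10 or fewer employees, 2,500 square feet or less

print(assign_risk_level(600, 200_000))  # IV
print(assign_risk_level(8, 2_000))      # I

Buildings whose employee counts and square footage pointed to different levels, or that housed court-related or other sensitive tenants, required case-by-case judgment, which is one reason the number of level-IV buildings grew after the initial designations.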
This tracking system was to include the date each upgrade was approved or disapproved; the estimated cost of acquiring, installing, and operating the upgrade; and its scheduled and actual completion status. Each FPS region was to have a database of its buildings and was responsible for maintaining its database. FPS headquarters staff periodically uploaded and entered data into each region’s database to show headquarters’ approval actions on requested upgrades, where required. FPS headquarters staff also consolidated the regional databases for its own use in tracking the nationwide security upgrade program. The DOJ report noted that level-V facilities required tenant agencies to secure their facilities according to their own requirements, and that the degree to which those requirements dictate security features in excess of those for a level-IV facility should be set by the individual agencies. For this reason, except for two approved level-V upgrades requiring capital and/or operating funding, risk level IV was the highest level included in GSA’s security upgrade program.

The DOJ report established 52 minimum security standards in the categories of perimeter security, entry security, interior security, and security planning to be considered for a building based on its assessed risk level. Tables II.1 through II.4 show how the DOJ report’s minimum security standards are to be applied to each building on the basis of its assessed risk level; in the tables, each standard is designated for each risk level as a minimum standard, a standard to be applied based on the facility evaluation, desirable, or not applicable. For example, control of facility parking is recommended as a minimum standard for buildings in security levels III through V and recommended as desirable for buildings in security levels I and II. The perimeter security standards (table II.1) include, for example: avoiding leases in which parking cannot be controlled; providing for security control of adjacent parking in leases; posting signs and arranging for towing of unauthorized vehicles; identification systems and procedures for authorized parking (placard, decal, card key, etc.); adequate lighting for parking areas; closed circuit television (CCTV) monitoring; CCTV surveillance cameras with time-lapse video recording; posting signs advising of 24-hour video surveillance; lighting with emergency power backup; and extending the physical perimeter with concrete and/or steel barriers. The entry security standards (table II.2) include, for example: reviewing current receiving/shipping procedures; implementing modified receiving/shipping procedures; evaluating the facility for security guard requirements; intrusion detection systems with central monitoring capability; upgrading to current life safety standards (fire detection, fire suppression systems, etc.); x-ray and magnetometer screening at public entrances; x-ray screening of all mail and packages; and entry control with CCTV and door strikes. The interior security and security planning standards (tables II.3 and II.4) include, for example: agency photo identification for all personnel, displayed at all times; preventing unauthorized access to utility areas; providing emergency power to critical systems (alarm systems, radio communications, computer facilities, etc.);
examining occupant emergency plans (OEP) and contingency procedures based on threats; keeping OEPs in place, updated annually, with periodic testing exercises; assigning and training OEP officials (assignment based on the largest tenant in the facility); establishing law enforcement agency/security liaisons; reviewing or establishing procedures for intelligence receipt and dissemination; conducting annual security awareness training; establishing standardized unarmed and armed guard qualifications and training requirements; co-locating agencies with similar security needs and not co-locating high- and low-risk agencies; establishing flexible work schedules in high-threat/high-risk areas to minimize employee vulnerability to criminal activity; arranging for employee parking in or near the building after normal work hours; conducting background security checks and/or establishing security control procedures for service contract personnel; and installing mylar film on all exterior windows for shatter protection.

GSA’s upgrade tracking system showed that as of March 31, 1998, about 7,000 building security upgrades had been completed, and we estimate that roughly $353 million was obligated for upgrades between October 1, 1995, and March 31, 1998. The source of funds expended on the upgrade program was the FBF. However, actual cost information by upgrade type was not readily available, and the data on the implementation status and actual costs of GSA’s security upgrade program are unreliable. We could not reliably determine the completion and operational status of security upgrades in GSA’s buildings because upgrade status data were not accurately recorded in the tracking system. Further, the accuracy and reliability of the obligations data are questionable because of errors made by GSA personnel when recording upgrade obligations transactions into the accounting system.

GSA’s upgrade tracking system showed that as of March 31, 1998, about 7,800 upgrades were approved and about 7,000 upgrades were completed in federal buildings across the United States. However, the data shown by the tracking system were not reliable because the tracking system contained numerous errors. According to GSA, these errors occurred because its regional personnel did not always appropriately or accurately record upgrade transactions into the tracking system. Our review of security upgrade program records of 53 buildings and our visits to 43 of these buildings in 4 regions, as well as visits by the GSA OIG’s audit staff to 121 buildings in 4 GSA regions, showed that GSA has implemented numerous upgrades in buildings throughout the country. However, through these visits, errors were identified in the tracking system related to the number of upgrades approved and completed in 24, or 45 percent, of the buildings we reviewed and in 65, or 54 percent, of the buildings reviewed by the OIG. Our comparison of tracking system data for the 53 GSA buildings with information from FPS building files and our observations at the buildings showed errors affecting completion rates (for 24 buildings) and other information (for 6 buildings) in the tracking system for 30 of these buildings, or about 57 percent; the error rate ranged from 46 percent of the buildings reviewed in region 8 to 70 percent in region 11. Our work related to these buildings was done during the period August through December 1997 in GSA regions 4, 7, 8, and 11. The GSA OIG staff’s work related to these buildings was reported on in October and December 1997 and in February 1998 in GSA regions 1, 4, 7, and 11.
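To illustrate the kinds of fields the tracking system was designed to carry (approval dates, estimated costs, and completion status) and how a completion rate can be derived from them, the following sketch uses assumed field names and status values; it is not the actual FPS database design.

# Illustrative sketch of a per-upgrade tracking record; field names and status values are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class UpgradeRecord:
    building: str
    upgrade_type: str                              # e.g., "CCTV", "x-ray/magnetometer"
    status: str                                    # e.g., "approved", "completed", "pending", "voided"
    approval_date: Optional[date] = None
    estimated_capital_cost: float = 0.0
    estimated_annual_operating_cost: float = 0.0
    actual_completion_date: Optional[date] = None

def completion_rate(records: List[UpgradeRecord]) -> float:
    """Share of approved-or-completed upgrades that are completed."""
    approved = [r for r in records if r.status in ("approved", "completed")]
    completed = [r for r in approved if r.status == "completed"]
    return len(completed) / len(approved) if approved else 0.0

records = [
    UpgradeRecord("BLDG-001", "CCTV", "completed"),
    UpgradeRecord("BLDG-001", "x-ray/magnetometer", "approved"),
]
print(completion_rate(records))  # 0.5

A rate computed this way depends entirely on how records are categorized: upgrades parked in a "pending" status fall out of the denominator, which overstates reported completion, as the regional example discussed below shows.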
We and the OIG audit staff reviewed 4 of the same buildings—2 in Region 7 and 2 in Region 11. For 20 of the buildings we reviewed, the completion data were in error: (1) for some buildings, upgrades shown as completed in the tracking system had not been completed and (2) for four buildings, completed upgrades were not shown in the tracking system. In addition, in nine of the 20 buildings, we found security upgrades that were initially approved and then subsequently cancelled, but were still shown as approved upgrades in the tracking system. Further, for six of the buildings, we found other discrepancies between the buildings’ records and the upgrade tracking system. For example, some buildings’ risk level designations, security upgrade cost estimates, and types of upgrades approved were inaccurately recorded in the tracking system. Additionally, in one region, we found that the completion status of the region’s security upgrade program was overstated and erroneously reported to FPS headquarters because regional FPS staff were inappropriately accounting for some upgrades. According to regional FPS staff, the term “pending” may have been used to categorize upgrades that had been approved and not completed because (1) the upgrades were for new buildings being constructed or (2) the contracts for purchasing the upgrade equipment had not been signed or GSA funds obligated. Further, the regional staff thought headquarters had instructed that new upgrades approved after March 31, 1997, would not be funded in fiscal year 1997 and should be put in a pending status for funding in fiscal year 1998. According to FPS headquarters staff, the “pending” category was intended only for upgrades not yet approved. FPS headquarters officials became aware of this issue late in fiscal year 1997 while attempting to reallocate among regions funds needed to complete approved upgrades. FPS then instructed the regions to ensure that all approved upgrades were categorized as “approved” in the tracking system because all “pending” upgrades as of September 26, 1997, would be considered for funding at a later time. Because of the confusion over the intent of the term pending for categorizing upgrades, this region reported in August 1997 an upgrade completion rate of 99.6 percent for level-IV buildings. However, once these pending upgrades were changed to approved, the region’s completion rate decreased to 77 percent in October 1997. For the same reason, the region’s upgrade completion rate for level-I through level-III buildings also dropped from about 65 percent to about 56 percent over this same period. GSA’s completion goal for all level-IV building upgrades was 100 percent by the end of fiscal year 1997. GSA’s OIG issued three separate audit “alert” reports with significant findings related to the building security upgrade program. The OIG audit staff’s visits to 121 buildings in GSA regions 1, 4, 7, and 11 showed that 65 buildings, about 54 percent, had upgrades reported as completed in the tracking system that were not completed. In fact, the OIG staff found instances where security upgrade equipment reported as completed was actually stored (sometimes in its original packaging), missing, or not operational. For example, in region 11, upgrades for 32 buildings involving equipment, such as x-ray scanners and magnetometers used to screen people and packages, were shown in the tracking system as completed but were actually missing, not operational, or in storage. In addition, the OIG staff found problems, similar to those we found, related to security upgrades in 33 of the 69 buildings they visited in regions 1, 4, and 7.
They found that upgrades shown in the tracking system as completed had not been installed because of changes in building security needs, use of alternative security measures, or building lessors’ opposition to the installation of the planned security upgrades. Finally, in a separate report, the OIG stated that security equipment costing about $2 million, such as X-ray devices, magnetometers, and cameras purchased for the upgrade program, was found stored in two storage rooms in region 11. Much of the equipment was in its original packaging. The OIG reported that at that time, the GSA region had no immediate plans for using the equipment. These reports were issued to the Assistant Commissioner, FPS. They were reports A70659/P/2/R98001, dated Oct. 1, 1997; A80613/P/2/R98006, dated Dec. 11, 1997; and A80615/P/2/R98012, dated Feb. 11, 1998. Our sample was selected to obtain a cross section of GSA regions and building risk levels, and included buildings in 9 of 11 GSA regions and security risk levels I through IV. As discussed earlier, we contacted security committee representatives for 26 buildings for which the tracking system showed little or no evidence that security evaluations had been done. Representatives from 22 of the 26 buildings responded: 5 told us that a building evaluation was not done, 6 said they were not sure whether one was done, and 7 said that the evaluations were done. The remaining 4 representatives said that evaluations were not applicable for their buildings because (1) the lease for the federal agency tenants in the building had been terminated, (2) the building was leased and used only for storage purposes, (3) the building was a maintenance garage with access limited to agency personnel, or (4) the building was no longer in use. For the 11 building representatives that said a building evaluation was not done or that they were not sure, we asked whether they believed that their building’s current level of security met the DOJ minimum standards. Representatives of four buildings said “yes”; five said they did not know; and two said that the standards were not applicable to their specific buildings because the agencies were moving out of the buildings. Four of the five that said they did not know also said that they were not aware of the DOJ minimum security standards. Similarly, we found no evidence in GSA’s building files that security evaluations had been done for a number of buildings that had no requests for security upgrades in the tracking system. During the latter part of 1997, we judgmentally selected 50 buildings in 2 GSA regions that showed no requests for security upgrades in the tracking system, and we found no evaluation on file for 12, or 24 percent, of the buildings. Of these 12 buildings, 1 was a level II, 8 were level IIIs, and 3 were level IVs. Ten of the 12 buildings were in one GSA region. FPS regional officials told us that they were not sure that evaluations had been done for all GSA buildings. They said that although they had attempted to obtain evaluations on all buildings, not all building security committees had provided evaluations.

Table III.1 provides upgrade completion status information we compiled from the tracking system as of different points in time during program implementation. The note at the end of the table provides upgrade status information as of March 31, 1998, which was provided to us by FPS in late April 1998. Based on data obtained from GSA’s accounting system, we estimated that from October 1, 1995, through March 31, 1998, obligations of roughly $353 million were incurred for the building security upgrade program, and all of these funds were obtained from the FBF.
However, we could not readily obtain actual cost information by upgrade type because, according to GSA, its accounting system was not designed to provide obligations incurred by upgrade type. In addition, the obligations data shown by the accounting system were not reliable because GSA personnel did not always appropriately and accurately record the obligations incurred for upgrades in the accounting system. Although actual cost information by upgrade type was not readily available, to provide an indication of the costs incurred by GSA by upgrade type, we compiled from the upgrade tracking system data showing the estimated costs of upgrades by upgrade category. These estimated costs data are shown in table III.2. However, as we discuss in detail in appendix IV, many of the estimated costs of upgrades differed significantly from the actual obligations incurred by GSA to complete the upgrades. Table III.2: Summary of Estimated Costs of Approved Security Upgrades Types by Category as of December 30, 1997 Perimeter Security—includes closed circuit televisions, physical barriers, security lighting, fences, gates, etc. Entry Security—includes access control systems, X-rays/ magnetometers, security guards, intrusion detection systems, security locks, etc. Interior Security—includes employee/visitor ID, emergency power backup, etc. Other security planning—intelligence sharing, training, tenant assignment, construction/ renovation, etc. In August 1997, FPS headquarters staff attempted to identify regions having unneeded upgrade funding allowances that could be shifted to other regions needing funds to complete approved upgrades. They were unable to complete this effort until October 1997 because of the numerous discrepancies they found between the upgrade obligations in GSA’s accounting system and the approved and completed upgrade data in the tracking system. Although not all of the discrepancies could be explained, FPS regional staff’s research provided some insight. In one FPS headquarters analysis, obligations totaling $5.1 million for upgrades in 109 buildings in 10 GSA regions were found in the accounting system, but no corresponding approved upgrades were found in the tracking system. These obligations ranged from $16 to $662,912. Regional staff were able to determine the cause for most of this $5.1 million discrepancy between the accounting and tracking systems—$1.2 million had been recorded in error to other buildings; $1.6 million were valid obligations but the corresponding upgrades had inadvertently not been entered into the tracking system; about $0.6 million were valid, but corresponding upgrades had been cancelled and voided in the tracking system; $0.9 million had been erroneously entered into the accounting system—the obligations were not related to the building security upgrade program. In a second FPS headquarters analysis, the tracking system showed completed upgrades for 386 buildings in 10 regions with estimated costs of about $9.7 million, for which there were no corresponding obligations recorded in the accounting system. 
Regional staff were able to explain some of these discrepancies: (1) about $2 million of the $9.7 million in estimated upgrade costs were borne by either the tenant agencies or the building lessors, not by GSA; (2) about $0.2 million in obligations were recorded in error in the accounting system to other FBF programs instead of the security upgrade program; and (3) about $0.2 million related to upgrades erroneously recorded in the tracking system as completed when, in fact, they had been voided.

Table III.3 compares contract security guard and security system capital budgets and obligations obtained from GSA’s accounting system for fiscal years 1996 through March 31, 1998, with similar obligations for fiscal year 1994, prior to the Oklahoma City bombing. As shown by the table, GSA’s contract guard costs have risen significantly from $23 million in 1994 to almost $63 million through only the first 6 months of fiscal year 1998. According to the regions, although the approved upgrades discussed above were voided after it was determined that the upgrades were not needed for the buildings originally intended, the security equipment purchased through these obligations would be used in other buildings. The table shows, in thousands of dollars, budget allowances and obligations for contract security guards (K-2x) and security systems (K-36) under budget activity 61 and for security upgrade program capital costs (K-36) under budget activities 61 and 54, with totals for fiscal years 1996 through 1998 (as of March 31, 1998).

Note 1: GSA’s accounting system provides for coding FBF obligations by budget activities, such as basic repairs and alterations (BA-54) and building operations (BA-61), which have been the primary budget activities funding the building security upgrade program. Within the FBF budget activities, the system also provides for coding obligations by functions, such as the K-series function codes that were established for the FPS law enforcement and security programs. The primary K-codes applicable to the upgrade program have been the K-1x series—uniformed operations (police officers), K-2x series—contract guard services, and K-36—security systems/equipment. GSA established new, specific K-codes to enable identifying and tracking (1) police and contract guard services for the upgrade program (operations costs) as distinguished from the level of contract guard services for normal security prior to the Oklahoma City bombing and from the level of police and contract guard services for moderate security provided since the bombing and (2) the capital costs of upgraded security systems equipment and other capital security measures, such as building perimeter barriers and parking lot fencing and gates. However, GSA did not establish the new K-codes for the upgrade program until March 1996, nearly 6 months into fiscal year 1996 activities. PBS Controller staff advised us that upgrade costs such as for police and contract guard services were not always correctly coded as upgrade program costs and that costs charged to normal security operations prior to the new K-codes may not have been corrected. Thus, for this table, we are showing the regional budget allowances GSA provided and the obligations reported in the accounting system for all BA-61 contract guard services, K-2x series, and security systems upgrades, K-36, and for BA-54, the K-36 capital upgrade obligations recorded. However, GSA did not issue specific BA-61 regional budget allowances for K-36, so the budget amounts GSA gave us are for all BA-61, K-3x series function codes.
FPS has managed its police officer operations as a separate program from the building security upgrade program, and thus we have not included the K-1x series in the above table. From fiscal year 1996 through March of fiscal year 1998, about $65.592 million had been obligated in the K-1x series for federal protective police officers. Totals may not add up due to rounding.

Note 2: Not shown in this table are FBF funds appropriated in fiscal year 1997 for security upgrade capital costs under GSA’s new construction program (BA-51) of about $27.3 million and major repairs and alterations (BA-55) of $2.7 million. Of the $27.3 million from BA-51, GSA provided budget allowances to its regions of about $6.7 million and in fiscal year 1997 through April 30 of fiscal year 1998 had obligated only about $53,000. None of the $2.7 million from BA-55 had been provided as regional budget allowances or had been obligated. Also as of April 30, 1998, GSA added about $2.9 million in fiscal year 1998 FBF BA-54 funds to the regional budget allowance totals, and actual obligations in fiscal year 1998 for security upgrades had increased by $1.698 million to $15.148 million. We did not obtain actual BA-61 obligations for security operations as of April 30, 1998.

A number of problems hindered GSA’s implementation of the security upgrade program. First, GSA officials told us that they believed it was incumbent on GSA to implement as soon as possible security upgrades in as many buildings as possible after the Oklahoma City bombing incident. However, they said they were faced with both limited time and staff to help plan and implement the program, so mistakes were made. Second, GSA faced program funding source uncertainties throughout the upgrade program. Third, many of the initial decisions made about the need for upgrades had to be reevaluated, changed, or cancelled. Finally, many of the initial cost estimates for completing the upgrades proved to be unrealistic. Because of these problems, program implementation was slowed; GSA was unable to meet program goals; and it now estimates that additional funds will be needed in fiscal year 1998 to complete upgrades approved through September 26, 1997. In addition, GSA had not established specific program effectiveness goals, outcomes, or measures, nor had it specified in its performance plan how it intended to verify performance data. Thus, GSA does not know whether or to what extent federal office buildings’ vulnerability to acts of terrorism and other forms of violence has been reduced. FPS regional and headquarters staff told us that the time frames imposed on them for completing building assessments and cost estimates for security upgrades by the DOJ report created a difficult environment for GSA. Thousands of building security committees had to be organized and assisted in determining security upgrade needs fairly quickly. As a result, the quality of these initial efforts may have suffered. Further, with the first anniversary of the Oklahoma City bombing rapidly approaching, GSA wanted to place as much added security as was possible into its buildings by the April 19, 1996, anniversary date because of concerns about further bombings or other acts of violence that might occur. At the same time, FPS had limited staff with which to plan and implement the building security upgrade program, and as of July 31, 1995, FPS employed 972 regional staff, including a force of 376 uniformed Federal Protective Police Officers, 199 physical security specialists, 66 criminal investigators, 331 other staff, and a number of contract security guards.
In March 1996, PBS documents showed that it planned to hire 150 more police officers as the result of a study that showed that PBS needed 508 additional regional staff—347 police officers, 26 physical security specialists, 26 criminal investigators, and 109 other staff—to support the enhanced security levels stemming from its implementation of the security upgrades recommended by the DOJ report. From the beginning of the upgrade program, according to FPS staff and a member of the DOJ report task force from the U.S. Marshals Service, there was little time available to develop the desired level of implementation guidance and training for FPS staff and the thousands of BSCs. Further, FPS staff said that the ratio of GSA-operated buildings to FPS physical security specialists added to the difficulties. For example, in one GSA region, we were told that the region had responsibility for about 1,000 buildings but had only 15 FPS physical security specialists available to assist BSCs with the building risk assessments. Nationwide, a total of about 200 FPS physical security specialists were responsible for assisting in the assessment of over 8,000 GSA-operated buildings. An FPS official told us that in this challenging environment—deadlines, staff reductions, and significant levels of effort required by many players—it was not surprising that program implementation mistakes occurred. However, the FPS official believed that GSA has taken great strides in significantly improving the level of security in its buildings. According to GSA officials, uncertainties over where funds could be obtained to purchase and operate security upgrades have hindered program implementation. In addition, concerns about the availability of funds for the program contributed to FPS’s decisions to delay approval of some types of more costly upgrades requested by BSCs and to place those requests into a “pending” status. Some of these pending requests were subsequently cancelled and voided or removed from the building security upgrade program by FPS because of funding uncertainties. Further, GSA and OMB have not yet reached agreement on how best to fund all the costs of the security program in the future. By February 1996, GSA had received requests for security upgrades from thousands of BSCs. Although GSA had established the implementation of the building security upgrade program as one of its top priorities, GSA faced the challenge of identifying and obtaining funds for acquiring and operating the security upgrades during a period when overall federal budget constraints and uncertainties existed. No funds were included in GSA’s fiscal year 1996 budget for maintaining security at the enhanced levels that began immediately after the bombing in Oklahoma City, or for funding the security upgrades requested by the BSCs. Further, GSA was experiencing a shortfall in the Federal Buildings Fund (FBF) because of an overestimation of rental revenue from federal agencies due to several reasons. According to GSA officials, delays in congressional approval of many federal agencies’ fiscal year 1996 appropriations were occurring and adding to the uncertainties of how the upgrades were to be funded. 
Without knowing the available funding that could be expected from the FBF and/or customer federal agencies, GSA officials said that the agency had to proceed with what information was available in making program decisions, setting program priorities, and working to complete upgrades, while recognizing that planning and implementation adjustments would be necessary. On February 29, 1996, the GSA Administrator asked tenant agencies to help fund the security upgrade program. He stated that within its own funding constraints, GSA had been paying for certain security enhancements, primarily additional contract guard services, for the past 9 months. He asked the tenant agencies to reimburse GSA about $84 million for these costs in fiscal year 1996. He further stated that he would commit GSA to provide $79.5 million from the FBF to pay for the acquisition costs of security upgrades in fiscal year 1996. GSA also requested congressional authority to reprogram $119.8 million in fiscal year 1996 FBF funds from other planned building activities and to use these funds for the security program: (1) $40 million from GSA’s installation acquisitions payment activity and (2) $79.8 million from the building repairs and alterations program consisting of $13.5 million from the Internal Revenue Service Center modernization project, Holtsville (Brookhaven), New York; $49.3 million from the chlorofluorocarbons replacement program; $12.6 million from the energy reduction program; and $4.4 million from the basic building repairs and alterations program. In April 1996, GSA received congressional approval from the cognizant House and Senate Appropriations Subcommittees to reprogram the $119.8 million in funds previously made available for other FBF programs.

According to GSA, in its fiscal year 1997 appropriation, Congress directed GSA to spend about $240 million from the FBF for the building security upgrade program—$175 million for the operations costs of security upgrades and $65 million for the capital costs of security upgrades. However, because of GSA’s overestimation of FBF revenues, GSA made available only about $130 million of the $175 million from the buildings operations program for security operations. Thus, in fiscal year 1997, a total of about $195 million was made available from the FBF for the security program. According to GSA, for fiscal year 1998, Congress appropriated about $130 million for the operations costs of security upgrades, but GSA did not request nor receive from Congress any additional capital funds for the building security upgrade program. However, because additional security upgrade requests were received from BSCs in the last half of fiscal year 1997, and because additional funds were needed to complete previously approved upgrades, GSA determined in October 1997 that it could need an additional $7.8 million in fiscal year 1998 to complete upgrades approved as of September 26, 1997. GSA planned to obtain these additional capital funds through a reprogramming of funds from other fiscal year 1998 FBF accounts. Funding concerns also caused some requested upgrades to be voided. In addition, FPS decided that security measures, such as fire suppression and fire detection systems, would be considered separate and apart from the building security upgrade program. As recommended by the DOJ report, GSA has been working with OMB to increase future FBF revenues to more closely approximate its expenditures for the GSA security program at its upgraded level.
However, GSA and OMB have not yet reached complete agreement on how and when to increase the rent that GSA charges tenant federal agencies so that rental revenues will be sufficient to pay for the costs of GSA’s building security program. Rent that GSA charges federal agencies for space and services it furnishes is set by the GSA Administrator, who is authorized by law to charge agencies for furnished services, space, quarters, maintenance, repair, or other facilities. The law states that the rates and charges shall approximate commercial charges for comparable space and services. The law does not require that GSA’s rental charges be based on its actual costs of providing the space and services, which include security. Thus, GSA’s rental charges are based primarily on GSA’s periodic market price appraisals for comparable space, not on GSA’s actual costs to provide the space. GSA’s practice when determining the amount of rent to charge federal agencies has been to include a charge for security. This fee consists of two components: (1) the basic service charge of 6 cents per square foot that, coupled with other funds from the FBF, is used for control center operations, criminal investigations, protective services activities, and administration of FPS programs; and (2) a building-specific fee that is used along with other funds from the FBF to pay for commercial equivalent items, such as contract security guard services, and security alarm systems’ installation and maintenance. According to GSA, because GSA’s expenditures for security have historically exceeded the amount charged to agencies for security, the FBF has absorbed the excess expenditures. GSA estimates that its expenditures for security will exceed the security charges billed to tenant agencies by about $540 million. GSA has projected about $260 million in obligations for fiscal year 1998 and has budgeted about $251 million for fiscal year 1999 for building security. GSA projects that its obligations for security will exceed security-related revenue by about $228 million in fiscal year 1998 and by about $112 million in fiscal year 1999. The DOJ report recommended that GSA consider increasing rents to cover the added costs of upgrading security. GSA is required to obtain OMB approval for the rent it charges federal agencies. GSA and OMB officials said they were not in a position to increase rents in fiscal years 1996 and 1997 to help pay for increased costs of security because agencies need to know their rent costs at least 2 years in advance to provide sufficient time for annual budget development and approval. GSA requested an increase in rents for fiscal years 1998 and 1999 as part of a comprehensive effort to redesign its system for determining rent charges and fees for services such as security. As a pilot project, OMB approved part of the requested rent increase—an increase in building-specific fees to recover the cost associated with security operations for new lease agreements made in 1998, and for all leases beginning in fiscal year 1999. OMB also allowed GSA to increase its basic service charge for security from 6 cents to 16 cents per square foot for new lease agreements made in 1998 and 1999. According to GSA and OMB officials, OMB did not allow GSA to increase the basic service charges for existing leases for fiscal years 1998 and 1999 because a comprehensive rent reform proposal was under development by GSA. These officials expect the rent reform proposal to be completed by the end of fiscal year 1998.
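To illustrate the two-part security charge described above, the following sketch combines the per-square-foot basic service charge with a building-specific fee. The 6-cent and 16-cent rates are those discussed in this statement; the square footage, the building-specific fee amount, and the simple new-lease/existing-lease distinction are illustrative assumptions only, not GSA's actual billing method.

# Rough sketch of the two-part security charge: basic per-square-foot rate plus a building-specific fee.
def annual_security_charge(square_feet: float,
                           building_specific_fee: float,
                           new_lease: bool) -> float:
    basic_rate = 0.16 if new_lease else 0.06   # dollars per square foot
    return square_feet * basic_rate + building_specific_fee

# A hypothetical 100,000-square-foot building with a $50,000 building-specific fee.
print(annual_security_charge(100_000, 50_000, new_lease=False))  # 56000.0
print(annual_security_charge(100_000, 50_000, new_lease=True))   # 66000.0

As the example suggests, raising the basic service charge alone recovers only part of the shortfall for a given building; the building-specific fee carries the costs of items such as contract guard services.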
Also, they are continuing to discuss how agencies’ rent charges will reflect GSA’s costs for security in fiscal year 2000 and beyond. Decisions about funding GSA’s security program are complex and involve tradeoffs among competing needs and funding sources. These decisions are important for both federal agencies and the FBF. There are a number of options for addressing the security funding issue. These include allowing the FBF to continue to fund the excess security costs, decreasing expenditures for security, or increasing revenues by either raising security charges or obtaining additional direct appropriations to cover the shortfall. The option or options selected could affect the government’s investment in the existing inventory of federal buildings as well as GSA’s ability to meet the government’s future space needs.

Another problem affecting the implementation of the security upgrade program was the need to reevaluate the initial decisions about building security upgrade needs. According to GSA officials, many of these decisions were changed, or even cancelled and voided, for a number of reasons. Because of these changes, it was more difficult for GSA to order priorities, allocate funds, and set realistic completion schedules and goals; delays and inefficiencies in the program resulted. Also, these changes created challenges for GSA in maintaining the reliability of the tracking system, and in some cases, the system was not updated to reflect the changes. Security upgrade decisions often had to be reevaluated, changed, or voided because of several building-unique issues that surfaced after GSA’s initial efforts to identify building security upgrade needs. For example, many GSA-owned and -operated buildings are considered “historic.” For some of these buildings, issues raised by historical societies about the effects of installing certain security upgrades had to be addressed by GSA. In one region we visited, GSA had to find an alternative method for mounting surveillance cameras for monitoring a building’s outside perimeter because of concerns raised by the historical society about the adverse effects of mounting the cameras on the building. GSA decided to attach the cameras to poles near the building instead of to the building itself. This alternative method for utilizing the cameras to upgrade security required additional design work, time, and cost for GSA. There were other instances in which building owners and/or nongovernment tenants in GSA-leased buildings expressed concerns or objections to approved security upgrades, such as the use of magnetometers to screen people entering the building. Some approved and some completed upgrades subsequently had to be cancelled. During our building site visits, both we and the GSA OIG staff found examples of approved, and sometimes completed, upgrades that were voided because of subsequent reevaluations. For example, SSA officials raised concerns about certain upgrades placed in SSA-occupied buildings; according to GSA, those upgrades were the ones requested by each building’s BSC. SSA, however, said that the BSCs sometimes requested upgrades without sound security knowledge and the presence and oversight of a GSA physical security specialist. After negotiating with SSA, GSA removed upgrades from some SSA locations and agreed to assess the need for upgrades at other locations. In still other instances, security upgrades requested and/or approved required extensive discussion, coordination, and/or approvals from local municipalities prior to completion.
Examples of security upgrades involving these situations included perimeter barriers, such as planters and concrete bollards, that were to be placed on sidewalks or curbs owned by cities or other municipalities, or cases where city-owned parking meters along the streets around the GSA-operated building were to be eliminated. Changes to approved security upgrades were also necessitated when one or more federal agency tenants moved out of or into a building, thus changing the security needs of the building. Also, as GSA acquired new space in buildings not previously assessed, the related security needs had to be assessed and addressed. Further, FPS staff told us that some BSCs that initially did not request security upgrades later reconsidered and requested upgrades for their buildings. We noted in the upgrade tracking system a number of requests for upgrades initiated between April 1 and December 30, 1997. During this period, over 800 new approved requests for upgrades, totaling about $20 million in estimated capital costs and $11 million in estimated annual operating costs, were recorded in the upgrade tracking system. The extent of these upgrade program changes is reflected in the upgrade tracking system by changes in the number of buildings with approved upgrades, the number of approved upgrades, the number of completed upgrades, and the number of cancelled upgrades. According to the tracking system, the number of buildings and the number of approved security upgrades decreased between March 25, 1996, and December 30, 1997—from 2,789 total buildings with 8,577 approved upgrades to 2,564 buildings with 7,885 approved upgrades. Also, the number of completed upgrades and the number of voided upgrades reported increased from 6,194 to 6,841, and from 3,373 to 3,652, respectively, between August 29, 1997, and December 30, 1997. FPS’s security upgrade tracking system did not provide upgrade program managers with reliable cost estimates for completing and operating security upgrades because initial cost estimates shown in the system often did not reflect building-specific installation requirements or other factors affecting cost. Although GSA regional staff developed more accurate cost estimates as upgrades were completed, the upgrade tracking system was not designed to readily add revised cost estimates to the individual upgrade records. As a result, upgrade cost estimates in the tracking system varied significantly from the actual obligations recorded in the accounting system, thus lessening the tracking system’s effectiveness as a management tool for GSA and BSC program decisionmakers. Without readily available and more accurate cost estimates, BSC and GSA decisionmakers were not in a good position to judge the costs and benefits of various upgrade options or to determine reliable estimates of funds needed to implement and operate the security upgrades. According to GSA, the DOJ report contained general cost-estimating guidelines for certain recommended security upgrades for BSCs’ and FPS security specialists’ use when estimating the costs of needed building security upgrades. FPS recorded the BSC upgrade requests and associated cost estimates in the upgrade tracking system. GSA regional staff developed more accurate cost estimates after the requests were approved, often after further engineering and design work and consideration of building-specific conditions.
Although regional GSA building and contracting staff could have been aware of the revised estimates, FPS did not provide a means for readily including the more accurate cost estimates in the upgrade tracking system. FPS regional staff told us that updated estimates could have been shown in the tracking system by voiding the existing upgrade record and then creating a new upgrade record with the revised cost estimate, but this alternative was not often employed. In August 1997, FPS headquarters staff identified in the tracking system 98 buildings for which the estimated cost of the upgrades varied significantly from the actual obligations incurred. They found that the estimated capital costs of these upgrades totaled about $10.4 million compared to $29 million in obligations recorded in the accounting system for these upgrades—a difference of $18.6 million, or 179 percent. FPS asked its regional offices to explain why the estimates were so low. For example, for seven of these buildings, FPS regional explanations to headquarters indicated that additional costs of $2.3 million were obligated to complete the upgrades because of unexpected problems: for six historic buildings additional costs had to be incurred, including three buildings where closed circuit television cameras had to be mounted on poles rather than attached to the buildings ($1.1 million in additional costs); and for one building, while installing barriers around the building, old fuel oil tanks were discovered and had to be removed ($1.2 million in additional costs). Our further analysis of FPS data obtained from the tracking system and accounting system in September 1997 showed that the estimated costs of upgrades approved for 551 buildings in 11 GSA regions varied significantly, both up and down, from the actual costs obligated to complete the upgrades. For 348 buildings, the cost estimate of the upgrades totaled $18.2 million more than the actual costs obligated; for 202 buildings, the estimated costs were $14.3 million less than the actual costs obligated; and for 1 building, the estimated costs equaled the actual costs obligated. The DOJ report called for GSA to complete security assessments and upgrade cost estimates by October 15, 1995, for its high-risk (level-IV) buildings, and by February 1, 1996, for the remaining lower risk buildings (levels I-III). Although the DOJ report did not specify goals for GSA’s completion of the security upgrades, GSA established and subsequently revised goals for completing upgrades in level-IV and lower level buildings several times over the last 2 years. However, GSA did not fully meet the goals for completing security assessments called for in the DOJ report, nor did it meet goals it established for the completion of the security upgrades. Although GSA reported that it had completed security assessments for most of its buildings no later than the target dates of October 15, 1995, and February 1, 1996, we found indications that not all of GSA’s buildings, including some level-IV buildings, had been evaluated for upgrade needs until after November 1995. GSA reported to us that by March 1996 the number of level-IV buildings had increased to over 700. GSA stated that the increase was partly because DOJ requested GSA to reclassify certain buildings containing court-related tenants from lower levels to level IV, and partly because additional level-IV building security committees conducted building evaluations and provided GSA with upgrade requests after November 1995. Concerning GSA’s internal goals, GSA initially established a goal to have all security upgrades completed for level-IV buildings by September 30, 1996, but GSA did not meet this goal.
Subsequently, GSA established a new goal to have upgrades completed in all buildings by September 30, 1997; this goal was not met either, and now GSA’s goal is September 30, 1998, for completing all upgrades approved as of September 26, 1997. GSA’s tracking system indicated that GSA had completed about 85 percent of the approved upgrades for all building levels by October 3, 1997, and reached the 90-percent mark by March 31, 1998. GSA has not established several key program evaluation mechanisms for its building security program that could assist it in determining how effective its security program has been in reducing or mitigating building security risks or in shaping new security program initiatives. These mechanisms are (1) establishing specific goals, outcomes, and performance indicators for the security program, such as reducing the number of thefts or unauthorized entries; (2) establishing and implementing systematic security program evaluations that would provide feedback on how well the security program is achieving its objectives and contributing to GSA’s strategic goals; and (3) ensuring that a reliable performance data information system is in place. GSA has established goals and measures for its security program both apart from and in connection with the Government Performance and Results Act of 1993 (the Results Act). However, these goals and measures are output or activity oriented. They do not address the outcomes, or results, expected to be achieved by the security upgrade program as envisioned by the Results Act and encouraged by OMB. For example, GSA has stated: “In the wake of the Oklahoma City bombing, GSA has bolstered all of its security systems. To ensure that we have the highest levels of security in place, we are implementing all the security measures recommended in the Justice Department’s Vulnerability Assessment of Federal Facilities.” In its first annual performance plan under the Results Act for fiscal year 1999, dated March 5, 1998, GSA identified the following two performance goals: (1) implement all security measures recommended in the Department of Justice’s Vulnerability Assessment of Federal Facilities, and (2) provide for the safety of workers and visitors in GSA space. Further, GSA identified as performance indicators the percentage of security countermeasures completed in levels I-III and level-IV buildings. This indicator serves as a measure of the program’s output, but no indicators were identified that would enable measurement of program outcomes, particularly relating to GSA’s second performance goal for the security program. Indicators based on such security incidents as the number of building break-ins, reductions in the number of thefts, and the number of weapons and other prohibited items detected on persons and in packages are some examples that might be considered in setting performance outcome goals and indicators. Under FPS’s building inspection program, regional physical security specialists would periodically inspect security at GSA-operated buildings, complete building risk assessments based on established criteria, and recommend security improvements. According to an FPS official, this inspection program was curtailed after July 1995 so that the regional physical security specialists could focus on assisting the BSCs in determining building security needs based on the DOJ report’s recommended minimum security standards.
After the Oklahoma City bombing, an October 1995 internal “lessons learned” report made 30 recommendations for improving aspects of GSA’s building security operations, including a recommendation that GSA conduct a comprehensive review of its current risk assessment methodology to ensure that a wider range of risks was addressed, with increased emphasis on acts of mass violence. Specifically, the recommendation was that GSA’s current risk assessment methodology, which addressed the safety of federal workers from theft and assault, be revised to one that addresses acts of terrorism and other violence. The principal conclusion of the report was that GSA’s security and law enforcement processes then in place did not adequately address the threat environment. In a November 25, 1997, progress report that FPS sent to the PBS Commissioner, FPS reported that actions on 20 of the 30 “lessons learned” recommendations had been completed. However, FPS had not yet completed action to review and modify its risk assessment methodology. Although the November 1997 progress report stated that FPS planned to complete actions on this recommendation by the 4th quarter of fiscal year 1998, we believe that action on this very significant recommendation should be completed as soon as possible. The recommendations completed by FPS related to security program aspects such as contingency planning for emergencies and disasters involving criminal activities and acts of mass violence, as well as intelligence sharing between agencies with security-related missions. Completion of the recommended revisions to its building risk assessment methodology and the resumption of FPS’s periodic building inspection and risk assessment program would provide updated evaluations on a building-by-building basis of how well security measures have operated and whether they continue to be appropriate for future threats that may arise. Further, these evaluations could form the basis for overall evaluations of the building security program and provide data for GSA’s annual performance measurement and evaluations under the Results Act. The Results Act requires GSA to describe in its annual performance plans the means to be used to verify and validate the performance measures it intends to use to determine whether it met its performance goals. GSA’s 1999 annual performance plan contains a general description of how it intends to verify performance data, including audits of its financial records and systems and high-level quarterly meetings to review financial and programmatic results. However, GSA’s description does not identify specific controls to be used to verify and validate performance data on an ongoing basis. Such controls could include periodic data reliability tests, computer edit controls, and supervisory reviews of data. The significant problems we and GSA’s OIG have identified with GSA’s data on its progress in, and costs associated with, implementing the security upgrade program suggest that a more detailed discussion in GSA’s Year 2000 performance plan of the specific means GSA intends to use to verify and validate security program data would be helpful. The accuracy of the data in GSA’s tracking system is particularly important because Executive Order 12977, dated October 19, 1995, requires GSA to coordinate efforts to establish a governmentwide database of security measures in place at all federal facilities.
Further, an accurate reflection of the status of the security upgrade program and its cost will provide GSA, OMB, and Congress with important information needed for determining how much money has been spent on the program and how best to fund the costs of upgrades still needed.

GAO discussed the General Services Administration's (GSA) progress in upgrading the security of federal buildings under its operation, focusing on: (1) what criteria GSA used to assess security risks and prioritize security upgrades for its buildings; (2) the implementation and operational status of GSA's security upgrade program and the costs GSA has incurred by both funding source and type of security upgrade; and (3) whether any problems have hindered GSA's implementation of the security upgrade program. GAO noted that: (1) GSA used the Department of Justice (DOJ) report's criteria to assess risks and prioritize security upgrades in its buildings; (2) despite the formidable challenges posed by this program, GSA has made progress implementing upgrades in federal buildings throughout the country, particularly in its higher risk buildings; (3) GSA's data systems indicate that about 7,000 upgrades were completed and it estimates that roughly $353 million were obligated from the Federal Buildings Fund for the upgrade program nationally between October 1, 1995, and March 31, 1998; (4) however, mistakes made by rushing to meet the timetables in the DOJ report because of GSA's sense of urgency to upgrade security in its buildings, reduced staffing due to downsizing, data reliability problems, and uncertain funding sources have hindered GSA's upgrade program implementation; (5) because of data reliability problems, neither GSA nor GAO can specify the exact status or cost of the building security upgrade program, and because GSA has not established program outcome measures, neither GSA nor GAO knows the extent to which completed upgrades have resulted in greater security or reduced vulnerability for federal office buildings; and (6) GSA is not in a good position to manage its program to mitigate security threats.
The Visa Waiver Program was created by legislation in 1986 to allow visa-free travel in some instances to citizens of select countries. According to State, the program facilitates international travel for foreign nationals seeking to visit the United States each year and allows State to allocate resources to visa-issuing posts in countries with higher-risk applicant pools. The program accepted its first participant country—the United Kingdom—in 1988. Currently, 27 countries participate in the program. See figure 1 for a map of the VWP countries. See table 1 for a list of current VWP countries and the average number of travelers to the United States and average number of visas issued from 2001 to 2007. This table demonstrates that most citizens from these countries who travel to the United States do so through the Visa Waiver Program, rather than by obtaining a visa. The Visa Waiver Program affects U.S. security, trade, commerce, tourism, diplomatic, and other interests. Our previous work has found that eliminating the program could have significant negative effects on these interests, as well as on U.S. relationships with VWP countries. For example, if the United States decided to eliminate the program, the affected countries would likely reciprocate and require Americans to obtain visas before visiting their countries. State has the authority, by law, to charge a fee for visas it issues to foreign nationals. According to State, the department attempts to recover its costs for processing a visa with its fee, but the fee does not cover all associated processing costs. Consular Affairs officials said this cost recovery includes the direct costs of the activity, such as the costs of collecting biometric information, conducting name checks, interviewing applicants, conducting follow-up investigations if necessary, and printing the visa. It also includes what Consular Affairs officials called indirect costs, that is, the percentage of an overseas visa-processing staff position’s time that is spent processing visas. The visa fee is neither meant nor used to cover the costs of facilities used to process visas, according to State. The global demand for U.S. visas has grown substantially in recent years, and State expects it to continue to grow in the foreseeable future. In 2006, according to State, U.S. embassies processed over 8 million visa applications and issued 5.84 million visas worldwide, which included the more than 700,000 visas issued in Road Map countries. This included short-term business and tourism visas, as well as visas for students, temporary workers, foreign exchange visitors, and other visa types. As we reported in July of 2007, State has had difficulty meeting growing visa demand over the long term, which has led to operational challenges, including long wait times. We found that, though State has attempted to address this demand by adding and reallocating staff worldwide, even with the increased staffing, State has not been able to keep pace with visa demand. In addition, we testified in August of 2007 that State’s initiative to address its staffing shortages did not fully meet its goals and staffing shortfalls remained a problem. Some members of Congress have stated and agency officials have acknowledged that the Visa Waiver Program presents security risks, citing terrorist attacks and plots involving VWP travelers. Zacarias Moussaoui, who was convicted of conspiracy in connection with the terrorist attacks against the United States on September 11, 2001, entered the country under the program.
In addition, since the 9/11 attacks, there have been some high-profile terrorist plots emanating from VWP countries. In December of 2001, British citizen Richard Reid, flying to the United States under the Visa Waiver Program from France, attempted to detonate explosives midflight, but was prevented from doing so by flight attendants and passengers. In August of 2006, U.S. and British security officials announced the disruption of a plot by British citizens to use liquid explosives to blow up multiple airliners during flights to the United States. Finally, as noted earlier, in September of 2007, the Director of National Intelligence testified that Al Qaeda is recruiting Europeans because many of them do not require a visa to enter the United States. The director noted that this recruiting tactic provides Al Qaeda with an “extra edge in getting an operative or two or three into the country with the ability to carry out an attack that might be reminiscent of 9/11.” In 2005, President Bush announced plans to work with 13 Road Map countries to facilitate their eventual entry into the Visa Waiver Program. Figure 2 shows the 13 Road Map Initiative countries. See table 2 for a list of current Road Map Initiative countries and those countries’ average number of travelers to the United States and average number of visas issued from 2001 to 2007. This table shows the number of travelers from Road Map countries to be around 1 million per year. In August of 2007, Congress passed the 9/11 Act, which authorizes the Secretary of Homeland Security, in consultation with the Secretary of State, to waive the low nonimmigrant visa-refusal rate requirement for countries that meet certain conditions enumerated in the act, including law enforcement and intelligence conditions. For example, countries must cooperate with the United States on counterterrorism initiatives. However, before the Secretary of Homeland Security can exercise this new authority, the 9/11 Act requires that the department complete certain actions aimed at enhancing the security of the program. One of these required actions is that the Secretary of Homeland Security must develop and certify the implementation of ESTA in VWP countries. According to DHS, ESTA will allow DHS to screen citizens from VWP countries who wish to travel to the United States before they depart for U.S. ports of entry. Officials told us that DHS will advise applicants to go online at least 72 hours before the date they plan to depart for the United States in order to complete the ESTA application, which will electronically collect information similar to that collected on the paper form that all VWP travelers currently present to CBP officers upon arrival at a port of entry. According to DHS, after an application is submitted, the department determines the applicant’s eligibility to travel under the Visa Waiver Program and whether permitting the applicant to travel to the United States under the program would pose a law enforcement or security risk. To the extent possible, DHS says, applicants will find out almost immediately whether their travel has been authorized, in which case they are free to travel to the United States, or whether their application has been rejected, in which case they are ineligible to travel to the United States under the Visa Waiver Program. Those found ineligible to travel under the Visa Waiver Program must apply for a visa at a U.S. embassy in order to travel to the United States.
At the embassy, foreign citizens whose ESTA applications were rejected apply for a visa and pay the visa fee and are either approved to travel to the United States or denied. Figure 3 demonstrates how ESTA will work, according to DHS and State. Elimination or suspension of the Visa Waiver Program could cause dramatic increases in the demand for visas that could overwhelm visa operations in the near term. To meet visa demand, State would need substantially more staff and facilities. State also would receive large increases in the amount of visa fees collected, which would offset the costs for staff, but not the cost of additional facilities. State has conducted limited planning to address the potential impact of Visa Waiver Program elimination or suspension. Elimination or suspension of the Visa Waiver Program could cause dramatic increases in the demand for visas. We estimate that, given existing travel patterns, the annual demand for visas at all VWP posts combined could jump from over 500,000 to as much as around 12.6 million, a level of demand that would overwhelm existing staffing and facility resources. For example, the U.S. mission in Japan, a post that is accustomed to processing around 94,000 visas per year, could find 3.3 million potential travelers seeking visas if the program were eliminated. Even countries that have smaller numbers of annual travelers to the United States could see substantial demand increases. For example, in Singapore, the U.S. embassy—accustomed to processing around 9,000 visas per year—could see visa demand grow almost 11-fold to nearly 100,000. If the Visa Waiver Program were eliminated or suspended, State officials told us that existing visa staffing resources would be unable to meet the new visa demand. As a result, State would need a substantial increase in staff to process visas to meet the increased workload. State officials told us that, over the long term, they would likely hire Foreign Service officers and Foreign Service national staff to support those Foreign Service officers, though these officials acknowledged they have not evaluated all of the options for meeting their staffing needs for this purpose. At three U.S. embassies we visited in VWP countries—Japan, France, and Spain— embassy officials told us they would need hundreds of new Foreign Service officers to perform visa interviews and adjudicate visa applications for the millions of new visa applicants arriving at the embassies. In Japan, for instance, we estimate the embassy would need at least 134 new Foreign Service officers (an increase of 515 percent above the current visa Foreign Service officer workforce of 26) to meet the expected increased visa demand of over 3.3 million new applicants, as well as around 334 new Foreign Service nationals (an increase of 451 percent over the existing Foreign Service national workforce of 74). We estimate that State would have to hire around 540 new Foreign Service officers worldwide, at an estimated cost of between $185 million and $201 million per year. In addition, State would have to hire around 1,350 new Foreign Service national staff worldwide, which would cost around $168 million to $190 million per year. Finally, State told us these new overseas positions would need the management and support of additional staff overseas and in Washington; we estimated these costs at $93 million to $111 million per year. Over a 10-year period after the elimination of the program, these costs would total between $4.4 billion and $4.9 billion. 
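As a rough illustration only, the percentage increases and cost figures cited above follow from simple arithmetic on the numbers in this report; the per-officer figure in the sketch below is a derived value introduced here for illustration, not an estimate stated in the report.

```python
# Illustrative arithmetic only; the inputs are the figures cited above, and the
# per-officer cost is a derived value (an assumption for illustration), not a
# number stated in the report.

# Japan: new staff needed relative to the current visa workforce.
print(f"FSO increase: {134 / 26:.0%}")   # about 515 percent
print(f"FSN increase: {334 / 74:.0%}")   # about 451 percent

# Worldwide recurring staffing costs, in millions of dollars per year.
low = 185 + 168 + 93      # officers + national staff + support, low estimates
high = 201 + 190 + 111    # officers + national staff + support, high estimates
print(f"Rough annual staffing cost: ${low} million to ${high} million")
# GAO's combined figure of $447 million to $486 million is of the same order;
# its range reflects GAO's own scenario combinations, not a simple low/high sum.

# Implied annual cost per new Foreign Service officer (derived, not stated).
print(f"Cost per officer: ${185e6 / 540:,.0f} to ${201e6 / 540:,.0f} per year")
```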
State officials told us that a hiring effort of this kind would be historic in its scope and very difficult to undertake. Both Consular Affairs and embassy officials told us they were unsure exactly how State would accomplish a hiring increase of this magnitude, given current staffing and funding levels. Consular and embassy officials told us in the event that only one of the larger current VWP countries were dropped from the program, State may be able to provide a surge of staff on temporary duty to cope with the immediate spike in visa demand in that country, though State had not developed any plans on how to do so. However, these officials noted, with all 27 countries, or even several of the larger VWP countries dropped from the program, it would be impossible to meet new demand with existing staff. Moreover, consular officials stated that it could be very difficult for State to hire and for its Foreign Service Institute to train the number of staff that would be needed in a short period of time. For instance, from fiscal year 2005 to 2007, State produced around 300 to 400 new Foreign Service officers per year for all global State needs. As many as 200 to 300 of these officers entered into consular functions upon their initial deployment overseas to enable State to meet recent year-to- year visa demand. State, however, would need around 2 to 3 times this number to meet the new demand for visas in the event of VWP elimination. Though Consular Affairs officials told us they have not fully analyzed the extent to which demand increases would impact resources in the event of program elimination, they provided us with information indicating they could need approximately 45 new buildings to handle the increased visa demand. OBO officials said this estimate was generally realistic, based on the potential increases in staffing that could be needed in the event of program elimination and the capacity of buildings to absorb those levels of new staff. However, both OBO and Consular Affairs officials indicated that it was impossible to predict the exact number of facilities that would be needed. OBO, Consular Affairs, and embassy officials told us it would be challenging to build these facilities in a timely and cost-effective manner or, alternatively, to find enough suitable space that could be leased for this purpose. State officials told us they would immediately outgrow their current visa processing space in dozens of posts, and embassy officials in all three VWP countries we visited said they would need new buildings to accommodate visa demand. In fact, these embassy officials noted that a relatively small increase in visa demand would cramp visa waiting rooms and consular adjudicating officer workspace. In addition, embassy officials told us it would be extremely difficult to find suitable land that would provide the space and the setting necessary for building, or leasing, according to State’s building and security standards. Embassy officials in these countries added that land, facility construction costs, and leasing costs in their host countries would be extremely expensive—among the most expensive in the world. OBO officials confirmed that VWP countries’ real estate markets did not offer easy or inexpensive opportunities for building new embassy or separate annex buildings for consular use. 
OBO officials further stated that finding appropriate sites and negotiating sales in this type of environment would not only cost more, but also would likely take longer to accomplish, tying up OBO resources and potentially delaying the point at which visa operations could resume within a U.S. mission. Until long-term arrangements are made to meet facility needs, OBO officials said, the embassy would likely be forced to lease space to accommodate increased visa demand. We calculated that it would cost between $3.8 billion and $5.7 billion to construct 45 new facilities in these countries. This total cost of $3.8 billion to $5.7 billion would include: (1) new building construction costs, (2) the costs of leasing temporary facilities to accommodate visa operations while new facilities are constructed, (3) facility operations and maintenance costs, and (4) the cost of additional OBO staff. OBO officials told us that if these 45 buildings supplanted those already scheduled to be constructed, it could take around 7 years to finish the design, planning, and construction so that visa operations could be conducted there. However, if OBO were to proceed with its existing building schedule in addition to the 45 new facilities, it would require an increase of approximately 400, or around 40 percent more, full-time permanent or temporary contractor staff. If OBO hired a mix of around 400 full-time and contractor staff, it would cost between $648 million and $897 million over the roughly 7 years. OBO officials told us it would be difficult to find that many staff without using contractors. In addition, OBO noted that since the staff would be employed for only that 7-year period of time and phased out at the end of this period, there would be an advantage to hiring contractors since OBO would not have to provide them with benefits or severance pay. In addition, we estimated the cost of leasing facilities to accommodate dramatically increased visa operations over this 7-year construction period would be between $226 million and $416 million. While staffing and facilities needs would increase if the Visa Waiver Program were eliminated, this scenario also would increase the number of travelers needing a visa; as a result, we estimate visa fee revenues would increase substantially. Using the current fee of $131 per application, we calculated the increase in State’s visa revenue to be $1.7 billion to $1.8 billion per year. We estimate that this increased revenue would offset the year-to-year recurring costs associated with new staff. However, since visa fees would not be collected until the end of the first year, the initial annual staffing costs of $447 million to $486 million would not be offset by visa fees. In addition, visa fee revenues would be far less than the costs for facility construction and, further, are not used for the purpose of offsetting facility construction costs—the largest portion of the initial costs—or the year-to-year facility maintenance costs on those facilities. Though State has made efforts to address long-term growth in visa demand, as noted earlier, the department has conducted limited planning to address the significant operational challenges that could result from the potential elimination of the Visa Waiver Program. State Department and federal guidelines have highlighted the importance of planning for potential program changes that could impact operations. This is particularly important given that any significant disruptions to U.S. 
visa operations could have severe repercussions on U.S. travel, trade, business, tourism, and diplomatic interests. State’s Performance Plan for the Bureau of Consular Affairs for 2007 sets a performance goal of “Proper Visa Adjudication” and acknowledges that challenges that could seriously impede progress would include “an extended disruption of international travel and any significant change in participation in [the Visa Waiver Program].” State and embassy officials told us during our review that the elimination of the Visa Waiver Program could cause such disruptions of international travel. According to internal control standards for the federal government, once an agency has set its objectives and identified the risks that could impede the efficient and effective achievement of those objectives at the entity level and the activity level, the agency should analyze those risks for their possible effect. Management then should formulate an approach for risk management and decide upon the internal control activities required to mitigate those risks and achieve the internal control objectives of efficient and effective operations. Although State has identified risks, it has not developed a plan for how it would deal with these risks. State has conducted limited planning to prepare for the possibility of Visa Waiver Program elimination, though such a scenario could create significant operational challenges for State, as visa demand would dramatically increase. Specifically, several years ago, State undertook some preliminary thinking about the scenario—including the general magnitude of the resource challenges that would be involved—and acknowledged the importance of doing so; we reviewed the documentation provided and found that the limited planning was general in nature, largely outdated, and did not address the full range of the challenges that would arise, particularly how to provide the additional staffing resources and facilities needed. For instance, a memo from Consular Affairs to OBO requests that OBO identify the space requirements that might be needed in the event of VWP elimination. However, there is no information about how OBO would acquire the substantial amount of additional space needed if the program were eliminated. State officials told us they did not think planning for program elimination or suspension was appropriate, given that current U.S. government policy does not support program elimination. However, in September 2007, the Director of National Intelligence testified that Al Qaeda is recruiting Europeans because many of them do not require a visa to enter the United States, which, he noted, provides Al Qaeda with an “extra edge in getting an operative or two or three into the country with the ability to carry out an attack that might be reminiscent of 9/11.” Moreover, DHS, State, and embassy officials have acknowledged the program could be suspended or eliminated in the event of a major attack emanating from a VWP country. If State does not develop contingency plans and the program is eliminated, State could face tremendous challenges addressing staffing and facilities shortfalls. State has had problems dealing with large demand increases in the past. In 2007, when new passport requirements were implemented, State faced substantial increases in demand for passports.
However, because State had not adequately planned for the implications of these new requirements, it faced shortages in staffing and other resources that led to tremendous backlogs that were only addressed when State redeployed domestic and overseas staff and took other emergency measures to address the surge in demand. Consular officials in Washington told us that the impact of the new passport requirements on that process and the resource shortages that State faced in 2007 would be minor compared to the challenges State would face in meeting visa demand in the event that the Visa Waiver Program were eliminated. For instance, given the potential historic level of visa demand that program elimination could bring about and State’s inadequate resources to address this demand, State officials told us they would need to find creative staffing and facility solutions in the short term and, moreover, may need to make choices regarding changes in resource allocation, visa policies, and other considerations to ensure the right balance between security and facilitating legitimate travel. However, State has not developed a plan that identifies its options for meeting the facility and staffing needs described above, or how it would go about making any such changes. State officials at posts we visited in existing VWP countries told us they had not been contacted by their headquarters about undertaking any contingency planning in the event of program elimination. In addition, they said they had not been asked for data or post thinking on the issue, or been provided any information on how post activities, programs, or resources might change given such a scenario. Embassy officials in all three VWP countries we visited told us that our visit and questions had fostered their first real consideration of what was involved in these issues. Consular officials in these posts expressed concern to us that, in the absence of planning for this scenario, visa operations at their posts could be severely disrupted as existing staffing and facilities resources would be overwhelmed. Expansion of the Visa Waiver Program would reduce visa demand in Road Map countries, but have a limited effect on the costs and resources needed to meet the reduced demand and the amount of visa fees received. Expansion would only modestly reduce overall staffing needs. Further, expansion would not bring about significant cost savings from facilities, in part because Road Map countries lack consular or other embassy space that could be sold. If all 13 countries were admitted to the program, we estimate that State would stand to lose approximately $74 million to $83 million each year in collected visa fee revenues. State would likely be able to accommodate program expansion with relatively minimal disruption, therefore requiring limited planning because of the minimal impact expected on staffing and facilities. Visa demand in any Road Map country would decline after the country’s admission to the program. However, visa volume is relatively small in most of the Road Map countries. For instance, the recent visa volume in Estonia and the Czech Republic, two countries currently being considered for expansion, is only around 6,000 visas and 32,500 visas per year, respectively. 
Even if all 13 Road Map countries were to join the program, and if all of those countries’ citizens who previously traveled with visas were to travel to the United States without visas, the total reduction in visa demand would be only around 710,000; more than 400,000 of this reduction would be in South Korea alone. In addition, these posts, many of which already issue relatively small numbers of visas, would continue to experience some demand for tourism and business visas and continued growth in student, long-term work, and other types of visas. Officials told us they generally expect continued visa travel as some percentage of VWP travelers will be rejected by ESTA and directed to apply for a visa at the embassy. Further, consular officials told us they expect that some travelers may choose to obtain a visa rather than travel via the Visa Waiver Program due to the risk of being rejected by ESTA, as well as the greater flexibility a visa offers. For instance, travelers with a 10-year visa can choose to travel at any time during that 10-year period, and also can decide to extend the length of their visit for longer than the 90 days allowed under the program. In addition, consular officials in South Korea stated that they continue to have large increases year to year in other types of visas issued there. So, even if most demand for short-term business and tourism visas were to decrease following acceptance into the program, there could still be significant growth in demand for other types of visas. Consular officials in the Road Map countries we visited stated they expected that their visa volume in the near term would likely be between 20 percent and 40 percent of recent numbers of travelers— between 220,000 visas and 440,000 visas. Given all of these factors that could affect visa demand, consular management officials at locations we visited stated that, if their posts were added to the Visa Waiver Program, they would be reluctant in the near term to lose more than 50 percent of their Foreign Service officers processing visas until they better understand their new staffing needs. Expansion of the Visa Waiver Program would modestly impact overall staffing needs in Road Map countries. Although consular conditions in Road Map countries vary, consular officials in the Road Map countries we visited expect modest reductions in their visa processing staff needs as these countries gain acceptance to the program. For example, in the three Road Map country posts of Athens, Prague, and Budapest, the combined annual visa workload of around 90,000 is relatively small, and consular management there told us that eliminating this workload would result in only around four fewer Foreign Service officer staff among the three countries. In the Czech Republic, where consular officials process around 32,500 visas per year, officials expect they will need only one fewer Foreign Service officer when accepted into the program, and only two to three fewer Foreign Service nationals. Similarly, consular officials expected relatively modest staff reductions for South Korea, where consular staff process around 402,000 visas per year—the highest number of U.S. visas issued among Road Map countries. If South Korea is admitted to the program, consular officials expect to need 6 to 9 fewer Foreign Service officers and 15 to 20 fewer Foreign Service nationals. 
In total, if all 13 Road Map countries joined the program, we estimate that about 21 to 31 Foreign Service officers could be moved to other posts in need of staff and 52 to 77 Foreign Service national positions could be cut. Beyond its limited impact on staffing in VWP Road Map countries, expansion of the program would have even less of an effect on State’s overall worldwide staffing needs and costs. First, for fiscal year 2006, Consular Affairs reported that posts in Road Map countries represented a small portion of total U.S. visa operations; these countries employed 105 Foreign Service officers and 301 other consular employees, less than 10 percent of the over 4,500 consular employees overseas in 2006. Moreover, officials in State’s Consular Affairs Bureau said that Foreign Service officers no longer needed at new VWP posts will not be eliminated but rather transferred to other posts, thereby shifting, not eliminating, costs and revenues generated by those staff. Consular officials noted there is a need for additional staff in other countries with large backlogs of visa applications, such as China, India, and Mexico, and consular officials in South Korea noted that, if the country were accepted into the program, Foreign Service officers not needed would likely be transferred to such posts at the end of their regular tour of duty. Although acceptance into the Visa Waiver Program would reduce total visa processing costs for Road Map countries, State and embassy officials expect increases in some short-term costs, such as severance pay to the displaced Foreign Service national workforce and the costs of hiring new Foreign Service national staff to support the transferred Foreign Service officer staff at their new posts. State and embassy officials noted that short-term costs for severance payments to displaced Foreign Service national employees would vary depending on the laws of individual countries. We estimate that nonrecurring costs of expanding the program to Road Map countries would be between $3.7 million and $4.3 million— with around $3.3 million to $3.9 million to cover severance costs and approximately $385,000 to $476,000 for the costs to hire and train additional supporting Foreign Service nationals in the Foreign Service officers’ new locations. Embassy officials in Road Map countries we visited did not expect dollar savings from reduced facility usage after the countries are accepted into the Visa Waiver Program. In the countries we visited, the visa operations generally occupy a portion of embassy facilities. None of U.S. embassy visa operations in Road Map countries are currently housed in leased space, and therefore no lease savings will accompany reduced visa operations. Visa operations in South Korea, the Road Map country processing the greatest number of visas, are based in an embassy operating above normal capacity, and U.S. embassy officials there stated that any space freed as a result of gaining acceptance to the Visa Waiver Program would simply allow the existing consular space, which currently requires a waiver since it does not meet fire code, to operate under less cramped and strained conditions. Similarly, U.S. embassies in Road Map countries we visited in Eastern Europe do not expect to gain any facility cost savings as a result of joining the program. Acceptance of Road Map countries into the program would significantly reduce the amount of visa fee revenues collected in those countries. 
While embassy officials in all four Road Map countries we visited told us they expected to retain some visa demand in the event their host country entered into the program, they agreed most business and tourism visa demands would significantly decrease. Assuming all 13 Road Map countries were admitted to the program and that most eligible foreign citizens traveled under the program, we estimate that State would lose approximately $74 million to $83 million each year in collected visa fees, generally offsetting any savings from reduced personnel costs. State likely would be able to accommodate program expansion with minimal disruption because of the limited impact expected on staffing and facilities. As a result, preparing for potential program expansion would require little additional advanced planning by State. For example, as noted above, even in South Korea—the Road Map country issuing the highest number of U.S. visas, about 56 percent of all visas issued in Road Map countries—consular officials expected relatively modest staff reductions. Embassy officials in the four Road Map countries we visited told us that, were their host countries’ entries into the program confirmed, they could take steps at the posts to adjust to and accommodate decreased visa demand and its impact on staffing and facility resources. In one Road Map country we visited, embassy officials told us that if they were instructed by State officials in Washington to prepare for immediate admission of the host country into the program, the embassy would make decisions about how to handle the repercussions of decreased visa demand on staffing and on current visa processing facility space. For example, given the decreases in the number of Foreign Service national staff positions needed at post under program expansion, embassy officials said they would try to find other positions for these staff, if possible, to ease the impact on the Foreign Service national workforce and mitigate potential severance costs. In addition, as discussed previously, while embassy officials in all four Road Map countries we visited said they did not think program expansion would significantly impact their mission’s facilities, they noted that any freed space would be easily used by other consular or embassy functions, and such a transition would be planned and implemented by the embassy. ESTA implementation could increase visa demand in existing VWP countries. However, State and DHS are uncertain how many applicants would likely be rejected through the ESTA screening process and therefore required to apply for a visa, and they also are unsure how many potential travelers would choose to get a visa rather than participate in the ESTA screening. State has not developed contingency plans for how it will manage the expected increase in visa demand, citing lack of information from DHS on the effect of ESTA. ESTA implementation will increase visa demand in VWP countries, though the full extent to which it will do so remains uncertain. DHS officials told us that, given CBP’s operational experience administering the Visa Waiver Program as it currently exists, they currently believed that when ESTA is fully implemented, less than 1 percent of all VWP country travelers would be rejected by the ESTA screening. 
DHS officials also told us the rejection rate could be 2 percent to 3 percent in early years, eventually tapering off to 1 percent as the system became more established and travelers became more acclimated to using it, while some officials said it could range as high as 5 percent. However, DHS officials told us they cannot yet determine the rate of rejection from ESTA, because DHS has not yet decided what databases it will use to screen names of ESTA applicants. In addition, given ESTA’s potential for causing last-minute travel disruptions, consular staff at the posts we visited told us they believe an additional unknown number of travelers from VWP countries would choose instead to proactively apply for visas at embassies. For example, around 14 percent of annual entries to the United States from VWP countries are made by repeat travelers; one senior consular official estimated that these travelers who visit the United States multiple times a year may prefer to travel using a visa rather than through the program. Most officials predicted that the percentage of travelers who choose to obtain a visa could exceed potential ESTA rejection rates of 1 percent to 3 percent. State officials told us that if even a small percentage of current travelers from larger VWP countries sought visas, the influx could significantly disrupt visa operations at U.S. embassies. For example, if 1 percent of the United Kingdom citizens who currently travel to the United States without visas needed to or chose to apply for a visa, visa demand there could increase by 35,000 per year, or around a 31 percent increase in visa workload. Further, embassy officials in the three VWP countries we visited told us that if 3 percent of current visa waiver travelers applied for visas, it would result in visa demand that would overwhelm their current staffing and facilities. However, DHS acknowledges that it does not know how many travelers may prefer to directly seek a visa rather than participate in ESTA-approved travel, and it acknowledges that ESTA rejection rates and the rate of voluntary visa travelers may vary by country. We developed a series of estimates of ESTA’s potential impact on demand and consular staffing needs, as well as on visa fees. Though State officials provided us with data to support these estimates, they did not provide data on ESTA’s impact on facility costs. A State official told us that predicting ESTA’s impact was very difficult, particularly for facilities, and that developing such data for facilities would require an extensive analytical effort involving multiple offices within State, particularly OBO, as well as extensive management involvement—and, moreover, would take a long time. Our estimates in table 3 cover scenarios in which 1, 2, 3, 5, or 10 percent of travelers who currently travel under the Visa Waiver Program come to U.S. embassies for a visa; they take into account increases in visa applications due to ESTA rejections as well as from travelers choosing voluntarily to apply for visas, and they show the potential increases in visa demand globally and for the largest VWP countries as a result of ESTA implementation.
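To make the scale of these scenarios concrete, the following sketch backs the United Kingdom figures out of the numbers cited above (35,000 new applications equaling 1 percent of current VWP travelers and roughly a 31 percent workload increase); the derived traveler and workload totals are assumptions for illustration, not reported statistics.

```python
# Back out the figures implied by the United Kingdom example above.
# Derived values are assumptions for illustration, not reported statistics.

new_applications = 35_000                             # stated: 1 percent of UK VWP travelers
implied_uk_vwp_travelers = new_applications / 0.01
implied_uk_visa_workload = new_applications / 0.31    # stated: about a 31 percent increase

print(f"Implied UK VWP travelers per year: {implied_uk_vwp_travelers:,.0f}")  # ~3.5 million
print(f"Implied current UK visa workload:  {implied_uk_visa_workload:,.0f}")  # ~113,000

# The same proportional logic underlies the 1 to 10 percent scenarios in table 3.
for share in (0.01, 0.02, 0.03, 0.05, 0.10):
    extra = implied_uk_vwp_travelers * share
    print(f"{share:.0%} of UK VWP travelers -> about {extra:,.0f} additional visa applications")
```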
All of these scenarios would place a strain on existing embassy staffing, particularly in larger VWP countries, while even small rates of increase in larger VWP countries could strain consular facilities, necessitating the construction of new facilities in some countries at potentially significant costs, according to State and embassy officials. As noted earlier, State has had a difficult time meeting recent staffing needs globally; State and embassy officials told us they could find it difficult to meet staffing needs in high-volume VWP posts in the near term if higher percentages of current travelers come to the embassies for visas. Furthermore, while we did not develop estimates of the number of facilities that would be needed, State and embassy officials agreed that the costs of new facilities could potentially be significant and that such facilities would most likely be needed in high-traveler-volume countries where existing facilities are more likely to become strained. Table 4 shows the impact, on all VWP countries combined, that such increases in visa demand—of 1, 2, 3, 5, or 10 percent—would have on staffing needs and visa revenues; the table does not include potentially significant facility costs. We estimate annual visa fee revenues would increase and offset the year-to-year recurring staffing costs. However, there would be a lag between when State would have to fund the staffing increases in the first year and when it would receive the offsetting increases in visa fees in the second year. Moreover, visa fee revenues would be less than the costs for facility construction, and, according to State, visa fee revenue is not used to offset the costs for constructing new facilities to process visas. Two high-traveler-volume VWP countries we visited in particular would be challenged to address the likely increased visa demand resulting from ESTA. In Japan, with around 3.26 million travelers to the United States each year, embassy officials told us that if visa demand increased by 1 percent of Japan’s current VWP travelers, it would present significant challenges for their existing staff and a workload level that could not be sustained over the long term. An increase of 2 percent would increase visa demand by about 70 percent over current levels; as a result, existing staff could not meet the demand, and current facility space would become crowded and strained to capacity. Any increase over 2 percent, embassy officials said, would overwhelm existing staff and facilities; more staff would be needed, and new facilities for processing visas would need to be obtained or constructed. In France, where about 870,000 citizens travel to the United States each year, embassy officials told us that if visa demand increased by 1 percent of France’s current VWP travelers, their existing staff would be greatly challenged to meet the demand. They said that existing staff could handle this increase only for a temporary period of time without additional staff. An increase of 2 percent could not be accommodated with existing staff, and visa waiting rooms and processing space, which are already crowded, would be strained, embassy officials said. However, if more staff were added, embassy officials told us the embassy could develop creative ways to work within the existing space, for example, by adding another shift for visa processing every day. An increase of 3 percent, embassy officials told us, would require acquiring or constructing a new visa processing facility. 
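A similar back-of-the-envelope calculation, using the roughly 3.26 million annual Japanese VWP travelers cited above and the approximately 94,000 visas per year the U.S. mission in Japan currently processes, shows why embassy officials characterized a 2 percent shift as roughly a 70 percent workload increase; the sketch is illustrative only, not a precise GAO estimate.

```python
# Illustrative check of the Japan figures discussed above. The current visa
# workload of roughly 94,000 per year comes from the earlier discussion of the
# U.S. mission in Japan; results are approximations, not precise GAO estimates.

vwp_travelers_japan = 3_260_000
current_visa_workload_japan = 94_000

for share in (0.01, 0.02, 0.03):
    extra = vwp_travelers_japan * share
    growth = extra / current_visa_workload_japan
    print(f"{share:.0%} of VWP travelers -> {extra:,.0f} extra applications, "
          f"about a {growth:.0%} increase in visa workload")
# A 2 percent shift works out to roughly a 70 percent increase, consistent with
# the embassy officials' characterization above.
```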
Figure 4 shows that the visa waiting room space in the U.S. embassy in Paris is already crowded. Though State has attempted to address general long-term growth in visa demand, the department has done little planning to address the increased visa demand that could result from implementation of ESTA, citing a lack of information from DHS on ESTA's effect. State Department and federal guidelines, as noted previously, have highlighted the importance of planning for potential program changes that could impact operations. Although DHS officials have said they plan to have ESTA operational for all countries by mid-2009 and for some countries by the summer of 2008, they told us the department has not determined what tests, if any, it will conduct to study ESTA rejection rates and determine ESTA's impact on visa demand. In addition, as noted earlier, State has not developed data on ESTA's likely impact on facility needs and costs, even though ESTA could be implemented by the summer of 2008 and State officials have acknowledged that developing such data would be a complex and time-consuming process. Furthermore, embassy officials we met with in three VWP countries told us that they have not prepared plans to address visa demand upon ESTA implementation, nor has State headquarters communicated with them to plan for this new requirement. For example, in one VWP country we visited, we found that State had begun plans for a new embassy facility, but no additional space had been included to accommodate additional visa demand, including demand resulting from ESTA implementation. Consular and management section officials there told us they had never heard of ESTA and that OBO had not raised the issue of additional demand with them or considered new visa demand in its initial design of the new facility. Officials told us that our visit and questions had prompted their first real consideration of what would be involved in planning for the impact of ESTA. In addition, consular officials in the three VWP countries we visited expressed concern that, without information regarding the likely impact of ESTA on visa demand, they are not able to plan at the embassy level to address the staffing shortfalls and space limitations that could result from ESTA implementation. Many important factors need to be considered regarding potential changes to the Visa Waiver Program, given its impacts on U.S. security, trade, commerce, tourism, diplomatic, and other interests. Ensuring that the proper resources are in place to handle visa demand globally is essential for State to meet its mission of facilitating legitimate travel to the United States while screening out possible threats. Elimination of the Visa Waiver Program has the potential to dramatically increase visa demand, severely disrupting U.S. visa operations in the short term and costing billions of dollars. And while State would likely be able to accommodate program expansion with minimal disruption, U.S. embassies will soon have to deal with the impact of the ESTA requirement, which could result in a substantial number of new travelers needing or choosing to obtain a visa, potentially creating significant resource gaps and affecting the ability of the United States to conduct visa operations globally. State has done limited planning in headquarters or the field for any such changes in the program.
Given the resource and cost implications involved, it is imperative that State work with posts to plan for imminent as well as potential program changes. Though the likelihood of program elimination is unknown, having a comprehensive understanding of how staffing and facilities would be impacted will enable State to help Congress make informed decisions on the fate of the Visa Waiver Program and to devise broad measures to address the immense challenges that would follow elimination of the program, should it occur. In addition, the development of estimates of the increases in visa demand in high traveler volume countries likely to result from ESTA implementation would give State’s headquarters and embassy officials the necessary information to make decisions on allocations of staff among posts and also would give State’s Bureau of Overseas Buildings Operations the information it needs to construct any needed new facilities. We recommend that the Secretary of State develop contingency plans for U.S. embassies in Visa Waiver Program countries to address the potential increases in visa demand that could result from program elimination. These plans would include identifying what options State has for providing additional resources and taking actions that could be needed, as well as the extent to which increased visa fee revenues would cover the cost of these resources. In addition, we recommend that the Secretaries of Homeland Security and State develop estimates of increased visa demand in Visa Waiver Program countries resulting from ESTA implementation. These estimates would include information on how many applicants can be expected to be rejected from ESTA and how many potential travelers can be expected to choose to come to the embassy for a visa. Based on these estimates, we recommend that the Secretary of State develop plans for how the department will manage the increased workload in the existing 27 Visa Waiver Program countries. State and DHS provided written comments on a draft of this report, which are reproduced in appendixes II and III, respectively. We also received technical comments from State and DHS, which we have incorporated throughout the report where appropriate. State said it would ask embassies to discuss management plans in the event that the program were eliminated, but did not indicate whether it fully concurred with our recommendation that State conduct contingency planning. State said that it has responded to situations that presented challenges to its workforce, during which it has considered and used several tools that could be helpful in addressing some of the challenges presented by program elimination. We believe that asking posts to discuss their management plans in the event the program were eliminated, as State said it would do, is a good step. In addition, we believe that State needs to develop contingency plans that include options for addressing program elimination so that State is better prepared to cope with the dramatic increases in workload that would result from the elimination of the program. State agreed with our recommendation that it develop estimates of, and conduct planning for, the impact of ESTA implementation, but said that its ability to do so was limited by the fact that DHS had not resolved a significant number of crucial details about ESTA and, as a result, had not provided State with key information it would need to conduct related planning. 
For example, according to State, CBP has provided data suggesting how many names might be rejected by ESTA by considering the rate of name rejections from CBP’s Advance Passenger Information System (APIS) database. However, according to State and DHS, DHS and CBP have not decided which databases ESTA will screen against, and it is therefore unclear what the ESTA rejection rate will be. We believe it is important for DHS to determine which databases it will use for ESTA screening, so that it can develop accurate estimates of the number of people whose ESTA applications might be rejected, which is essential information for State’s planning purposes. Moreover, State said there is no data on how many people would choose to obtain a visa to avoid the uncertainty associated with ESTA, and State believes that number could be significant. State said that the lack of such data is another factor complicating its planning for ESTA implementation. We agree that this is an important factor and that is why we recommend that DHS and State should develop a method for producing accurate information on the number of potential travelers who could be expected to choose to come to the embassy for a visa, rather than applying through ESTA. DHS agreed with our recommendation that it work with State to develop estimates of the impact of ESTA implementation on visa demand. DHS said that it has been coordinating with State as DHS develops ESTA and plans its implementation. However, as noted previously, DHS and State have not yet developed estimates of the changes in visa demand that could result from implementation of ESTA. For example, DHS has not determined how many VWP travelers would not be approved to travel under the program and would have to obtain visas. Without this information, it is difficult for State to plan for how it will meet changes in visa demand. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and to the Secretaries of State and Homeland Security. We will also make copies of this report available to others upon request. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We examined how three changes—Visa Waiver Program (VWP) elimination, program expansion, and implementation of the Electronic System for Travel Authorization (ESTA)—would affect the demand for visas, and how changes in demand would affect the resources the Department of State (State) needs and the amount of visa fee revenue that State receives. We recognized that there could be other implications of major changes in the program, particularly if the program were eliminated. These implications—for security, tourism, commerce, business, trade, diplomacy, and reciprocity regarding visa-free travel, for example—could potentially be significant. However, we limited the scope of our review to the impact on visa demand, visa resources—including staffing and facilities—and the associated costs and revenues. We undertook the following methodologies for all three of these objectives. 
We analyzed relevant law regarding the program and its requirements, and documentation, including State’s most recent cost of consular services study from 2004, and met with State’s Bureau of Consular Affairs to discuss the fee that State charges for visa applications, what costs of processing a visa the fee is intended to cover, and how much new revenue State would generate in the event of program elimination. To assess the staffing and resource costs to State under these scenarios, we developed our own high-level cost estimates, using State and Department of Homeland Security (DHS) data. DHS/U.S. Customs and Border Protection (CBP) provided us with data on the number of travelers from each VWP and Road Map country. State’s Bureau of Consular Affairs provided us with information on the number of visas processed in VWP and Road Map countries as well as estimates on the number of visas that could be expected to be processed by a Foreign Service officer, the number of Foreign Service national staff that could be expected to support Foreign Service officers, and—for the elimination scenario—estimates of the number of new facilities that could be needed. State’s Bureau of Overseas Buildings Operations (OBO) provided data on the cost, type, and size of recently completed U.S. government construction projects overseas, the costs and sizes of overseas leased facilities used for visa processing, and estimates of operations and maintenance costs for U.S. embassy facilities. State’s Bureau of Resource Management provided information on the costs of Foreign Service officer and Foreign Service national staff. In addition, we collected post-specific data on the above costs when we traveled to selected U.S. embassies. We determined that the data provided to us were sufficiently reliable for the purposes of the report. We met with State officials to determine the sources of the data provided to us. For instance, State’s data on the costs of putting a Foreign Service officer overseas came from Resource Management’s database of actual expenditures, which they use to input costs and for budget purposes. In addition, we collected data from other sources to cross-check State data whenever possible. For instance, we collected data on the actual costs of Foreign Service officers and Foreign Service nationals in the specific countries we visited. Similarly, we cross-checked the costs of facility operations and maintenance against Department of Defense facilities pricing guide data to ensure reasonableness. Finally, the uncertainty analysis we conducted provided us with a level of confidence in our estimates, because we took into account and stated a range of possible costs spanning our point estimates. This analysis provided us with sufficient confidence for the high-level type of estimates we are presenting. We prepared the cost estimates using fiscal year 2007 constant dollars. For purposes of estimating State’s costs, we assumed numbers of travelers to remain constant, as State officials told us and State documentation stated that its goal is to accommodate changes in visa demand and avoid disruptions of travel to the United States. In addition, we performed uncertainty analyses on cost models we developed for each scenario, using a Monte Carlo simulation tool called Crystal Ball to analyze the effects of varying inputs and outputs of the modeled scenarios. 
Monte Carlo simulation uses a random number generator to simulate the possible variance of designated inputs, such as estimates of the number of additional Foreign Service officers needed in the VWP elimination scenario, and calculates the resulting possible ranges of the outputs. This allowed us to try multiple hypothetical scenarios with our spreadsheet cost model values. We used the results of these analyses to provide a probability value for our point estimates, as well as to provide a range of cost estimates for these scenarios. In addition to these activities, we undertook methodologies specific to each of the three objectives, which are described below. To determine how program elimination in particular would affect the demand for visas, and how changes in demand would affect the resources State needs and the amount of visa fee revenue that State receives, we analyzed CBP data on the number of travelers to the United States from each VWP country annually from 2001 to 2007, and reviewed State's data from 2001 to 2007 on the numbers and types of visas issued in each VWP country. We used 2001 to 2007 data on the number of travelers to the United States from each VWP country, rather than the number of travelers coming to the United States under the program. We did not use data on the number of travelers coming to the United States under the program because (1) data from 2001 to 2007 on the number of travelers coming to the United States under the program were not available; (2) available data from 2004 to 2007 averaged 12.5 million travelers, a difference of less than 1 percent from the 12.6 million calculated using the number of travelers to the United States from each VWP country; and (3) we could not independently calculate the number of travelers coming to the United States under the program by subtracting the number of visas issued in each country annually, because not all of those people who received visas necessarily traveled in that year. We also reviewed data State provided on the current number of Foreign Service officer and Foreign Service national staff involved in visa processing in each VWP country. We analyzed information provided by State on the number of visas that could be expected to be processed by a Foreign Service officer and the number of Foreign Service national staff that could be expected to support Foreign Service officers. We also analyzed State's rough-order estimates of the number of new facilities that could be needed in the event of program elimination. Consular Affairs projected the possible number of facilities that could be needed based on information on the conditions of existing consular space in these countries, as well as assumptions about the number of people that would be seeking visas if the program were eliminated and the subsequent increases in Foreign Service officer and Foreign Service national staff necessary to accommodate the new visa demand. OBO officials said Consular Affairs's rough-order estimate was generally realistic, based on the potential increases in staffing that could be needed in the event of program elimination and the capacity of buildings to absorb those new staff. However, both Consular Affairs and OBO officials agreed that it was impossible to predict the exact number of facilities that would be needed.
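A minimal sketch of the uncertainty analysis described above, written in Python rather than the Crystal Ball spreadsheet tool the report used, is shown below. The triangular distribution bounds for the number of additional officers and the cost per officer are placeholder assumptions, not figures from the report.

```python
# Minimal Monte Carlo sketch: vary two uncertain inputs and report the
# resulting range of an output (annual staffing cost). Both input ranges are
# placeholder assumptions, not figures from the report.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of simulation trials

# Uncertain inputs, drawn from triangular distributions (low, most likely, high).
added_officers = rng.triangular(450, 540, 650, size=N)
cost_per_officer = rng.triangular(330_000, 360_000, 400_000, size=N)

annual_staffing_cost = added_officers * cost_per_officer

low, median, high = np.percentile(annual_staffing_cost, [10, 50, 90])
print(f"10th percentile: ${low / 1e6:,.0f} million per year")
print(f"median:          ${median / 1e6:,.0f} million per year")
print(f"90th percentile: ${high / 1e6:,.0f} million per year")
```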
We met with officials in Consular Affairs, OBO, and at embassies in three VWP countries—Japan, France, and Spain—to determine the extent that embassy officials expected increases in visa demand and the number of additional staff and facilities that would be needed to meet those increases in those countries and to confirm information that we had collected from State officials in Washington. We selected U.S. embassies in Japan, France, and Spain for field work for several reasons. First, we selected these embassies because they are in VWP countries with high numbers of people traveling to the United States each year. Specifically, Japan represents the VWP country with the second-largest number of travelers to the United States each year, while France represents the fourth-largest number, and Spain the eighth. Further, of the countries we visited, differences in the sizes of traveler volumes provided us with information on differences in the extent of potential impacts among VWP posts in the event of program elimination. Second, we selected these countries for purposes of assessing the different potential aspects of program elimination for countries in different regions of the world. Lastly, we selected these countries because, in the case of France and Spain, there have been terrorist plots or attacks in those countries in recent years, contrasting with Japan, where this has not been the case. We met with officials in Consular Affairs and OBO and with officials in State’s Bureau of Resource Management and the Office of Rightsizing to gain data for constructing our cost estimates on the staffing and facilities that would be needed to support increases in visa demand in VWP countries. Using the data provided by State and DHS, we created cost models to estimate the costs and savings due to changes in the number of consular and overseas buildings operations personnel; the construction, leasing, and operations and sustainment of consular facilities; and visa application fee revenue. For this scenario, we estimated two sets of nonrecurring cost elements: costs for the construction of new consular facilities, and the costs for the temporary staffing increase to manage the construction of those facilities. Also, we estimated four sets of recurring cost elements: costs of additional consular personnel, costs to operate and sustain new consular facilities, costs to lease consular facilities until the completion of new construction, and visa application fee revenue. We then performed the uncertainty analysis described above to generate cost estimate ranges for each of the scenarios. To determine the extent to which State had prepared for the possibility of program elimination, we reviewed State’s Performance Plan for the Bureau of Consular Affairs to determine State’s goals and objectives regarding visa issuance, as well as any planning that State had done. We also reviewed standards for internal controls in the federal government, including those addressing the importance of identifying risks to achieving program goals and planning ways to mitigate those risks in order to continue to meet program objectives. To assess how program expansion in particular would affect the demand for visas, and how changes in demand would affect State’s resource need and the amount of visa fees that State receives, we analyzed CBP data on the number of travelers to the United States from each Road Map country annually from 2001 to 2007. 
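To make the elimination-scenario cost model concrete, the sketch below combines simplified versions of the nonrecurring and recurring cost elements listed above into a single point estimate. Every numeric default is an illustrative placeholder rather than data from the report, and wrapping this function in the Monte Carlo loop sketched earlier would produce the kinds of cost ranges the report describes.

```python
# Hedged sketch of how the elimination scenario's cost elements might be
# combined. All numeric defaults are illustrative placeholders, not report data.

def elimination_scenario_estimate(
    new_facilities=45,                 # assumed number of new consular facilities
    construction_cost_each=105e6,      # assumed construction cost per facility, dollars
    construction_mgmt_cost=50e6,       # assumed temporary staffing to manage construction
    annual_personnel_cost=450e6,       # assumed recurring consular staffing cost per year
    annual_facility_om_cost=60e6,      # assumed operations and maintenance per year
    annual_interim_lease_cost=40e6,    # assumed leases until construction is complete
    added_visa_applications=12.6e6,    # assumed new visa applications per year
    visa_fee=131,                      # assumed application fee, dollars
):
    # Nonrecurring: construction plus temporary construction-management staffing.
    nonrecurring = new_facilities * construction_cost_each + construction_mgmt_cost
    # Recurring: personnel, operations and maintenance, and interim leases.
    recurring_cost = annual_personnel_cost + annual_facility_om_cost + annual_interim_lease_cost
    # Offsetting recurring revenue from visa application fees.
    fee_revenue = added_visa_applications * visa_fee
    return {
        "nonrecurring_cost": nonrecurring,
        "annual_recurring_cost": recurring_cost,
        "annual_fee_revenue": fee_revenue,
        "annual_net_recurring": fee_revenue - recurring_cost,
    }

for key, value in elimination_scenario_estimate().items():
    print(f"{key}: ${value / 1e9:.2f} billion")
```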
We also reviewed data that State provided on the number of Foreign Service officer and Foreign Service national staff involved in visa processing in each Road Map country. We met with officials in State's Bureau of Consular Affairs and OBO and at embassies in four Road Map countries—South Korea, Greece, the Czech Republic, and Hungary—to determine the extent to which embassy officials expected decreases in visa demand, the number of staff that could potentially be freed and any associated costs or savings, and the number of facilities, if any, that could either be sold or for which the embassy could relinquish lease commitments. We selected U.S. embassies in these countries for several reasons. First, these countries represent four of the seven highest visa-issuing Road Map countries. The U.S. embassy in Seoul processes by far the most visas among the Road Map countries, while the embassies in Athens, Prague, and Budapest, though much smaller, process among the highest numbers of visas of the remaining 12 Road Map countries. In addition, we selected these countries for purposes of assessing the different potential aspects of program expansion for countries in different regions of the world. Consular Affairs provided data on the number of Foreign Service officers, Foreign Service nationals, and visas processed in each Road Map country. We used these data to estimate the number of Foreign Service officers that could be moved to other posts from Road Map countries once they enter the Visa Waiver Program, as well as the number of Foreign Service national positions that could potentially be terminated. We met with officials in State's Bureau of Consular Affairs, OBO, Bureau of Resource Management, and the Office of Rightsizing to gain data for constructing cost estimates on any costs and savings that may result regarding staffing and facilities in Road Map countries. For the VWP expansion scenario, we created cost models to estimate the costs and savings due to changes in the number of consular personnel, ESTA development, initial and recurring reviews by DHS of candidate countries' suitability for the program, and visa application fee revenue. We reported on two sets of nonrecurring cost elements: estimated costs for the termination and hiring of Foreign Service nationals, and the cost of development of ESTA, as provided by DHS. We estimated two sets of recurring cost elements: costs of DHS's reviews of VWP countries, and visa application fee revenue. We then performed the uncertainty analysis described above on these models. To assess how implementation of the new ESTA requirement in particular would affect the demand for visas, and how changes in demand would affect the resources State needs and the amount of visa fees that State receives, we analyzed CBP data on the number of travelers to the United States from each VWP country annually from 2001 to 2007, and reviewed State's data from 2001 to 2007 on the numbers and types of visas issued in each VWP country. We also reviewed data State provided on the number of Foreign Service officer and Foreign Service national staff involved in visa processing in each VWP country. We met with officials in Consular Affairs and OBO and at embassies in three VWP countries—Japan, France, and Spain—to determine the extent to which embassy officials expected increases in visa demand and the number of additional staff and facilities that could be needed to meet those increases.
We selected these three countries for our field work for similar reasons as the VWP elimination scenario—for reasons of size, geographical diversity, and recent history, or lack thereof, of terrorist plots or attacks. We met with Consular Affairs and OBO officials and with officials in State’s Bureau of Resource Management and the Office of Rightsizing to gain data for constructing our cost estimates on the staffing and facilities that would be needed to support increases in visa demand in VWP countries. We also reviewed relevant law, including the August 2007 Implementing 9/11 Commission Recommendations Act of 2007. For the five ESTA scenarios to demonstrate the effects of different possible percentages of travelers coming to U.S. embassies for visas, we created cost models to estimate the costs and savings due to changes in the number of consular and overseas buildings operations personnel and visa application fee revenue. We estimated two recurring cost elements: costs of additional consular personnel and visa application fee revenue. We then performed the uncertainty analysis described above to generate cost estimate ranges for each of the scenarios. To determine the extent to which State had prepared for the effects of the implementation of ESTA, we reviewed the fiscal year 2007 Consular Affairs Bureau Performance Plan to determine State’s goals and objectives regarding visa issuance, as well as any planning that State had done. We also reviewed standards for internal controls in the federal government, including those addressing the importance of identifying risks to achieving program goals and planning ways to mitigate those risks in order to continue to meet program objectives. We conducted this performance audit from May 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following are GAO’s comments on the Department of Homeland Security’s letter dated April 30, 2008. 1. DHS has indicated that it will begin implementing ESTA in the summer of 2008. As of May 1, 2008, State had not received data from DHS indicating how many VWP travelers would likely not be approved for travel by ESTA and would therefore need to obtain a visa in order to travel to the United States. It is important that DHS provide data to State that will help State (1) determine how ESTA will affect visa demand and (2) formulate plans to meet the demand. 2. Under ESTA there would be some foreign citizens whose ESTA application would not be approved and would therefore be required to obtain a visa, and some who would choose proactively to obtain a visa for the increased flexibility and travel convenience that a visa could offer. DHS and State need to develop estimates of the total possible increase in visa demand due to both outcomes brought about by ESTA. DHS and State have asserted that there are existing data that could be helpful in developing these estimates. We agree. However, until DHS determines which databases it plans to use for screening ESTA applications, DHS officials told us they will not be able to determine how many VWP travelers would likely not be approved for travel by ESTA. 
In addition, DHS officials told us that it was unclear at this point what testing, if any, DHS would do to determine how many foreign citizens would choose to obtain a visa, rather than use ESTA. 3. We believe that sharing of this type of information between DHS and State is a first step toward implementing our recommendation that DHS and State estimate how ESTA will affect visa demand in existing VWP countries, but we believe much more needs to be done. 4. We did not intend to suggest that DHS is not sharing information with State. Our point is that DHS has not made a final decision on what databases it will use to screen ESTA applications; as a result, neither DHS nor State can estimate the number of VWP citizens who would not be approved to travel under the program. Without this information, it is difficult for State to plan for how it will meet changes in visa demand. 5. We do not believe that DHS should estimate the resources that State would need to manage increased visa demand or how ESTA could affect visa fee revenues collected by State as a result of the implementation of ESTA. These are actions that State needs to take. However, State has not yet performed this analysis and planning largely due to, according to State officials, the lack of information from DHS on the number of foreign citizens whose ESTA application might be rejected and might seek a visa from U.S. embassies. 6. We agree that it is difficult to predict how many travelers would prefer to apply for a visa instead of using ESTA. Consular officials in Washington and overseas said that the number of travelers choosing to do this could be significant. Some DHS, State, and embassy officials suggested that an actual test or pilot use of ESTA in one or more existing VWP countries could provide information on the number of foreign citizens who choose to obtain a visa rather than use ESTA and the number of foreign citizens whose ESTA applications are not approved by ESTA. However, as noted previously, DHS told us it is unclear what tests, if any, it will undertake to better understand ESTA’s impact, even though DHS stated it plans to implement ESTA in the summer of 2008. In addition to the individual named above, John Brummet, Assistant Director; J. Addison Ricks; Brian Bothwell; Joe Carney; Carmen Donohue; Jennifer Echard; Tim Fairbanks; Grace Lui; and Karen Richey made key contributions to this report. | Under the Visa Waiver Program (VWP), citizens from 27 countries can travel to the United States visa free. Terrorism concerns involving VWP country citizens have led some to suggest eliminating or suspending the program, while the executive branch is considering adding countries to it. Legislation passed in 2007 led the Department of Homeland Security (DHS) to develop its Electronic System for Travel Authorization (ESTA), to screen VWP country citizens before they travel to the United States; if found ineligible, travelers will need to apply for a visa. GAO reviewed how (1) program elimination or suspension, (2) program expansion, and (3) ESTA could affect visa demand, resource needs, and revenues. We collected traveler, staffing, facilities, and cost data from the Department of State (State), DHS, and embassy officials and developed estimates related to the three scenarios above. The potential elimination or suspension of the Visa Waiver Program could cause dramatic increases in visa demand--from around 500,000 (the average number of people from VWP countries who obtain a U.S. 
visa each year) to as much as 12.6 million (the average number of people who travel to the United States from VWP countries each year)--that could overwhelm visa operations in the near term. To meet visa demand, State officials said they could need approximately 45 new facilities, which we estimate could cost $3.8 billion to $5.7 billion. We estimate State would also need substantially more staff--around 540 new Foreign Service officers at a cost of around $185 million to $201 million per year, and 1,350 local Foreign Service national staff at around $168 million to $190 million per year, as well as additional management and support positions for a total annual cost of $447 million to $486 million. Because VWP elimination would increase the number of travelers needing a visa, we estimate annual visa fee revenues would increase substantially, by $1.7 billion to $1.8 billion, and would offset the year-to-year recurring staffing costs. State has done limited planning for how it would address increased visa demand if the program were suspended or eliminated. Adding countries to the Visa Waiver Program would reduce visa demand in those countries, but likely have a relatively limited effect overall on resources needed to meet visa demand and on State's visa fee revenues. The volume of visa applications is relatively small in most of the 13 "Road Map" countries the executive branch is considering for expansion. If all 13 Road Map countries were to join the program, and if all of those countries' citizens who previously traveled with visas were to travel to the United States without visas, the reduction in workload would, we estimate, permit State to move about 21 to 31 Foreign Service officers to other posts in need, and to cut 52 to 77 Foreign Service national positions. In addition, though program expansion would result in less space needed for visa operations, this would likely result in little or no building or lease savings because any resulting excess consular space is in government-owned facilities, and could not be sold. If all 13 Road Map countries were admitted to the Visa Waiver Program, we estimate that State would lose approximately $74 million to $83 million each year in collected visa fees, offsetting any savings in personnel costs. State and DHS officials acknowledged that the implementation of ESTA could increase visa demand in VWP countries, though neither State nor DHS has developed estimates of the increase. DHS is currently developing ESTA, and DHS officials told us the ESTA rejection rate could be between 1 percent and 3 percent, but they currently do not know. In addition, State and embassy officials believe some travelers might choose to apply for a visa rather than face potential, unexpected travel disruptions due to ESTA. Neither DHS nor State has attempted to estimate how these two factors would affect visa demand, and, as a result, State has not estimated what additional resources would be needed to manage the demand, and what additional visa fees would be received. However, State officials told us that, if 1 percent to 3 percent of current VWP travelers came to embassies in VWP countries for visas, it could greatly increase visa demand at some locations, which could significantly disrupt visa operations. |
The federal Food Stamp Program is intended to help low-income individuals and families obtain a more nutritious diet by supplementing their income with benefits to purchase food. FNS pays the full cost of food stamp benefits and shares the states’ administrative costs—with FNS usually paying 50 percent—and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states administer the program by determining whether households meet the program’s income and asset requirements, calculating monthly benefits for qualified households, and issuing benefits to participants, usually on an Electronic Benefits Transfer (EBT) card. The program is usually administered out of an assistance office and, oftentimes, assistance offices also offer other benefits, including Temporary Assistance for Needy Families (TANF), Medicaid, and child care assistance. Figure 1 outlines the general steps a household must take to participate in the Food Stamp Program and how each step occurs. Eligibility for participation in the Food Stamp Program is based on the Department of Health and Human Services’ poverty guideline for households. In most states, a household’s gross income cannot exceed 130 percent of the poverty guideline (or about $1,654 per month for a family of three living in the contiguous United States) and its net income cannot exceed 100 percent of the poverty guideline (or about $1,272 per month for a family of three living in the contiguous United States). In addition, most states place a limit of $2,000 on household assets, and basic program rules limit the value of vehicles an applicant can own and still be eligible for the program. Other factors affecting benefit levels include size of household, income level, shelter expenses, child care costs, and child support payments. (Eligibility requirements are less stringent for households with elderly or disabled members.) Participants must also periodically recertify by documenting their continued eligibility for program benefits. In fiscal year 2003, the Food Stamp Program issued more than $21 billion in benefits. In September 2003, more than 22.7 million individuals participated in the program. This is an increase from the same month in 2002, when the Food Stamp Program provided benefits to almost 19.8 million Americans. As shown in figure 2, the increase in the average monthly participation of food stamp recipients in 2003 continues a recent upward trend in the number of people receiving benefits. The decrease in number of recipients from 1996 to 2001 can be explained, in part, by the passage of the Personal Responsibility and Work Opportunity Act of 1996 (PRWORA), which toughened eligibility criteria and made certain groups ineligible to receive benefits, and had the effect of un-tethering food stamps from cash assistance. In some cases, this caused participants to believe they were no longer eligible for food stamps when TANF benefits were ended. In addition, studies have suggested that the economic growth in the late 1990s played a major role in the decrease of recipients. Since 2000, that downward trend has reversed, and stakeholders believe that the downturn in the U.S. economy, coupled with changes in the program’s rules and administration, has led to an increase in the number of food stamp recipients. Although the total number of food stamp recipients is still below the 1996 level, since February 2001, the number of recipients has increased over 30 percent. 
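The basic income and asset tests described above can be summarized in a simplified screen such as the one below. It uses the family-of-three figures quoted in the text and deliberately ignores vehicle rules, deductions, categorical eligibility, and the less stringent rules for households with elderly or disabled members, so it illustrates the thresholds rather than an actual eligibility determination.

```python
# Simplified sketch of the basic eligibility screen described above, using the
# family-of-three figures quoted in the text (gross income limit of about
# $1,654/month at 130 percent of the poverty guideline, net income limit of
# about $1,272/month at 100 percent, and a $2,000 asset limit). Vehicle rules,
# deductions, categorical eligibility, and special rules for elderly or
# disabled households are omitted.

GROSS_LIMIT_PCT = 1.30
NET_LIMIT_PCT = 1.00
ASSET_LIMIT = 2_000
POVERTY_GUIDELINE_3 = 1_272  # approximate monthly guideline for a family of three

def passes_basic_screen(gross_monthly_income, net_monthly_income, countable_assets):
    """Return True if a family of three clears the three basic tests."""
    return (
        gross_monthly_income <= GROSS_LIMIT_PCT * POVERTY_GUIDELINE_3
        and net_monthly_income <= NET_LIMIT_PCT * POVERTY_GUIDELINE_3
        and countable_assets <= ASSET_LIMIT
    )

print(passes_basic_screen(1_500, 1_100, 800))   # True: under all three limits
print(passes_basic_screen(1_800, 1_100, 800))   # False: gross income too high
```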
Despite this increase, it remains the goal of FNS and several states to increase participation in the program among eligible families, while maintaining program integrity. FNS's fiscal year 2000 strategic plan makes it a goal of the administration to improve the rate of food stamp participation among all eligible people to 68 percent by 2005. According to FNS officials, eligible immigrants, elderly Americans, and members of working families are the major subgroups targeted to increase participation. The administration has chosen to focus on participation among working families, in part, because of the increased emphasis placed, since PRWORA, on the need for work supports such as food stamps, the Earned Income Tax Credit (EITC), and child care and transportation subsidies. In addition, the Farm Security and Rural Investment Act of 2002 (the 2002 Farm Bill) included provisions intended to encourage participation among underserved groups, including working families, and to simplify program administration. For example, the 2002 Farm Bill gave states the option to maintain food stamp benefits at a consistent level for a transition period for individuals who left TANF to go to work. The 2002 Farm Bill also made it possible for FNS to provide financial awards to states with higher or improved performance in program administration. In response, FNS has targeted improving program participation in addition to its existing focus on payment accuracy and lowering error rates. The food stamp error rate was 8.26 percent in fiscal year 2002, the lowest in the program's history. In the last few years, working families have become a greater proportion of the overall food stamp participant population. As of fiscal year 2002, about 40 percent of those individuals receiving food stamps were members of households with earnings, up from about 33 percent in 1997. As shown in figure 3, this increase occurred at the same time that the proportion of food stamp recipients receiving TANF declined dramatically. This can be explained, in part, by the fact that when TANF recipients leave that program, they may still be eligible for food stamp benefits. Thus, if TANF recipients leave that program because they have found employment, they can continue to receive food stamps until their income increases enough to disqualify them from the program or until they are no longer eligible for other reasons. Because of the increase in the proportion of food stamp participants who are living in households with earned income, serving low-income working families has taken on increased importance for the Food Stamp Program in recent years. A lower percentage of food stamp-eligible individuals in working families received food stamp benefits than those in eligible nonworking families, and certain family characteristics are associated with the likelihood of participation. In September 2001, the most recent month for which data were available, the participation rate of likely food stamp-eligible individuals in households with earnings was estimated to be approximately 52 percent. At the same time, estimated participation among members of eligible nonworking families was almost 70 percent. Despite their lower participation rate, the average participating working family received a larger benefit than the average nonworking family.
The amount of food stamps a working family is eligible for appears to be one of the major factors associated with the participation of working families, with those families eligible for larger food stamp benefits more likely to participate in the program. Other characteristics that are associated with the likelihood of food stamp receipt among working families include family size, amount spent on shelter, and the marital status of the head of household. Finally, working families that receive unearned income through other government assistance programs are more likely to receive food stamps than those with no unearned income. In September 2001, an estimated 52 percent of individuals in eligible working families participated in the Food Stamp Program, according to an analysis done for FNS. In the same month, the participation rate among all eligible individuals was estimated by FNS to be 62 percent, and the rate among members of nonworking families was almost 70 percent. As shown in figure 4, the participation rate among working families has been relatively constant in recent years—hovering around 50 percent—and it has consistently been lower than the rate among nonworking families. Among the families that receive food stamps, working families get larger benefits than nonworking families. In 2002, working families that participated in the Food Stamp Program received, on average, $210 a month in food stamps per household, according to information collected by FNS. This amount is more than the $159 average benefit received by households with no earned income. The fact that working families received more benefits, on average, than nonworking families is, in part, due to family size. In general, the larger the family size, the larger the family’s benefit. Working food stamp families have an average of 3.2 persons per household, as opposed to nonworking families that receive benefits, which average fewer than two persons per household. In addition to household size, household income level also affects benefit level, as do other factors such as cost of shelter, child care costs, and child support payments. While it is true that the amount of food stamp benefits that a working family is eligible for decreases as the family’s gross income increases, there is not an immediate drop-off in benefit level as income increases, nor is there a one dollar drop in benefits for every additional dollar in income earned. To demonstrate the effect of additional earned income on working families that receive food stamps, FNS provided us with an example of how earnings might impact a hypothetical family consisting of a single mother with two children. Figure 5 shows estimates of the amount of food stamps for which this family would be eligible given varying monthly income levels. Our data analysis shows that there are several characteristics that are associated with an eligible working family’s likelihood of participating in the Food Stamp Program. To determine the family characteristics that contribute to the likelihood of program participation for eligible working families, we analyzed a database produced by Mathematica Policy Research, Inc., of likely eligible working families based on the March 2001 Current Population Survey (CPS). This is the most current data available. Table 1 shows the differences between participating working families and those we estimate are eligible but not participating in 2000, the last year for which information was available. 
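The gradual phase-out of benefits with earnings described above can be illustrated with a simplified calculation. The structure below (an earned income deduction, a standard deduction, and a benefit equal to a maximum allotment minus 30 percent of net income) follows the program's general design at the time, but the specific dollar amounts are placeholder assumptions rather than the figures in FNS's example.

```python
# Hedged illustration of the benefit phase-out for a family of three. The
# structure follows the program's general design (earned income deduction,
# standard deduction, benefit equal to the maximum allotment minus 30 percent
# of net income), but the dollar figures are placeholder assumptions, not
# FNS's actual example values.

MAX_ALLOTMENT_3 = 370        # assumed maximum monthly benefit, family of three
STANDARD_DEDUCTION = 135     # assumed standard deduction
EARNED_INCOME_DEDUCTION = 0.20
BENEFIT_REDUCTION_RATE = 0.30

def monthly_benefit(gross_earnings):
    net_income = max(0, gross_earnings * (1 - EARNED_INCOME_DEDUCTION) - STANDARD_DEDUCTION)
    return max(0, round(MAX_ALLOTMENT_3 - BENEFIT_REDUCTION_RATE * net_income))

for earnings in (0, 400, 800, 1_200, 1_600):
    print(f"${earnings:>5} earned -> estimated benefit ${monthly_benefit(earnings)}")
# Each extra $100 earned lowers the benefit by roughly $24 (0.30 * 0.80 * 100),
# not by the full $100.
```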
Some characteristics are associated with an increased likelihood of participation. For instance, food stamp participation was more likely among working families that were eligible for a larger amount of food stamp benefits; specifically, each $100 increase in monthly benefits for which families were eligible increased the likelihood of participating in the program by approximately 30 percent. Working families with young children—under 5 years old—in the household were also more likely to participate than likely eligible working families without young children. Other characteristics are associated with a reduced likelihood of participation. For example, working families with higher shelter expenses were less likely to participate; each $100 increase in monthly shelter expenses decreased the likelihood of participating by about 10 percent. In addition, working families that owned, rather than rented, their dwellings were about 50 percent less likely to participate in food stamps than other working families. Families with a noncitizen head of household, and families with elderly or married individuals in the household, were also only about half as likely to participate in the program. Finally, families with any unearned income were more than 2 times as likely as those without any unearned income to participate in the Food Stamp Program. The likelihood of participating was almost 11 times higher for families that received Medicaid benefits than for those that did not, over 6 times higher for those that received energy assistance, and over 4 times higher for households in which someone received job training. Similarly, the likelihood of participating in the Food Stamp Program was about 3 times higher for working families participating in the free or reduced-price school lunch program or in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) than for eligible working families that did not participate in those programs. In assessing the results of our analysis, it is worth noting that some of the characteristics associated with participation by likely eligible working families are also likely to be associated with participation among all eligible individuals. For this study, however, the analysis focuses on how these characteristics are associated with working families. By focusing on the differing characteristics of participating and nonparticipating working families, it is possible to develop a better understanding of how working families that receive food stamps differ from likely eligible working families that do not receive benefits. This analysis does not, on its own, offer any explanation for why these families choose to participate, but it does help identify characteristics of those families who do and do not participate. The analysis also provides additional support for how certain impediments we identified can affect a working family's decision to apply for and receive food stamp benefits. The following section elaborates on those factors.
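Findings of this kind, in which each $100 of additional monthly benefits raises the likelihood of participation by roughly 30 percent, are the sort of output a logistic regression on household characteristics produces. The sketch below shows what such an analysis might look like; the data file and variable names are hypothetical, and the report's actual estimates came from a Mathematica Policy Research database built from the March 2001 CPS.

```python
# Hedged sketch of a logistic regression of food stamp participation on family
# characteristics, with exponentiated coefficients read as multiplicative
# changes in the odds of participating. The input file and column names are
# hypothetical; the report's analysis used a Mathematica Policy Research
# database built from the March 2001 Current Population Survey.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per likely eligible working family, with a 0/1 participation flag.
df = pd.read_csv("eligible_working_families.csv")  # hypothetical input file

model = smf.logit(
    "participates ~ benefit_per_100 + shelter_cost_per_100 + has_child_under_5 "
    "+ owns_home + noncitizen_head + married_head + has_unearned_income "
    "+ receives_medicaid + receives_wic",
    data=df,
).fit()

odds_ratios = np.exp(model.params)
print(odds_ratios.round(2))
# An odds ratio of about 1.3 on benefit_per_100 would correspond to the roughly
# 30 percent higher likelihood per $100 of monthly benefits noted above.
```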
Among them are whether the family is aware of the program's existence and the family's possible eligibility, the family's willingness to deal with the program's administrative process, whether the family judges the amount of food stamp benefits received to be worth the effort and cost of participating in the program, and the extent to which the family associates a stigma with food stamp receipt. Figure 6 shows how these factors interact with the steps necessary for a working family to receive food stamps. To receive food stamps, a family has to apply for the benefits, a step generally taken by a member of the family going to a local assistance office and filling out an application. Participation, therefore, depends on the family being aware of the program's existence and its possible eligibility. Yet, studies of participation in the program that we reviewed offer evidence that many eligible families lack such awareness. For example, a study done by Mathematica Policy Research, Inc., for FNS, based on interviews with likely eligible individuals who do not participate in the program, found that 72 percent of those surveyed were not aware of their probable eligibility. Program stakeholders, too, said that lack of information about the program and how it works plays a key role in nonparticipation for working families. For instance, according to officials in Florida, working families may not participate because they are uncertain about the program's rules and eligibility criteria and how to participate. A worker for a community-based organization in Florida who did outreach to working families said that many individuals are unfamiliar with the program's workings, making food stamp receipt difficult. Program officials also suggested that many working individuals assume that having a job makes their family ineligible for the program. One official in Oregon said she believes that some working people do not think of themselves as food stamp recipients, because they believe that food stamps are something for the very poor, and thus do not think they would be eligible given that they have jobs. Officials in Florida and Massachusetts agreed that some potentially eligible working families do not participate because they do not know that they are potentially eligible for food stamps. Confusion about the relationship between food stamp eligibility rules and TANF eligibility rules can also contribute to working families wrongly believing that they are ineligible for food stamps, according to program officials we talked with. An official for the New York Office of Transitional and Disability Assistance said that some people still believe that when one's TANF case closes, one's food stamp case closes as well. The official said that, despite New York's best effort to combat this false information, some people leave the Food Stamp Program when they leave TANF because they believe that they are no longer eligible for food stamps. Another factor influencing whether a family participates in the Food Stamp Program is how the food stamp administrative process is perceived. In other words, according to the literature we reviewed and the program officials we spoke with, if the administrative process is seen as burdensome, families may not participate because of the effort required to apply for and receive food stamps.
In addition, our analysis of CPS data demonstrates that, in 2000, working families that participated in the Food Stamp Program were more likely to receive other types of government assistance—such as Medicaid, WIC, and energy assistance—than nonparticipating working families. One possible explanation for this difference is that those who are comfortable with the administrative process of applying for and receiving assistance might be more likely to participate in the Food Stamp Program. During our site visits to food stamp offices in Florida, Massachusetts, New York, and Oregon, we identified certain administrative practices that could be considered burdensome by potential recipients who work and that might deter participation. Among the practices identified were multiple required office visits, food stamp office operating hours, food stamp applications, requirements for eligibility documentation and verification, finger imaging for program participants, and the requirement for workers to report changes in their income and hours worked. However, we found that not all of these practices were in place in every local office that we visited and that the practices were not implemented in exactly the same fashion at each office. In addition, it is clear that some of the administrative processes have potentially significant benefits, including fraud and error prevention, targeting benefits to need, and the provision of more cost-effective service. Among the practices that can influence whether a family participates in the Food Stamp Program are the following:

Required office visits. In some cases, potential recipients make one trip to the assistance office to fill out a food stamp application and a separate trip to meet with a caseworker to determine eligibility. In addition, potential clients may have to return to the food stamp office if they do not bring all the required documentation to their first visit. This means that a family often has to make two or more trips to the office to participate in the program, which can be difficult for individuals who are working.

Office hours. Assistance offices are often open only during regular working hours. For example, we visited an office that was open from 8:30 a.m. to 4:30 p.m., Monday through Friday. For working individuals, getting to a food stamp office during the work week can be difficult. A recent study by the Urban Institute supports the notion that a working family's participation status is influenced by the hours the family works and, perhaps, by the hours a food stamp office is open. The study found that those who work so-called traditional hours are less likely to participate than those who work a less traditional schedule. However, offering longer hours of service can have cost implications, such as additional personnel, utility, computer, and security costs.

The food stamp application. During our site visits, program advocates said that applications, which often serve both food stamps and other assistance programs, such as Medicaid and TANF, are too complex. For instance, an advocate said that she believed that the food stamp application was too long and required a reading level that was too advanced for most potentially eligible individuals.
State officials in Oregon, however, said that having a slightly longer food stamp application allows for better integration of assistance programs, which can benefit recipients, as well as a reduction of workload for caseworkers at assistance offices.

Eligibility documentation and verification. Participating in the program requires proof of income level, residency, and family size, among other information. Providing such proof usually is done by bringing documentation to the food stamp office at the time of enrollment. This, however, can be perceived as burdensome by potential clients. For example, current and former food stamp clients surveyed in an Oregon focus group reported that various documentation forms in that state are intrusive and often excessive. However, under current program rules, these requirements are an essential component of ensuring that food stamp applicants are eligible to receive food stamps and that they receive the proper benefit amount.

The finger-imaging requirement. Four states require that new recipients of food stamps be finger-imaged at the assistance office before they receive their benefits. New York was the only state we visited that had such a requirement. Advocates in that state complained that being finger-imaged was a deterrent to participation, in that it potentially required applicants to make an additional trip to the food stamp office. However, quality control officials in that state believed that it was a vital way to prevent people from defrauding the Food Stamp Program by allowing officials to verify that the applicant did not already have a case open somewhere else in the state.

Change reporting requirement. Participating in the program often requires families to report income changes, meaning that some working families would have to be in frequent contact with their caseworker as the number of hours they worked or the wages they received fluctuated. The requirement has the potential to add to the burden of participation, and program officials said that it was a potential deterrent for working families. However, reporting these changes also ensures that food stamp recipients continue to receive the correct benefit amount. These income changes can result in either an increase or a decrease in benefit levels.

Government officials we talked with acknowledged that the food stamp administrative process can be burdensome and that participating in the program is complex. However, officials spoke positively of many of the practices in their states, such as finger imaging and the requirement for multiple office visits. Many of the practices that might be perceived by potential recipients as causing burdens contribute to other priorities of the program, such as streamlining the eligibility process and keeping the program's error rate as low as possible. The perceived impediments associated with many of the administrative processes, and the justifiable reasons the processes exist, highlight the tradeoffs between the various program goals, including increasing program access and reducing error rates, that are inherent in the design of the Food Stamp Program. Some of these practices probably contribute to some eligible working families not participating in the program, but they also probably help to ensure that only eligible families receive benefits, which is vital to maintaining public support for the program.
Another factor influencing whether eligible working families participate in the Food Stamp Program is how much they value the food stamp benefit, according to evidence from available public data, the literature we reviewed, and visits to four states. Working families may make an informal cost-benefit analysis of whether their need for the benefits they would receive outweighs the effort and cost of participation. Costs can include taking time off from work and the transportation costs of getting to a food stamp office. Our analysis of 2000 CPS data, which demonstrates that working families that receive other government assistance are more likely to participate in the Food Stamp Program, is consistent with this view. Given that many assistance programs are administered at the same office, and sometimes using the same application, as food stamps, participating in other programs is likely to reduce the cost of food stamp participation, making a working family more likely to participate. Our analysis of the 2000 data also demonstrates that working families that are eligible for larger benefits are more likely to receive food stamps than those that are eligible for smaller benefit amounts. Program officials also cite the amount of benefits as a reason that some working families do not participate. An official in Massachusetts said that some working families may qualify for only a small dollar amount each month, a pattern our evidence supports, and that, because of this, some potential recipients believe the effort associated with applying is not worth the small amount. In addition, available research shows that whether a family is willing to participate in the program can also be influenced by the extent to which the family believes it needs the benefit. In a survey and focus groups conducted for FNS, Mathematica Policy Research, Inc., found that many likely eligible working families did not participate because they believed that they could get by without food stamps and that others needed them more. Such families seem to place a minimal value on their food stamp benefit. Moreover, research done by USDA's Economic Research Service suggests that families that are food insecure are more likely to participate than families that are food secure. Both of these research efforts suggest that a family's level of need plays a role in whether a working family participates in the Food Stamp Program. Families that believe they do not need food stamps are less likely to bear the costs of participating in terms of lost time and inconvenience, while families that are in need may be more likely to participate regardless of the benefit level. A study published by The Lewin Group reinforces the idea that need plays a role in the decision to participate. In a study using data from the Survey of Income and Program Participation (SIPP), the authors found that likely eligible nonparticipating working households differed from participating working households in their income variability. Nonparticipating households were more likely than participants to have experienced a short-term drop in income and were more likely to have had recent past income that exceeded 100 percent of the federal poverty level. From these findings, the authors suggest that many nonparticipants expect higher future income and do not see the need for food stamps, which helps to explain why they do not participate.
The stigma associated with the Food Stamp Program is one of the reasons some eligible families do not participate in the program, according to existing research and interviews with program stakeholders. Although the program's primary mission is nutrition assistance, program stakeholders believe the stigma associated with food stamps is largely related to the program's welfare connotations. Focus groups of current and former food stamp recipients, conducted by a community-based organization in Oregon, echoed that sentiment. A theme that ran through the focus group responses was that people were ashamed, or too proud, to receive food stamps. The focus group responses indicated that individuals can feel personal shame about receiving food stamp benefits and may worry about being looked down upon for receiving them. For working families, the welfare stigma can be a particular deterrent to food stamp participation. For example, program officials cited the occasional need to verify a food stamp recipient's wages and employment status with the recipient's employer as one source of stigma associated with food stamp receipt for working families. A related deterrent for working families is that to participate in the program, a family usually has to make a trip to the food stamp office, which is also the "welfare office." Advocacy groups said that this requirement discouraged participation among working families. Former Florida food stamp recipients told us that caseworkers asked personal questions about how they manage their finances, such as how they pay for hair care and laundry; the recipients considered these questions intrusive and said the questions made them less likely to participate in the program. However, local officials in Florida said that these questions are an effective method of deterring program fraud and ensuring that food stamp benefit amounts are provided accurately. Measuring the extent of stigma can be difficult because stigma is often a personal matter. Many of the officials we spoke with said that the move toward EBT cards has helped alleviate the stigma of the program for working families and others by making food purchases by program recipients look more like ordinary food purchases, thus making it more difficult for other shoppers at grocery stores to identify food stamp recipients' purchases. Still, many of the same officials said that stigma remains an issue.

FNS and the states and localities we visited have taken or suggested a variety of steps to address identified program impediments that may hinder the participation of working families in the Food Stamp Program. These efforts include informing the public about the availability of food stamps, easing the administrative processes, estimating eligibility and the potential size of benefits, and reducing the stigma associated with food stamps, while also adopting strategies to ensure that serving working families does not jeopardize program integrity. Several federal, state, and local efforts are in place to make information about the Food Stamp Program available to potentially eligible working individuals. These include efforts to inform the public through outreach, such as media campaigns, and to reach potential program participants in locations where they are likely to be, such as their places of employment. While officials we spoke with were hopeful about the ability of these efforts to reach the right audience, little outcome data are available to determine which outreach efforts are most effective.
FNS has provided some specific grants to states and organizations to conduct food stamp outreach; however, FNS does not know the total amount of other funds states spend on outreach. In fiscal years 2001 and 2002, FNS awarded competitive outreach grants, funded at 100 percent, to state- and community-based organizations. Some of these grants specifically targeted working families, while others targeted all low-income families. The impact of these grants is largely unknown to date, although FNS is conducting assessments. Because the grants are awarded to address local needs, FNS officials reported that they do not expect major findings on ways to improve service to working families but do expect the results to reveal potentially effective ways to do localized outreach. In addition, FNS recently awarded competitive program participation grants, made available by the 2002 Farm Bill, to agencies and universities. The goal of these grants is to improve the food stamp application process and to identify and eliminate barriers to participation. FNS will also pay for half of any outreach effort funded by the states. Some of these efforts are formalized through an approved outreach plan, and the funds spent on them are reported separately. Other state outreach efforts, however, may be conducted without FNS's knowledge and claimed as an allowable administrative expense but not separately identified as outreach in the states' fiscal reports, according to an FNS official. Table 2 provides more information about the known outreach efforts.

FNS regional offices also conduct program access reviews of selected local offices in all states to determine whether state or local policies and procedures discourage individuals from applying for food stamps and whether local offices have adopted measures to improve customer service. Some of these measures are gathered into a periodic best practices guide published by FNS. The guide contains information about the goal of the practice being tried, the number of places where it is in use, and contact information for a person in these offices. For the most part, however, the guide does not include any evidence that these efforts were successful or any lessons learned from these or other efforts.

FNS is launching a $4 million nationwide radio food stamp promotion campaign to raise awareness about the benefits of the Food Stamp Program. The goals of the campaign are to position the program as a nutrition assistance and work support program and to improve the public's understanding of the program's purpose and who may be eligible, including working families. Transit ads and radio spots have been developed and will be placed in key locations throughout the nation, promoting the national or state toll-free Food Stamp Program numbers, as appropriate. The ads will refer potential food stamp recipients to either FNS's or the state's telephone hotline to receive information about the Food Stamp Program. In 2003, the FNS bilingual (English and Spanish) hotline averaged about 1,900 calls per month, according to FNS. Some states have also launched media campaigns. For example, in New York, as part of its approved outreach plan, efforts were underway to garner interest in the program in the form of a statewide $300,000 media campaign and a $500,000 media campaign for New York City.
In addition, in each of the four states we visited, either the state or a community-based organization had established a hotline to provide broader outreach to potential clients and to make them aware of program eligibility requirements and the documentation they need to apply for benefits. For example, from September 2001 to June 2003, the Community Food Resource Center in New York City fielded over 110,000 calls from 59,000 individuals requesting food stamp assistance. The center reported that these calls resulted in 3,240 new food stamp cases. Other media outreach efforts, both statewide and local, included advertising on television and radio, on posters and shopping bags, and in newspapers and direct-mail supplements. Many of these broad outreach efforts were not specifically targeted to working families, but since some working families may not believe they are eligible for food stamps, these efforts may help to make them aware of the eligibility requirements, promote the image of the Food Stamp Program as a nutrition assistance program, and inform families what they have to do to apply for benefits.

Some efforts are made to reach working families specifically by making applications and informational materials available where eligible working families are likely to go, such as at tax preparation sites, health clinics, supermarkets, WIC centers, and food pantries. For example, FNS has partnered with H&R Block to promote food stamps to those families who qualify for the EITC, which can indicate eligibility for food stamps. FNS officials said this effort resulted in an increased number of calls to their hotline during the tax season. FNS plans to expand this type of partnership further to tax preparers at the Volunteer Income Tax Assistance Program. In Oregon, we spoke with a food stamp worker who is regularly stationed in a local food pantry. She noted that many working people are more comfortable coming to the food pantry to apply for food stamps because government food stamp offices can be off-putting to some people. She estimated that in the last 2 years she has done 1,000 intakes at the food pantry. However, food stamp officials in all four states cited problems with tight state budgets that have resulted in staffing freezes or cuts. As a result, some offices have cut back on such resource-intensive practices.

Food stamp advocates have also worked with employers whose employees would likely be eligible for benefits. For example, in Miami, the Human Services Coalition of Dade County, as part of the Greater Miami Prosperity Campaign, is attempting to reach out to employers of low-income workers to promote certain available work support programs for their employees. The goal is to convince employers that these work supports are a win for employees, because they augment the wages of low-income workers; a win for employers, because they bring stability to the lives of employees, who therefore feel more loyalty to their employer; and a win for the community at large, because more federal dollars are brought into the local economy through the spending of those who receive work supports. Representatives of the coalition and its partners are working with the Greater Miami Chamber of Commerce and are making presentations focused on this message to employers, their low-wage employees, and human resource manager associations around the region.
The coalition representatives ask employers to take three actions to support the campaign: (1) send letters to employees about available work supports; (2) provide information about the EITC, children's health care, and food stamps when sending out copies of government documents, such as Internal Revenue Service W-2 earning statements; and (3) allow coalition workers to prescreen employees at the workplace. The prescreening allows the advocates to more fully explain the eligibility requirements and the steps applicants must take to qualify for benefits. As of August 2003, the advocates had convinced a large Miami-based cruise line to send out information about the work support programs with employees' W-2 forms and pay stubs, and they had also conducted on-site prescreening for employees at several local businesses.

Some state and local programs we visited have also partnered with other assistance programs, such as the EITC, Medicaid, Head Start, the school lunch program, and WIC, to make working and nonworking families aware of their potential eligibility for food stamps. Stakeholders spoke highly of such efforts, and, as previously discussed, our analysis of simulated data shows that the likelihood of working families participating in the Food Stamp Program was much higher if they participated in other assistance programs as well. Finally, our previous work also showed that 26 states are conducting food stamp eligibility interviews in at least some of their Workforce Investment Act one-stop centers. In addition to the outreach efforts that have been tried, one local official suggested that food stamp outreach could be greatly expanded if the state used taxpayer records to identify potentially eligible working families. Adopting such a strategy, however, could be problematic because of the need for state human service agencies and departments of revenue to coordinate with one another, as well as privacy concerns over the use of tax data.

States and local offices we visited have adopted a number of different practices to make administrative processes less burdensome on potential participants. Among the efforts that resonated particularly with working families were those intended to save participants time and to allow them to fulfill program requirements, which ensure that only eligible families receive benefits, in ways that minimize their need to miss work. While officials we spoke with were hopeful about these efforts, little outcome data are available to determine their effectiveness at easing administrative burdens.

States and local offices we visited have adopted a number of different practices to facilitate the food stamp application process. Oregon and Florida have adopted a "no wrong door" policy that allows people to apply for benefits at any food stamp office, and states with Web sites have placed food stamp applications on the Web, which is a requirement of the 2002 Farm Bill. In addition, New York, Oregon, and Massachusetts have shortened and simplified their food stamp applications. While well received, the shortened applications have had some drawbacks. For example, New York officials told us that because their shortened application was for food stamps only, it limited clients' ability to apply for more than one assistance program at the same time. Also, local officials in Oregon told us that their shortened form required their already overburdened caseworkers to spend more time with clients gathering information previously captured on the longer application form.
States are also facilitating the food stamp application process by adopting certain available administrative options that can simplify the application process. For example, when considering the value of a vehicle as an asset, states may choose to substitute the more generous asset rules of other assistance programs for the Food Stamp Program's rules, thereby reducing the amount of documentation collected from individuals applying for more than one program. All four states we visited have adopted similar vehicle policy options. All four states have also adopted an option that allows certain families with incomes up to 200 percent of the poverty level to be automatically eligible for the Food Stamp Program.

Several states have experimented with alternatives to requiring applicants to come to the food stamp office during traditional office hours. Three local offices we visited experimented with offering extended office hours during the week or on Saturdays. State and local officials reported mixed success with these options. For example, officials at one local office in Oregon said that adopting client-friendly policies such as these has led to an increase in the caseload, while local officials in New York and Massachusetts dropped these efforts after few potential clients took advantage of the extended hours. In addition, in an effort to help working families avoid missing work and overcome transportation impediments, Massachusetts adopted liberal rules allowing local offices to interview clients and take food stamp applications over the telephone or by mail if coming to the office would be a hardship for them. Using this practice, clients must still submit the necessary documentation to ensure program integrity. From November 2002 to June 2003, over 5,000 food stamp applications were received through the mail.

Some states have taken advantage of options to simplify ongoing reporting requirements. Typically, working families were expected to report changes in earned income. FNS was concerned that the increase in employment among food stamp households would result in larger and more frequent income fluctuations, which would increase the risk of payment errors and be burdensome for the working poor. As a result of these concerns, FNS established regulations in November 2000 that gave states the option of requiring working families to report changes in income between 6-month certification periods only when a change in income made them ineligible for food stamps. All four of the states we visited chose this option. In addition, through its support of the 2002 Farm Bill amendments, FNS continued to support efforts to further expand states' flexibility to streamline complex rules, simplify program administration, and help ease the transition from welfare to work. For example, the 2002 Farm Bill simplifies ongoing reporting requirements by allowing states to disregard changes in certain amounts deducted for child care expenses, child support payments made, and medical expenses. One of our four states, New York, has chosen this option. Finally, Oregon has simplified ongoing participation by allowing clients to recertify their program eligibility by mail rather than requiring face-to-face interviews.
For families who are leaving cash assistance, the 2002 Farm Bill also gives states the option of facilitating continued program participation by providing 5 months of automatic transitional food stamp benefits when a family leaves the TANF program, without requiring the family to reapply or submit any additional paperwork. Of our four states, Massachusetts, New York, and Oregon have adopted this option. Finally, because application and continuing program participation impediments can vary from state to state and from locality to locality, some states and localities have established working groups of program stakeholders to identify program impediments and to generate ideas on how to remove them. For example, the Oregon Hunger Relief Task Force established a committee of officials from the state Department of Human Services and other state agencies, community advocates, food bank representatives, local office workers, and former recipients to assess program access and participation issues. These efforts have opened the lines of communication and have been deemed successful by both the state officials and the advocates we interviewed.

Some program advocates and officials have taken steps to develop ways to reach people who may have the wrong impression about their eligibility or about the size and value of food stamp benefits. While these tools show promise where they have been put into place, the final outcomes of their use are still largely unknown. FNS's Web site has a prescreening tool that allows individuals to log on from personal computers and, guided by questions regarding family characteristics, determine their potential food stamp eligibility and the size of their benefit. FNS, however, has not yet started to track how often this tool is used. Some experts we spoke with suggested that such Web-based tools are most effective when a third party, such as a program advocate, is available to help potential clients use them. We visited three community-based organizations that had prescreening tools available to help individuals determine their eligibility and estimate their benefits. Project Bread, located in Massachusetts, uses a Web-based tool similar to FNS's, while Florida Impact and the Community Food Resource Center in New York City send staff members with laptops to sites where likely eligible people are found—including emergency food programs or pantries, WIC centers, health clinics, hospital lobbies, unemployment offices, supermarkets, and senior centers—to prescreen potentially eligible clients. The Community Food Resource Center's prescreening tool collects client information, estimates their potential food stamp benefits, and prints out a document guide listing the documents necessary to apply. This estimated benefit information allows clients to decide whether the potential benefit would outweigh the perceived burden of following through with the application process. Table 3 has selected results from these efforts. Officials from these organizations have not studied why potentially eligible people chose not to apply for food stamps. Because some working families believe that their food stamp benefits are likely to be too low to make participation worthwhile, some local offices have taken steps to promote the related benefits of food stamp participation, such as reduced utility bills in some states and categorical eligibility for school meals.
While such efforts may convince potential participants of the value of food stamps, many of the stakeholders we interviewed believe that more people would participate in the program if the minimum food stamp benefit were raised from $10 to at least $25. Doing this, however, would increase program costs, according to FNS.

Program stakeholders are taking steps to address the stigma associated with receiving food stamp benefits, with trips to the "welfare office," and with being a "food stamp recipient." Program officials and stakeholders noted changes that have already been made in the program to limit the stigma and suggested additional changes. While officials we spoke with were hopeful about these efforts, little outcome data are available to determine their effectiveness at reducing stigma. PRWORA mandated that states replace food stamp coupons with the EBT card, a change that introduced a greater element of privacy during food purchases. Many of the stakeholders we spoke with believe the EBT card has helped to reduce the stigma associated with the use of food stamps. Use of the EBT card has also had the effect of reducing food stamp fraud. As of September 2003, 95 percent of all food stamp benefit issuance was provided via the EBT card. Some states and local outreach organizations have taken the additional step of rebranding, or renaming, their EBT cards. Oregon promotes its card as the Oregon Trail Card, and the Community Food Resource Center in New York City promotes the EBT card as "the Food Card." Beyond renaming the card, many officials suggested that stigma could be reduced if the program's name were more suggestive of a nutrition program than of a welfare program. Four states across the nation have already renamed their programs. For example, Michigan has changed the name of its Food Stamp Program to the "food assistance program." FNS is currently considering renaming the program and is consulting with its state partners on what the name should be.

To reinforce the Food Stamp Program's identity as a nutrition program and to eliminate trips to "the welfare office," some officials suggested moving the Food Stamp Program out of the state welfare office and placing it under the health department. However, because states decide where their various nutrition programs reside, this change would be difficult to implement nationally. New York State is testing a model that allows potential applicants to avoid the welfare office. The state has developed Transitional Opportunity Program centers for former TANF recipients who are working and who are still eligible for work supports, such as food stamps. The idea behind these centers is to provide benefits and case management for low-income workers in a friendlier, more positive environment where the focus is on helping low-income workers achieve self-sufficiency. To do so, caseworkers provide active case management, bank officials provide seminars on how to open and manage a bank account, tax preparers discuss the EITC, former welfare recipients discuss paths to success, childcare providers highlight strategies for childcare, and nutritionists discuss healthy eating habits. The case managers are also available to help if a rent or utility emergency arises. Finally, some food stamp researchers have suggested a fundamental reshaping of the way the Food Stamp Program is administered and overseen. They suggested delivering program benefits through the tax code to those who work regularly, much like the EITC program.
Such a change would eliminate the need for working individuals to go to the food stamp office. However, such a fundamental reshaping of the program from food assistance to cash assistance has significant implications for program mission and integrity, targeting intended beneficiaries, and administration and would require significant study and review. State officials believe that food stamp cases with earned income are more complex and error prone than cases with no income. Food stamp quality control data show that in fiscal year 2001 cases with only earned income accounted for about twice the percentage of dollars attributed to errors as cases with no income. These cases are more complex because low-income working families’ incomes tend to fluctuate as the numbers of hours they work rise and fall. Therefore, tracking eligibility status, proper benefit level, and accurate income level is more difficult. This is important to note because officials in three of the four states we visited were supportive of the goal of increasing the participation of working families but were also concerned about the impact these more complex cases could have on their program error rates. Data indicate, however, that the increase in the proportion of working recipients from fiscal years 1997 to 2001 did not unduly affect the program error rate. Food Stamp Program quality control data show that over this same period the percentage of dollar payments made in error to households with only earned income remained about the same while the overall program error rate declined. These data suggest that program integrity can be maintained as states strive to better serve working families. The program simplification options that many states have adopted also have the potential to reduce program error while easing the administrative burden on states and on working families. Some of the options ease the administrative burdens on families by reducing the number of times they have to report changes in their cases, in turn reducing the number of potential errors that can occur responding to those changes. Other options ease program participation by simplifying the eligibility determination process. By adopting these options, states are hoping to reduce program errors while better serving working families. Passage of the 1996 welfare reform law changed the safety net landscape for families by placing greater emphasis on work and self-sufficiency. In this new environment, the Food Stamp Program can play an important role in supporting low-income working families, either in their attempt to avoid receiving cash assistance or as they leave cash assistance and strive for self-sufficiency. Current efforts focus attention and resources on increasing participation among all eligible families, particularly working families. Yet, almost half of those working families that are likely eligible to receive benefits do not participate in the program. Many of the federal, state, and local officials we spoke with believe the program could do more to serve eligible working families, and FNS’s goal is to make it easier for low-income and working families to access the benefits to which they are entitled. We observed a number of initiatives that show promise in addressing one or more of the reasons why working families do not participate in the program. Most of the initiatives we observed have only been tried on a small scale at various scattered locations. 
While we know many efforts are being undertaken, a complete picture is unavailable because FNS does not systematically track state activities, nor does it require that states collect and evaluate outcome data on their own efforts. Although FNS is beginning to assess the outcomes of some of the outreach grant efforts, not enough is currently known about all the practices being tried and whether they have achieved their goals. In addition, in those cases where initiatives have achieved positive outcomes, there is no systematic vehicle for disseminating lessons-learned to other programs or community-based organizations interested in taking similar steps. Efforts to systematically collect and report simple outcome data on such initiatives could be a significant resource for other states that want to increase the food stamp participation among their eligible working families. However, despite FNS’s and states’ best efforts, some eligible working families may continue to choose not to participate in the Food Stamp Program and may have good reasons for making that choice. Other eligible families could benefit significantly if they did participate. Some of the factors that influence a family’s decision about whether to apply for food stamps are unrelated to the program’s design. Some families may make a personal decision that the effort and cost to them of applying for and receiving benefits, including complying with the measures in place to promote program integrity, is not worth the ultimate gain. This seems to be especially true for families with higher earnings. Each family must make its own personal calculation based on its unique circumstances, and some families will likely continue to opt out of receiving benefits. To better target federal, state, and local outreach efforts; maximize the benefits of the available outreach dollars; and identify and eliminate impediments to food stamp participation, we recommend that the Secretary of Agriculture direct FNS to encourage states to collect and report on the results of their outreach and other efforts to increase participation among eligible working families and disseminate the lessons learned from those efforts to other states and localities. We provided a draft of this report to the U.S. Department of Agriculture for review and comment. On February 9, 2004, we met with FNS officials, including the acting deputy administrator for the Food Stamp Program, to get their comments. The officials said that they generally agreed with our findings, conclusions, and recommendations. FNS also provided us with technical comments, which we incorporated where appropriate. The FNS officials reiterated their commitment to increase working families’ participation in the Food Stamp Program and suggested that we provide a fuller recognition of their efforts to increase this participation. The officials said they believe their ongoing efforts to better inform the public about food stamp availability and the program’s eligibility criteria are contributing significantly to the overall goal of increasing program participation. In addition, the officials highlighted their efforts to work with state and local food stamp agencies and other partners—such as nonprofit organizations, retailers, and employers—to assist in developing and implementing outreach strategies. The officials also cited their efforts to encourage the states to simplify the administrative process and adopt user friendly options. 
In addition, we were asked to highlight additional examples of FNS’s efforts, and we did, where appropriate. Agency officials agreed that our recommendation that FNS track outreach activities and collect outcome data could provide valuable information. However, the officials expressed concern that imposing additional data collection, reporting, and evaluation requirements could be seen as burdensome by states or local agencies and may discourage some from undertaking desirable, but optional, activities like outreach. We agree that requiring rigorous research and evaluation of all outreach efforts would be costly and difficult. However, we believe encouraging states to report simple and uniform outcome data on the results of USDA-funded efforts could be a cost-effective means of collecting information of value to others attempting to increase working families’ participation in the program. For efforts that are funded locally, USDA could provide a suggested template of data to collect so that similar data elements would be gathered across various locations. For example, the sites we visited did not systematically collect similar information on the number of working families reached by different activities and the disposition of their cases. USDA could also use cost-effective means of sharing lessons-learned with states and localities by posting this information on its Web site. We are sending copies of this report to the Secretary of Agriculture; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staffs have any questions about this report. Other major contributors to this report are listed in appendix III. Our analysis relied on simulated data produced by Mathematica Policy Research, Inc., based on the March 2001 Current Population Survey (CPS). The simulated data were used to establish a universe of all working families that are likely eligible to receive food stamps for the purpose of comparing the characteristics of participating working families to likely eligible nonparticipating working families. Mathematica created this simulated data, in part, because comparisons between the CPS estimates of Food Stamp Program participation and administrative data from the program suggest that program participation is underreported in the CPS, and eligibility for program benefits cannot be directly observed or reported in existing survey data. To complete the simulation, Mathematica assigned individuals in each CPS household to one or more “food stamp units.” For each food stamp unit, Mathematica used CPS data and information from other sources to assign simulated values for variables such as monthly shelter expenses and monthly earned income. Mathematica Policy Research, Inc., then tested each food stamp unit to assign the unit as eligible or ineligible to receive food stamps. The cumulative characteristics of all households with eligible food stamp units, as determined by Mathematica’s simulated data, are shown in table 4, and include income-related and demographic factors associated with the households and variables that reflect whether anyone in the household was participating in other government assistance programs. 
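To make the eligibility-testing step described above more concrete, the following sketch shows the general form of such a test. It is an illustration only: it is not Mathematica's actual simulation logic, and the screens shown (a gross income test at 130 percent of the federal poverty guideline, a net income test at 100 percent, and a resource limit, here assumed to be $2,000) are a simplified rendering of the program's standard rules of the period, with the dollar inputs treated as placeholders rather than official figures.

# Illustrative sketch only: a simplified eligibility screen of the kind applied to each
# simulated food stamp unit. This is NOT Mathematica Policy Research's actual procedure;
# the thresholds and example values are simplified placeholders.

def unit_appears_eligible(gross_monthly_income, net_monthly_income,
                          countable_resources, monthly_poverty_guideline,
                          resource_limit=2000):
    """Return True if a simulated food stamp unit passes three basic screens.

    monthly_poverty_guideline: the federal poverty guideline for the unit's size,
    expressed as a monthly amount (it varies by year and household size).
    resource_limit: an assumed standard resource limit; the higher limits that
    apply to some households (for example, those with elderly members) are ignored.
    """
    passes_gross_test = gross_monthly_income <= 1.30 * monthly_poverty_guideline
    passes_net_test = net_monthly_income <= 1.00 * monthly_poverty_guideline
    passes_resource_test = countable_resources <= resource_limit
    return passes_gross_test and passes_net_test and passes_resource_test

# Hypothetical example: a working household with $1,200 in gross monthly income,
# $950 in net monthly income, and $500 in countable resources, screened against a
# placeholder monthly poverty guideline of $1,219.
print(unit_appears_eligible(1200, 950, 500, 1219))  # prints True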
According to table 4, on average, the households with earnings—working families—that were deemed eligible to participate in the Food Stamp Program were eligible to receive $153 in food stamps per month. The monthly shelter expenses of these families averaged $508, and the monthly income for these families averaged $956. Slightly more than one-third (37 percent) of the families reported some nonearned income, and a similar percentage (34 percent) of the families owned, rather than rented, their homes or dwellings. The remaining results in the table can be read similarly.

In addition to determining whether a unit within a household is eligible to receive food stamps, Mathematica Policy Research, Inc., assigned, based on known participation patterns, whether eligible food stamp units were receiving food stamp benefits as of a fixed reference month. However, we could not use Mathematica's simulated variable that identifies units receiving food stamp benefits to conduct the substance of our analysis, which was primarily focused on the differences between participating and likely eligible nonparticipating food stamp units. This is because Mathematica's procedures were not amenable to multivariate procedures that would allow an estimate of the "net" effects of different factors on Food Stamp Program participation, for example, the effect that food stamp benefit amounts have on the likelihood of participating after the associations of benefit amounts and participation likelihoods with other potentially confounding factors are taken into account. Instead, to conduct this analysis, we relied on CPS estimates of participating working households and compared those households with those that were eligible, but not participating, based on Mathematica's work. Given that, it should be recognized that the results below are affected by our choice to use CPS's variable to identify participants and Mathematica's variable to identify eligibility.

Among households with working families, an estimated 26 percent of the households with an eligible unit (as defined by Mathematica) were identified as participating by CPS's variable. By contrast, an estimated 31 percent were identified as participating by Mathematica's simulated variable. This difference somewhat masks the extent of the discord between the two variables; an estimated 38 percent of all households that Mathematica's simulation indicates as participating were not coded as participating by CPS, and an estimated 2 percent of the households that Mathematica's simulation indicates as nonparticipating were coded as participating in CPS. Additionally, an estimated 30 percent of the households that CPS recorded as participating were deemed ineligible to participate by Mathematica's simulation process. Still, the work that went into Mathematica's simulation gives us confidence that the results presented in table 5 are a reasonable approximation of the differences in characteristics between participating and nonparticipating eligible working families. It is worth noting that variations from the procedures Mathematica used to estimate eligibility could yield results that differ from our analysis, since our work relies on Mathematica's simulation of eligibility.
To estimate the net effect of different factors affecting the likelihood of participating, we used logistic regression models that produce odds ratios, which indicate how the odds on participating differed across different types of households or across various levels of continuous variables (such as income or the value of the food stamp benefits for which households were eligible) associated with each unit. Overall, the odds on participating were 0.35; that is, 35 eligible households participated for every 100 that did not. These odds differed markedly across different households, however, and the odds ratios from bivariate models shown in table 5 indicate the effects of various factors on the odds on eligible working families participating in the Food Stamp Program when each factor is considered in isolation, or independently, from every other factor. Model 1 and Model 2 test for the effect of each characteristic using multivariate models, in order to control for other factors when measuring whether any single factor affects the likelihood of participation.

These bivariate results demonstrate that, based on our estimates, food stamp participation was more likely in eligible households in which the benefits of participation were greater; that is, each $100 increase in the monthly benefits for which household members were eligible increased the odds on participating by a factor of 1.31, or by 31 percent. Likely eligible households with higher shelter expenses were, at the same time, less likely to participate; each $100 increase in monthly shelter expenses decreased the odds on participating by a factor of 0.91. While households with higher incomes were not significantly more or less likely to participate than households with lower incomes, households with any nonearned income were 2.6 times as likely to participate as those without any nonearned income. Larger households were also more likely to participate than smaller ones (i.e., every additional person in the eligible household increases the odds on participating by a factor of 1.1). While the presence of elderly or married individuals in a household reduces the odds on participation by roughly half, the presence of young children (under age 5) in the household nearly doubles the odds on participating. Households consisting of all black members (including black Hispanics) were nearly twice as likely to participate as families with all white (non-Hispanic) members, though there were no significant differences between households consisting of other races and households that were all white. Households with a noncitizen unit head, and households in owned rather than rented dwellings, were also less likely to be participating in food stamps than other households.

Participation in the Food Stamp Program was also greatly affected by whether the persons in the eligible household participated in other programs. That is, the odds on participating were over 10 times higher for working households that received Medicaid benefits than for those that did not, over six times higher for those that received energy assistance, and over four times higher for households in which someone was receiving job training. Similarly, the odds on participating in the Food Stamp Program were about three times higher for working households participating in free lunch programs or in WIC than for those not participating in those programs, and roughly twice as great for those that received any SSI benefits.
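To illustrate how these odds ratios can be read, using only the figures reported above and purely to show the arithmetic, the bivariate relationship between benefit amounts and participation can be written as a logistic regression in which p is the probability that an eligible working household participates:

\[
\log\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 \left(\frac{\text{monthly benefit}}{\$100}\right), \qquad e^{\beta_1} \approx 1.31 .
\]

An odds ratio multiplies the odds. For example, starting from the overall odds on participating of 0.35, a household eligible for $100 more in monthly benefits would have estimated odds of about 0.35 × 1.31 ≈ 0.46, or roughly 46 participants for every 100 nonparticipants. This is an illustration of the arithmetic only; the 0.35 figure is the overall odds reported above, not the fitted baseline of the model.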
The first multivariate model (Model 1) provides estimates of the effects of the various socioeconomic and demographic factors when they are estimated simultaneously, using a multivariate logistic regression model. While the odds ratios estimating the different effect sizes change modestly in some cases, most of the factors that appeared significant when they were estimated from bivariate models remain significant when they are estimated in a multivariate context and the effects of other factors are controlled. Model 2 of the multivariate analysis shows the estimates of the effects of participating in other programs, net of each other and net of the effects of the socioeconomic and demographic factors. Here too, most of these effects remain consistent with what was found in the bivariate analyses, except that receiving SSI does not appear to affect Food Stamp Program participation net of the other factors and that, when other factors are controlled, households involved in the Children's Health Insurance Program appear to be only a third as likely to participate in the Food Stamp Program as households that are not involved in that program. While our estimates of the effects of participating in other programs on food stamp participation are somewhat attenuated, or diminished, when they are estimated simultaneously rather than independently of one another, it remains the case that households that include someone who receives Medicaid, energy assistance, or job training are the most likely to receive food stamps. We believe that these multivariate estimates of the effects of program participation are, by virtue of being estimated simultaneously and while controlling for the socioeconomic and demographic characteristics of the eligible households, somewhat better estimates than those obtained in our bivariate analyses.

Appendix II: Summary of Farm Bill Provisions

Treats legally obligated child support payments to a nonhousehold member as an income exclusion rather than a deduction.

Excludes types of income that are not used to determine eligibility for TANF or Medicaid, with some exceptions.

Simplified definition of resources (option): Excludes certain types of resources that the state does not count for TANF or Medicaid.

Simplified determination of housing costs (option): Allows states to use a standard deduction from income of $143 per month for homeless households with some shelter expenses.

Disregard reported changes in deductions during certification periods, except for changes associated with a new residence or earned income, until the next recertification.

Expand simplified/semiannual reporting systems to most households, not just those with earned income.

Transitional food stamps for families moving from welfare (option): Continue food stamp benefits to households for up to 5 months after they lose TANF cash assistance.

Simplifies the Standard Utility Allowance to promote its use.

Pilot project to assess the feasibility of issuing standardized rather than individual benefits to certain residents of group homes.

Require state agencies that have a Web site to post applications on these sites.

Authorizes up to $5 million annually to pay for projects to improve access for food stamp-eligible households or to develop and implement simplified application and eligibility systems.

Reform of quality control (QC) system: This provision makes substantial changes to the QC system that measures states' payment accuracy in issuing food stamp benefits. Only those states with persistently high error rates would face liabilities.
Creates a performance system that will award $48 million in bonuses each year to states with high or improved performance for actions taken to correct errors, reduce the rates of error, and improve eligibility determinations.

This provision restores food stamp eligibility on certain dates to qualified aliens who are otherwise eligible and meet criteria laid out in the legislation.

Bob Kolasky and Thaddeus Hackworth also made significant contributions to this report. In addition, Paula Bonin, Robert DeRoy, Kevin Jackson, Beverly Ross, Sidney Schwartz, and Douglas Sloane produced our estimates of participation among working families, and Corinna Nicolaou assisted in the message and report development.

Welfare Reform: Information on Changing Labor Market and State Fiscal Conditions. GAO-03-977. Washington, D.C.: July 15, 2003.

Food Stamp Employment and Training Program: Better Data Needed to Understand Who Is Served and What the Program Achieves. GAO-03-388. Washington, D.C.: March 12, 2003.

Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002.

Welfare Reform: States Provide TANF-Funded Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-564. Washington, D.C.: April 5, 2002.

Food Stamp Program: States' Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002.

Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002.

Earned Income Tax Credit Eligibility and Participation. GAO-02-290R. Washington, D.C.: December 14, 2001.

Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001.

Food Assistance: Options for Improving Nutrition for Older Americans. GAO/RCED-00-238. Washington, D.C.: August 17, 2000.

Food Stamp Program: Better Use of Electronic Data Could Result in Disqualifying More Recipients Who Traffic Benefits. GAO/RCED-00-61. Washington, D.C.: March 7, 2000.

Food Stamp Program: Various Factors Have Led to Declining Participation. GAO/RCED-99-185. Washington, D.C.: July 1999.

Eligible working families are believed to participate in the Food Stamp Program at a lower rate than the eligible population as a whole. As a result, many federal, state, and local officials believe the program is not living up to its potential as a component of the nation's work support system. This report examines (1) what proportion of eligible working families participate in the program and what family characteristics are associated with a family's participation; (2) what factors may be acting as impediments to a working family's decision to participate in the program; and (3) what steps are being taken, or have been suggested, to help eligible low-income working families participate in the program while ensuring program integrity. In 2001, an estimated 52 percent of eligible individuals in working families participated in the Food Stamp Program, compared with about 70 percent of eligible members of nonworking families. Participating working families are more likely to receive greater food stamp benefit amounts than those eligible working families that do not participate.
Also, participating working families were more likely to participate in other government assistance programs and to rent rather than own their homes. Factors that can impede an eligible working family's participation in the program include whether the family is aware of the program's existence and eligibility criteria and whether the family considers the program's administrative process, including having to make frequent trips to a food stamp office during working hours and to provide documentation of income, overly burdensome. However, there are some potentially significant benefits, including error and fraud prevention, to some of the administrative requirements. Evidence also suggests that some families weigh the perceived burdens of participation against the benefits of doing so and perceive a stigma attached to receiving food stamps. The Food and Nutrition Service (FNS) and several states and localities have taken or suggested steps to address the impediments to participation in the program for working families, while also considering ways to balance easier participation with program integrity. These efforts include increasing food stamp outreach, adopting new administrative processes to ease participation and reduce program error, developing tools to help families estimate food stamp benefit amounts, and renaming the program to reduce the stigma associated with food stamps. Compiling a complete picture of these steps was not possible, however, because FNS does not systematically track these efforts, and the outcomes of their use are still largely unknown.
Fallon NAS was constructed in the 1940s on land that previously had been farmed using water provided by the Bureau of Reclamation's Newlands Reclamation Project. Prior to the project, which was authorized in 1903, early settlers irrigated about 20,000 acres using simple diversions from the Truckee and Carson rivers. The Newlands project nearly quadrupled the amount of irrigated land to 78,000 acres, and the land surrounding the airfield has been irrigated farmland since. In the 1950s, the Navy obtained, as a buffer against encroachment, land surrounding the airfield that had been irrigated farmland. It has since leased the bulk of that land to farmers. Fallon NAS officials believe that continued use of the land for agriculture is of value to the local community as well as to the air station. They point out that the City of Fallon and Churchill County are concerned that any reduction in Fallon NAS' irrigation could have a negative impact on the recharging of the underlying aquifer, lead to the spread of noxious weeds in fields, and adversely affect the economics of neighboring ranches and farms.

The Navy currently holds water rights under the Newlands project for approximately 2,900 acres of the land at Fallon NAS. Of this acreage, the Navy has active water rights to about 1,900 acres of land. Water rights are attached to specific parcels of land, and Fallon NAS is entitled to 3.5 acre-feet of water per acre of water-righted land from the Newlands project. An acre-foot is the volume of water sufficient to cover an acre of land to the depth of 1 foot, or about 325,900 gallons. The water rights for the remaining 1,000 acres are inactive. The active water rights, which would equal about 2.2 billion gallons, are used to obtain irrigation water to support the Navy's 3,595-acre greenbelt surrounding Fallon NAS' airstrip areas. The greenbelt has consumed an average of 1.6 billion gallons of this irrigation water each year since 1990. This figure includes drought years in which less water than the normal allocation was available and other years in which water over and above the acreage's entitlement was made available. As can be seen in figure 1, about a third of the greenbelt acreage lies inside the runway protection zone.

Under Public Law 101-618, enacted in 1990, officials at Fallon NAS were required to develop an alternative land management plan that would control dust, provide for fire abatement and safety, and control damage to aircraft from foreign objects, while at the same time reducing the use of irrigation water. The law also required Fallon NAS to select and implement land management plans without impairing the safety of air operations. Under this act, the Navy has discretion to determine what constitutes operational air safety for Fallon NAS. In addition, the Secretary of the Navy was required to consult with the Secretary of Agriculture and other interested parties to fund and implement a demonstration project and test site at Fallon NAS for the cultivation and development of grasses, shrubs, and other native plant species. The project's goal was to help with the restoration of previously irrigated farmland in the Newlands project area to a stable and ecologically appropriate dryland condition. In responding to the act's requirements, the Navy studied various land management strategies, consulted with the Secretary of Agriculture and interested parties, and selected a strategy for the greenbelt that combines conventional farming with water conservation practices.
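As a rough check on the water figures cited above, and assuming that the full 3.5 acre-foot entitlement applies to each of the roughly 1,900 actively water-righted acres, the active entitlement converts from acre-feet to gallons as follows:

\[
1{,}900 \text{ acres} \times 3.5\ \frac{\text{acre-feet}}{\text{acre}} = 6{,}650 \text{ acre-feet}, \qquad
6{,}650 \text{ acre-feet} \times 325{,}900\ \frac{\text{gallons}}{\text{acre-foot}} \approx 2.17 \text{ billion gallons}.
\]

This is consistent with the approximately 2.2 billion gallons cited above; the greenbelt's average annual use of 1.6 billion gallons is therefore roughly three-quarters of the active entitlement.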
Fallon NAS officials have started to implement this strategy for the runway protection zone. When fully implemented, the strategy would use approximately 1.4 billion gallons of water per year, somewhat of a decrease from the average of 1.6 billion gallons used annually in recent years. Fallon NAS is governed by aviation safety and operational standards established by DOD for runway protection zones. DOD's standards for military facilities and the Federal Aviation Administration's (FAA) standards for commercial airports require runway protection zones to protect lives and property. Under these standards, airports can obtain sufficient authority to restrict the use of the land for the runway protection zones in three primary ways. First, an airport can purchase the approach areas outright. Second, an airport can seek zoning requirements to control the way land owned by others is used. Third, an airport can purchase easements proscribing the incompatible use of land owned by others. Outright ownership is preferable because it gives an airport maximum control. It is DOD's and FAA's policy to oppose incompatible land uses that are proposed for property within the runway protection zones. Incompatible land uses include residences and places of public assembly such as churches, schools, hospitals, office buildings, and shopping centers. Compatible land uses within the runway protection zones are generally uses such as agriculture or golf courses that do not involve concentrations of people or the construction of buildings or other structures. DOD and FAA also allow other land uses that do not attract wildlife and that do not interfere with navigational aids. Neither policy requires the establishment of a greenbelt. In arriving at the land management strategy for Fallon NAS, the Navy considered three alternatives in detail. Each involved continued irrigation of land in Fallon NAS' greenbelt. As many as 11 different land management strategies were identified by Fallon NAS officials at the outset. Three strategies were eliminated from consideration before the initial screening was conducted. These three included covering the greenbelt with asphalt, cement, or rocks, or allowing the irrigated fields to go fallow. These strategies were eliminated because the officials believed that they would be environmentally or economically unacceptable or would cause unacceptable operational or safety impairments. They also felt that the strategies would be expensive to maintain and would not provide a “soft” landing for any aircraft accident. The remaining eight land management strategies were subjected to an initial screening on the basis of how they would contribute to the Navy's policy of zero accidental aircraft mishaps and at the same time fulfill the requirements of P.L. 101-618. Four evaluation criteria were used to assess the viability of the strategies: controlling dust and damage from foreign objects, including bird strikes; minimizing fire hazards; establishing a high probability of achieving safety objectives and contributing to zero-mishap management; and reducing the direct surface deliveries of irrigation water. Of the eight land management strategies, five were eliminated because Fallon NAS officials believed those strategies did not meet the evaluation criteria. These five strategies ranged from changing the plants allowed to be grown in the area to using drainwater for irrigation. The remaining three land use strategies were then subjected to detailed consideration. 
Table 1 presents a comparison of the features of the three strategies Fallon NAS officials considered in detail. The first and second strategies considered in detail included water conservation practices. The methods considered for saving water included lining canals, leveling fields for proper drainage, establishing windbreaks, and improving irrigation scheduling. The third strategy would not have required any changes to the way Fallon NAS officials had been managing the greenbelt land but would have reduced the use of water by irrigating fewer acres. Fallon NAS officials believed that, over time, this strategy would result in land degradation and that there was a low probability that it would control safety hazards such as dust, fire, and damage to aircraft from foreign objects and bird strikes. In considering these strategies, Fallon NAS officials made no distinction between the greenbelt areas that lie within the runway protection zone and the areas that lie outside the zone. Approximately 1,145 acres of the greenbelt lie within the runway protection zone, while 2,450 acres are outside of it. We found no analysis that had determined whether the 2,450 acres of the greenbelt outside the runway protection zone required the same level of prevention of foreign objects, bird strikes, or dust as the 1,145 acres within the zone. Fallon NAS officials confirmed that no such distinction had been made in conducting their analyses. Fallon NAS officials selected the first strategy: conventional farming with water conservation practices. At the time, these officials believed that the advantages of this strategy were the very high probability that it would satisfy the safety goals for the greenbelt for the long term and provide moderate water savings. They believed that the disadvantage would be the substantial capital, operations, and maintenance costs of the water conservation methods. When fully implemented, the chosen strategy would encompass 1,914 water-righted acres of land, using approximately 1.4 billion gallons of water per year. Navy officials believed that the plan would be costly to implement because it included lining irrigation canals with concrete, leveling fields for proper drainage, and other measures. According to Navy officials, the total cost to implement all these measures could be as much as $3.5 million. Since selecting the strategy of conventional farming with water conservation practices in 1995, Fallon NAS officials have undertaken efforts to implement it. As of May 1999, Fallon NAS had lined 16,419 linear feet of irrigation ditches and leveled 347 acres of fields at a cost of about $655,000. This cost was in addition to an estimated $817,000 spent on studies and pilot projects. According to the officials, the implementation of this strategy has stalled because of excessive costs and a shortage of funds. In 1998, Fallon NAS advertised a contract to line another 45,000 linear feet of ditches with concrete and level another 800 acres of fields. Fallon NAS originally estimated the cost of the additional work to be $1.4 million, but the lowest bid it received for the work was $1.9 million. According to Fallon NAS officials, because of the excessive costs, a shortage of funds, and concern that the work would save what they believed would be a relatively small amount of water, this contract was not awarded. Hence, Fallon NAS' chosen land management strategy is not currently being fully implemented. 
After the completion of our field work, Fallon NAS officials took action to comply with the Fiscal Year 2000 National Defense Authorization Act, which was enacted on October 5, 1999. The act included a provision concerning water usage at Fallon NAS. To comply with their understanding of the law, Fallon NAS officials informed us that they have decided to reduce the irrigated land by about 700 acres. They will cease irrigation in areas farthest from the airfield and the runway protection zone. Fallon NAS officials expressed misgivings about this action but said that it would allow them to comply with the new law. While they pointed out that the affected land is not "technically within the runway protection zones," they were concerned that "improper management could impair operational safety and create negative environmental impacts" and that Fallon NAS may incur added costs "to properly manage the land for fire, weed and dust control." They also expressed concern about possible "long-term degradation of the land." On balance, however, they said that the strategy meets the requirement of the new law, and they also pointed out that the action will serve as "an excellent pilot study" of what happens when irrigation ceases. The land management strategies varied at the seven other military facilities and commercial airports we visited. All were located in environments similar to Fallon NAS'. Two military facilities used greenbelts, while the other five did not. Officials at all seven facilities said their current land use strategies provided a safe environment for their aircraft operations. The strategies varied because of differences in land formation, history, access to established irrigation facilities, and ownership. For example, at the two Navy and one Marine Corps facilities we visited, the government owned outright the areas surrounding the airfields as it does at Fallon NAS. According to Navy officials, it has been the Navy's practice to purchase land surrounding airfields to reduce possible encroachment and, where possible, to lease this land for agricultural purposes, an activity compatible with aircraft operations. One of the two Navy facilities and the one Marine Corps facility we visited had greenbelts that were being farmed. Like Fallon NAS, Lemoore NAS in Lemoore, California, and Yuma Marine Corps Air Station in Yuma, Arizona, were constructed on land that was originally used for irrigated farming. These three facilities maintain agricultural outlease programs through which the Navy or Marine Corps leases the land adjacent to the airfields to farmers. The farmers maintain the land and grow the irrigated crops specified by the leases. The third naval location we visited, China Lake Naval Air Weapons Station in Ridgecrest, California, does not have a greenbelt and does not plan to have one. The station was constructed in a desert area where crops are not grown and where the vast, sparsely populated area is considered to be an ideal location for testing weapons and conducting research and development. Neither of the two Air Force bases nor the two commercial airports we visited had an agricultural program like the Navy and Marine Corps facilities', nor did they try to maintain green areas around their runways and taxiways. None has returned substantial acreage of well-established agricultural land to native conditions. Officials from these facilities told us that their research had not uncovered any reports equating the safety of air operations with vegetation at the end of runways.
In addition, they said that the cost to maintain and water green areas in the absence of available irrigation facilities would be substantial. At present, their water usage for the runway protection zones is minimal. Officials at the facilities we visited expressed a strong desire to hold down their water costs and believed that maintaining green areas around runways was inconsistent with this objective. For example, Sky Harbor International Airport in Phoenix, Arizona, used rock to landscape areas surrounding the airport that were once irrigated. Additionally, Sky Harbor officials have converted a significant amount of the airport's surrounding area to desert landscaping and have adopted other water conservation measures such as using a computerized irrigation system. According to the officials, these efforts helped the airport save about 70 million gallons of water during 1997. Similarly, at Nellis Air Force Base in Las Vegas, Nevada, the terrain around the runways is mostly disturbed desert (regrown native plants, thistle, or weeds). Because of the base's increased emphasis on desert landscaping, water consumption has dropped by almost half, from about 1.4 billion gallons of water in fiscal year 1996 to about 760 million gallons of water in fiscal year 1999. The facilities we visited without green areas around their runways used several techniques to maintain their land for safety purposes. These techniques include (1) mowing their fields to maintain them as open space, (2) covering specific areas within and surrounding the airstrip with asphalt or cement, and (3) allowing their fields to go fallow and applying a soil cement sealant in strategic locations to control dust and damage to aircraft from foreign objects. Fallon NAS officials said that, while they are aware of these other land management strategies, to date they have not studied them in detail. More detailed information on the land use practices of the five military facilities and two commercial airports we visited is included in appendix I. The Navy chose a land management strategy for the runway protection zone at Fallon NAS that is water intensive in an area where water is a scarce resource. Other strategies used in similar environments use less water while at the same time providing safety for air operations. Navy officials at Fallon NAS are aware of many of these other land management strategies but, to date, have not studied them in detail. Nor have they considered adopting different strategies for specific areas within and beyond the runway protection zone. In light of the congressional concern over water consumption in this desert area as expressed in statute and in light of the techniques used at other desert airfields that are less water intensive, we recommend that the Navy consider these techniques for Fallon NAS. Specifically, the Navy should consider its earlier-identified strategies and adopt specific actions that would achieve safety and operational requirements while reducing water use at the air station. It should consider adopting different strategies that recognize the distinction between areas within the runway protection zone and those beyond the zone. The results of the Navy's decision to stop irrigating 700 acres of previously irrigated land should be closely monitored to determine whether this strategy can be successfully applied to additional land at Fallon NAS. We provided the Department of Defense with a draft of this report for its review and comment. DOD's written comments are in appendix II.
DOD generally concurred with the draft report's recommendation. However, DOD expressed concern that the report did not accurately provide detailed information on the water usage conditions at Fallon NAS as compared with other civilian and military installations and that the report did not fully convey the specific actions taken by the Navy to comply with the requirements of congressional direction. DOD also stated that the report did not mention the value of the Navy's use of irrigation water to the local community for agriculture and to enhancement of the safety of the Navy's operations. We have provided additional information in the report to address DOD's concerns. DOD also provided technical changes, which were made as appropriate. We performed our review from May through December 1999 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix III. We will provide copies of this report to the Honorable William Cohen, Secretary of Defense; the Honorable Richard Danzig, Secretary of the Navy; and to representatives of McCarran International Airport, Sky Harbor International Airport, and the U.S. Department of Transportation. We will also make copies available to others on request. If you or your staff have any questions, please contact me at (202) 512-3841 or Brad Hathaway at (202) 512-4329. Key contributors to this report are listed in appendix IV. The decision to construct Lemoore Naval Air Station (NAS) was made in October 1954 when it became clear that Moffett Field NAS near San Francisco could not be expanded because of urban encroachment. Lemoore was chosen because of its central location, good weather for flying, relatively inexpensive land, and nearby accommodations. At the time of this decision, the land chosen for the air station and the surrounding area was agricultural, as it remains today. Lemoore still has room to expand beyond its two parallel runways, and Navy officials told us that, if necessary, they could add another runway and an additional 265 F/A-18 aircraft to the 252 now stationed there. Reeves Field at Lemoore NAS has two parallel 13,500-foot runways that are 4,600 feet apart. (See fig. 2.) According to Navy officials, the runways are offset, with hangars, fueling, fire stations, towers, and parking located between them. The shoulders of the runways are paved. Outside of the paved areas is a 10-foot-wide strip that is periodically sprayed with herbicide to control vegetation. At the end of each of the two runways is a 1,000-foot paved overrun and an additional 1,000-by-3,000-foot mowed grass overrun. The remainder of the areas around the airfield are described as grassland that is kept mowed. Beyond the overruns and to either side of the runways are cultivated fields. Approximately 11,000 acres of privately owned farmland to the west of the station are under airspace easement. The terrain throughout Lemoore NAS is best typified as flat or level. Lemoore NAS has one of the largest agricultural outlease programs in the Department of Defense (DOD). It currently leases nearly 14,000 acres of agricultural land, which brings in between $1.5 million and $2.0 million annually. These funds support conservation and natural resource activities at Lemoore NAS and other Navy locations. The water for Lemoore's domestic and agricultural uses is supplied by the Westlands Water District via the California Aqueduct, which brings water from Shasta Lake behind Shasta Dam in northern California. 
This water supply is generally adequate in quantity and quality. Freshwater can also be obtained from a well system on the base. In 1928, the federal government leased land for a base from Yuma County, Arizona. When the United States entered World War II, an air base was erected. At the end of the war, all flight activity at Yuma ceased, and the area was partially reclaimed by the desert. During the period of inactivity, the base was controlled successively by the War Assets Administration, the U.S. Corps of Engineers, and the Department of the Interior's Bureau of Reclamation, which used it as a headquarters for its irrigation projects. In 1951, the Air Force reactivated the base. The facility was signed over to the Navy in 1959 and was designated a Marine Corps Auxiliary Air Station. In 1962, the designation was changed to Marine Corps Air Station. At the Yuma Marine Corps Air Station, the Corps owns the land, which encompasses four runways, and has granted permission to the City of Yuma to operate a civilian international airport in conjunction with the air activities of the military. (See fig. 3.) Land use documents for 1994 (the latest available) indicate that military air operations accounted for nearly two-thirds (about 95,000) of the total of 149,485 takeoffs and landings at the facility. The areas just adjacent to and between the runways are maintained using different methods. The land just adjacent to the runway is mowed. In addition, there is some use of herbicide to destroy weeds. The land between the two original 1943 runways is covered with a very light coat of asphalt. The land between the newer runways built in 1962 is maintained mainly by mowing and using herbicides. The air station is located on the southern side of Yuma and is surrounded mainly by agricultural fields, with smaller sections of open space (disturbed and undisturbed desert) and business areas containing commercial and industrial facilities. Marine Corps and city officials have agreed to use the surrounding land for agricultural production or light industry because of the compatibility of those uses with the operations of the air station. The Marine Corps leases about 90 acres of this land to local farmers. Leases for this land provide between $18,000 and $60,000 in revenues annually. The city and the air station receive their water from the neighboring Colorado River. In 1943, adequate facilities were needed for the testing and evaluation of rockets being developed for the Navy by the California Institute of Technology. The Navy also needed a new proving ground for all aviation ordnance. The Naval Ordnance Test Station (NOTS) was established in response to those needs in November 1943, forming the foundations of China Lake Naval Air Weapons Station near Ridgecrest, California. An auxiliary field was established near Inyokern, and the first facilities for China Lake were located there while the main field was being constructed. Weapons testing began at China Lake less than a month after the station's formal establishment, and by mid-1945, NOTS' aviation assets had been transferred to the new airfield, Armitage Field, located at China Lake. The vast, sparsely populated desert around China Lake and Inyokern, with near-perfect flying weather year-round and practically unlimited visibility, was considered to be an ideal location for testing weapons and for research and development purposes. The China Lake Naval Air Weapons Station operates its airstrips in desert terrain.
At the end of each of China Lake's three runways is a 1,000-foot clear zone. (See fig. 4.) The runways are approximately 9,100 feet long. The land between the runways is paved. The clear zones are not paved but are plowed. Beyond the clear zones and along the sides of the runways, the land is disturbed desert (regrown native desert plants) with undisturbed native desert beyond. The land surrounding China Lake's airfield has always been desert and is not watered. Navy officials at China Lake are satisfied with the type of terrain that exists at the end of the runways and in the zones under the flight paths. One of the advantages of this land is that the natural desert vegetation controls dust and does not attract birds. Navy officials believe the desert terrain allows personnel to respond more quickly to a crash site than if the area had vegetation. All water used at China Lake comes from wells. The base's golf course is watered with treated effluent. Nellis Air Force Base is located in the Great Basin area of southern Nevada, about 10 miles northwest of Lake Mead and 8 miles northeast of Las Vegas. In 1941, the property was signed over by the City of Las Vegas to the U.S. Army Quartermaster Corps for the development of a gunnery school for the Army Air Corps. Locating the school there had many advantages. Flying weather was practically ideal year-round; more than 90 percent of the area to the north was wasteland in the public domain and available at $1 per acre; the strategic inland location was excellent; rocky hills approximately 6 miles from the base afforded a natural backdrop for cannon and machine gun firing; and dry lake beds were available for emergency landings. In 1948, the base became Las Vegas Air Force Base and hosted a pilot training wing. In 1950, the base was renamed Nellis Air Force Base. Nellis Air Force Base has two parallel runways and 2.2 million square yards of airfield pavement. (See fig. 5.) The land surrounding the base consists mostly of disturbed and undisturbed native desert. The disturbed areas have regrown native plants, thistle, and weeds. The undisturbed areas consist of sagebrush. Some areas contain eroded natural flood channels. Within the areas at the end of the runways are roads and parts of a golf course. Soil cement is applied at aircraft turning points as a method of controlling dust and damage to aircraft from foreign objects. Foreign objects and dust on the runways and taxiways are controlled by using flightline vacuum sweepers and by having personnel walk through the area to find and pick up any loose objects. Vegetation is being removed from between the runways, and soil cement will be applied in these areas. The base has no plans for clearing vegetation from the runway protection zones. Water is provided by the Southern Nevada Water Authority, the City of North Las Vegas, and potable water wells on the base. Because of an increased emphasis on using a desert environment rather than watered-plant landscaping, water consumption dropped by almost half, from about 1.4 billion gallons of water in fiscal year 1996 to about 760 million gallons in fiscal year 1999. In 1940, the U.S. Army chose a site in Arizona for an Army Air Corps field for advanced training in conventional aircraft. The City of Phoenix bought 1,440 acres of land and leased it to the government for $1 a year, and in March 1941, construction began for what was then known as Litchfield Park Air Base. The first class of 45 students arrived in June 1941 to begin advanced flight training.
During World War II, the field was the largest fighter training base in the Air Corps. By 1946, the number of pilots being trained had dropped significantly, and the base was deactivated. However, after combat developed in Korea, the field was reactivated on February 1, 1951, as Luke Air Force Base. Luke Air Force Base has two runways. Both runways are 150 feet wide; the primary runway is 10,000 feet long, while the secondary runway is 9,910 feet long. (See fig. 6.) Luke owns 2,200 acres outright and has another 2,000 acres in easement. The base is within the city of Glendale and in the jurisdiction of Maricopa County. According to the base's land use documents, there is little land available for expansion or development. The land west of Luke is primarily agricultural, as is some of the land to the east and southeast. Residential, industrial, and commercial areas are located north, south, and east of the base. Approximately 190 F-16 aircraft are housed at Luke. The runways are surrounded by the base's infrastructure on the east and part of the south and by roads, fences, golf courses (both civilian and military), and agricultural land where flowers and vegetables are grown on the north, west, and the remainder of the south. The vegetation growing immediately around the runways is mostly weeds. The area between the runways is a combination of old asphalt and disturbed desert. The unused portions of the airfield have gone untreated, and as a result, weeds are growing in the cracks. Air Force officials at Luke have a program to mow the vegetation so that it does not exceed 14 inches in height. Sections of the airstrip have been sprayed with a soil sealant that helps control dust and foreign objects. The irrigation of the green areas maintained on the base for aesthetic purposes, such as recreation areas and base housing, uses treated effluent from the base's wastewater treatment plant piped to automatic sprinkler systems. A new golf course will be irrigated using a similar system. Potable water for the base is supplied by seven groundwater wells on the base. McCarran International Airport in Las Vegas, Nevada, is 51 years old. In 1948, Clark County purchased an existing airfield on Las Vegas Boulevard and established the Clark County Public Airport. All commercial activities were moved from an existing field to this new site, which was renamed McCarran Field. Initially, the airport served four airlines (Bonanza, Western, United, and TWA) and averaged 12 flights a day. Clark County, through its Department of Aviation, now owns and operates five airports, including McCarran. McCarran has four runways; the surrounding area is desert habitat. On average, the runways are 14,500 feet long and about 150 feet wide. (See fig. 7.) McCarran has both disturbed and undisturbed desert areas. Most of the airport's terrain has been disturbed by grading, rolling, and watering. Airport officials have attempted to control weed growth by spraying herbicides. The undisturbed areas are native sage and cactus terrain. The area between the runways is paved. The runway protection zones are graded dirt. The surrounding land encompasses a golf driving range, a golf course, a cemetery, vacant land, and industrial property. McCarran officials have studied a number of methods of controlling airport dust, including soil cement.
A study on dust control, conducted by a contractor for McCarran, highlighted measures that McCarran should consider, among them mulches, rock, and native vegetation for non-traffic areas and salts, coatings, and pavement for traffic areas. Watering in both the non-traffic and traffic areas was also suggested for consideration. McCarran receives its water through the City of Las Vegas from Lake Mead. In 1935, the City of Phoenix purchased what became Sky Harbor International Airport. At that time, Sky Harbor was 258 acres of isolated and rural land. Today, the airport consists of 2,232 acres of land. The City of Phoenix operates Phoenix Sky Harbor International Airport through its Aviation Department. Sky Harbor International Airport has two runways, one 11,000 feet long and the other 10,300 feet long. (See fig. 8.) Both runways are 150 feet wide. A third runway being completed is to be about 7,800 feet long. Land use surrounding the airport varies. On the west end of the airport is an industrial park. Weeds are growing on some of the vacant lots near the airport, and these weeds are mowed when needed. However, workers first water and roll the area to keep down the dust. Workers also apply small amounts of herbicide on these areas to kill weeds. To conserve water, Sky Harbor used rocks to landscape areas surrounding the airport that were once irrigated. Additionally, Sky Harbor officials have converted a significant amount of the airport's surrounding area to desert landscaping and have adopted other water conservation measures, such as using a computerized irrigation system. According to airport officials, these efforts helped save the airport about 70 million gallons of water during 1997. Terminals and concrete can be found between the runways. To meet Federal Aviation Administration and Environmental Protection Agency regulations, Sky Harbor implemented a plan to control dust and to reduce damage to aircraft from foreign objects. The substance that proved to be the most environmentally safe and the most durable was a product called “Soil Sement,” an acrylic polymer type of liquid sealer. This sealer was applied using two separate methods- topical and soil stabilization. The topical application process consisted of applying the sealer to the undisturbed soil, while the stabilization application, which is more concentrated, was plowed into the top 6 inches of the surface of the soil. Sky Harbor receives its water from the City of Phoenix Water Service Department. After receiving a letter from Senator Harry Reid of Nevada, we visited Fallon NAS for background briefings and information on the air station's actions in response to Public Law 101-618. After follow-up discussions with Navy officials and with Senator Reid's office, we undertook this review to provide information on (1) the aviation safety and operational requirements for the runway protection zone at Fallon NAS, (2) the alternative land use strategies Fallon NAS identified in response to congressional direction and how it evaluated them, and (3) the current land use strategies at five military facilities and two commercial airports that operate in similar environments. To determine aviation safety and operational requirements, we obtained the regulations on runway protection zones issued by the Federal Aviation Administration, the Department of Defense, and the military services. We also obtained other regulations on airport safety and land requirements at military and commercial airports. 
We obtained extracts of Fallon NAS' air installation compatible use plans on runway protection zones. We interviewed commercial airport, Air Force, Navy, and Marine Corps officials. To determine the land use strategies Fallon NAS identified and how it evaluated them in selecting the greenbelt approach, we obtained Fallon NAS' Natural Resources Management Plan, its Environmental Assessment for Management of the Greenbelt Area, and a study by the U.S. Department of Agriculture's Natural Resources Conservation Service, "Plant Materials Trials on Revegetation of Abandoned Farmland." We interviewed Fallon NAS and Conservation Service officials on the results of these studies. We analyzed the efforts of Fallon NAS officials in evaluating the land use strategies. To determine the current land use practices at military and commercial airports that operate in desert-like environments and the impacts these practices have on water usage, we visited seven airports, five military (Navy, Air Force, and Marine Corps) and two commercial facilities: Lemoore Naval Air Station, California; Yuma Marine Corps Air Station, Arizona; China Lake Naval Air Weapons Station, California; Nellis Air Force Base, Nevada; Luke Air Force Base, Arizona; McCarran International Airport, Nevada; and Sky Harbor International Airport, Arizona. We obtained land use documents at the seven locations and their documents on water use and consumption. We also interviewed safety and operations officials at the seven locations. In addition, Rudolfo G. Payan, Uldis Adamsons, Richard W. Meeks, Doreen S. Feldman, and Kathleen A. Gilhooly made key contributions to this report.

Pursuant to a congressional request, GAO provided information on alternative land uses that could save water at Fallon Naval Air Station, Nevada, focusing on: (1) the aviation safety and operational requirements for the runway protection zone at Fallon Naval Air Station (NAS); (2) the alternative land use strategies Fallon NAS identified in response to congressional direction and how it evaluated them; and (3) the land use strategies at five military facilities and two commercial airports that operate in similar environments.
GAO noted that: (1) Fallon NAS must comply with the Department of Defense's (DOD) aviation safety and operational requirements for runway protection zones; (2) these requirements specify the maximum safe heights for buildings, towers, poles, and other possible obstructions to air navigation; (3) under these requirements, where possible, areas immediately beyond the ends of runways and along primary flight paths should be developed sparsely, if at all, to limit the risk from a possible aircraft accident; (4) at Fallon NAS, the agricultural and other low-density land uses are compatible with air operations; (5) the land surrounding the airfield is owned by the Navy and leased to farmers for agricultural use, which is permitted by DOD; (6) Fallon NAS gave detailed consideration to three land management strategies in developing its approach to managing land in the runway protection zone in the early 1990s; (7) each of these strategies involved irrigating the greenbelt; (8) as many as 11 different land management strategies were identified at the outset, but three of them were eliminated before an initial screening because Fallon NAS officials believed they would be environmentally or economically unacceptable or would cause unacceptable operational or safety impairments; (9) Fallon NAS officials eliminated five of the remaining eight strategies prior to a detailed analysis because they believed the strategies did not meet the Navy's evaluation criteria, which were based on provisions of the law; (10) the criteria Fallon NAS used in evaluating these land management strategies were based on the officials' assessment of whether the strategies would minimize dust, bird strikes, fire and other hazards, would enhance air safety, and, to a lesser extent, would reduce the amount of irrigation water used; (11) after a detailed analysis and the application of these criteria, Fallon NAS officials selected the strategy that involves conventional farming combined with water conservation practices because they believed it would have a very high probability of satisfying the safety goals while providing moderate water savings compared with the air station's historical usage; (12) at the seven other military facilities and commercial airports GAO visited, the land management strategies varied--two used strategies involving greenbelts, while five did not; and (13) the military facilities and commercial airports operating in desert-like conditions similar to Fallon NAS' have employed land management strategies that have resulted in water savings.
In 2010, federal agencies reported about 3.35 billion square feet of building space to the FRPP: 79 percent of the reported building space was federally owned, 17 percent was leased, and 4 percent was otherwise managed. The data indicated that the agencies used most of the space—about 64 percent—as offices, warehouses, housing, hospitals, and laboratories. The five agencies we reviewed—GSA, DOE, Interior, VA, and USDA—reported owning or leasing more than 866 million square feet of building space, or about 25 percent of the total reported square footage for all agencies. Initially, FRPC defined 23 FRPP data elements to describe the federal government's real property inventory. By 2008, FRPC had expanded the number of data elements included in the FRPP to 25. FRPC requires agencies to update their FRPP real property data annually. Each asset included in the database is assigned a unique identification number that allows for tracking of the asset to the unique data that describe it. See appendix II for a list of the 25 FRPP data elements as defined in 2010. FRPC designated four FRPP data elements as performance measures: utilization, condition index, annual operating costs, and mission dependency. The definitions of these four data elements in 2010 can be found in table 1. FRPC's 2010 Guidance for Real Property Inventory Reporting provides specific guidelines on how to report a building as overutilized, underutilized, utilized, or not utilized based on the building's use and the percentage of the building that is used (see table 2). FRPC has been collecting FRPP data on federal government properties since 2005. We have reported that results-oriented organizations follow a number of sound data collection practices when gathering the information necessary to achieve their goals. For example, these organizations recognize that they must balance their ideal performance measurement systems against real-world considerations, such as the cost and effort involved in gathering and analyzing data. These organizations also tie performance measures to specific goals and demonstrate the degree to which the desired results are achieved. Conversely, we have observed that organizations that seek to manage an excessive number of performance measures may risk creating a confusing excess of data that will obscure rather than clarify performance issues. Limiting the number of measures to the vital few not only keeps the focus of data collection where it belongs but also helps ensure that the costs involved in collecting and analyzing the data do not become prohibitive. Furthermore, results-oriented organizations report on the performance data they collect. Following the implementation of the executive order and nationwide data collection efforts, we have reported that agencies continue to face challenges with managing excess and underutilized properties. For example, we have previously reported that the legal requirements agencies must adhere to, such as requirements for screening and environmental cleanup as well as requirements related to historical properties, present a challenge to consolidating federal properties. In addition, before GSA can dispose of a property that an agency no longer needs, it must offer the property to other federal agencies.
If other federal agencies do not need the property, GSA must then make the property available to state and local governments as well as certain nonprofit organizations and institutions for public benefit uses such as homeless shelters, educational facilities, or fire and police training centers. According to agency officials, as a result of this lengthy process, excess or underutilized properties may remain in an agency’s possession for years. Furthermore, the costs of disposing of property can further hamper an agency’s efforts to address its excess and underutilized property problems. For example, properties that contain radiological contamination must be mitigated before they can be disposed. In addition, the interests of multiple—and often competing—stakeholders may not align with the most efficient use of government resources and complicate real property decisions. Despite these challenges, both the previous and current administrations have implemented a number of cost savings initiatives associated with excess and underutilized property. In August 2005, the administration set a goal to reduce the size of the federal inventory by $15 billion by 2009. In June 2010, the President directed federal civilian agencies to achieve $3 billion in savings by the end of fiscal year 2012 through reducing annual operating costs, generating income through disposing of assets, using existing real property more effectively by consolidating existing space, expanding telework, and other space realignment efforts. Furthermore, on May 4, 2011, the administration proposed legislation— referred to as the Civilian Property Realignment Act (CPRA)—to establish a legislative framework for disposing of and consolidating real property, among other things. In September 2011, OMB projected that the proposal would save the government $4.1 billion over 10 years from sales proceeds, and that savings would also be achieved through decreased operating costs and efficiencies. However, the Congressional Budget Office (CBO) has concluded that CPRA would probably not result in a significant increase in proceeds from the sale of federal properties over the next 10 years. FRPC has not followed sound data collection practices, and, as a result, FRPP data do not describe excess and underutilized properties consistently and accurately. Consistent with this, FRPP data did not always accurately describe the properties at the majority of sites we visited and often overstated the condition and annual operating costs, among other things. Agency officials described ways in which key performance measures in the FRPP database are reported inconsistently or inaccurately. At 23 of the 26 sites that we visited, we found inconsistencies or inaccuracies related to the following performance measures described in the background: (1) utilization, (2) condition index, (3) annual operating costs, and (4) mission dependency. As a result of the discussions we had with agency officials about how FRPP data are reported, as well as the inconsistencies and inaccuracies described in the following sections, we question whether FRPP data provide an adequate tool for decision making or measuring performance, such as the cost savings initiatives put forth by OMB. We found that the agencies we reviewed do not report property utilization consistently. FRPC guidance states that for offices, hospitals, and warehouses, utilization is the ratio of occupancy to current design capacity. 
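As a minimal illustration of that ratio, the sketch below computes a utilization percentage and sorts it into FRPC-style reporting categories. The ratio follows the FRPC definition quoted above; the numeric cutoffs in the sketch are placeholders rather than FRPC's actual thresholds, which are summarized in table 2, and the building figures are hypothetical.

```python
def utilization_percent(occupancy: float, design_capacity: float) -> float:
    """Utilization as the ratio of occupancy to current design capacity."""
    if design_capacity <= 0:
        raise ValueError("design capacity must be positive")
    return 100.0 * occupancy / design_capacity

def utilization_category(pct: float, over: float = 100.0, under: float = 75.0) -> str:
    """Sort a utilization percentage into FRPC-style reporting categories.
    The 'over' and 'under' cutoffs are placeholders, not FRPC's actual thresholds."""
    if pct == 0:
        return "not utilized"
    if pct > over:
        return "overutilized"
    if pct < under:
        return "underutilized"
    return "utilized"

# Hypothetical example: an office building designed for 500 people, occupied by 190.
pct = utilization_percent(occupancy=190, design_capacity=500)
print(f"{pct:.0f} percent -> {utilization_category(pct)}")  # 38 percent -> underutilized
```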
Although USDA requires its agencies to follow FRPC guidance, USDA stated that FRPC has not established governmentwide definitions for occupancy or current design capacity. As a result, each agency within USDA has its own internal procedures for determining a building's utilization level. Moreover, VA defines utilization differently from FRPC guidance, as the ratio of "ideal space" to existing space, which VA stated is different from occupancy. Despite the inconsistency of this method of defining utilization with FRPC guidance, VA officials reported that OMB staff approved of their method of reporting utilization. Furthermore, OMB acknowledged that it is standard practice for agencies to measure utilization tailored to the agencies' specific needs and circumstances. Among the 26 federal sites we visited, we found utilization data inconsistencies or inaccuracies for properties at 19 of these sites. For example, at one VA site, a building we toured was reported to have a utilization of 39 percent in 2010 FRPP data and 45 percent utilization in 2011 source data, even though local officials said this building has been fully occupied since 2008. See figure 1. Another building that we toured at the same site was reported to be 0 percent utilized in 2010 FRPP data and 59 percent utilized in 2011 agency source data. However, all but one of the rooms in the building were vacant, and local officials said only 10 percent of the building was utilized. In addition, at one USDA site we visited, we found two houses that have been empty since 2009; however, they were both reported to the FRPP as utilized for 2009 and 2010. See figure 2 to view images of these two USDA buildings. We also found problems with the utilization data at properties owned by the other three agencies included in our review. As was the case with utilization, we found that agencies do not report the condition of their properties consistently. According to FRPC guidance, condition index is a general measure of the constructed asset's condition and is calculated by using the ratio of repair needs to the plant replacement value (PRV). Needed repairs are determined by the amount of repairs necessary to ensure that a constructed asset is restored to a condition substantially equivalent to the originally intended and designed capacity, efficiency, or capability. However, we found that agencies do not always follow this guidance. For example, when agencies have determined that a property is not needed and will ultimately be disposed of, they may assign no repair needs to that property even though the property may be in a state of significant disrepair. Doing so allows agencies to use their limited funds to maintain properties that they regularly use, but it can lead to condition index data that do not accurately reflect each property's condition as set forth in FRPC guidance. Figure 3 is an example of how the condition index of a building with high repair needs can change significantly depending on whether agency officials choose to follow FRPC guidance or assign zero dollars in repair needs because repairs are not planned. While it may be a good practice not to assign repair needs to dilapidated buildings that no longer support agencies in carrying out their mission, the fact that these buildings may report a perfect or near-perfect condition index provides decision makers with an inconsistent representation of the condition of buildings at a given site.
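The effect described above, and illustrated in figure 3, can be reproduced with a few lines of arithmetic. The sketch below assumes a commonly used form of the index in which the ratio of repair needs to plant replacement value is subtracted from 100 percent; that exact formula and the dollar figures are assumptions for illustration, not FRPC's published specification. Under that assumption, recording zero repair needs yields a perfect score regardless of a building's actual state.

```python
def condition_index(repair_needs: float, plant_replacement_value: float) -> float:
    """Condition index computed as 100 * (1 - repair needs / PRV).
    This particular formula is an assumption for illustration; FRPC defines the
    index in terms of the ratio of repair needs to plant replacement value."""
    if plant_replacement_value <= 0:
        raise ValueError("plant replacement value must be positive")
    return 100.0 * (1.0 - repair_needs / plant_replacement_value)

prv = 2_000_000.0             # hypothetical plant replacement value
actual_repairs = 1_500_000.0  # hypothetical repairs needed to restore the building

print(condition_index(actual_repairs, prv))  # 25.0 -> reflects the building's poor condition
print(condition_index(0.0, prv))             # 100.0 -> "perfect" score when no repairs are recorded
```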
We found examples at all five agencies we visited where a property in very poor condition received a higher condition index score than a property in good condition. Figure 4 demonstrates examples of this at an Interior site we visited. We found condition index reporting inconsistencies and inaccuracies at 21 of 26 sites visited. The practice of assigning no repair needs to many excess and underutilized buildings because agencies have no intention of repairing them led to severely blighted buildings receiving excellent condition scores. At sites we visited, buildings received high condition index scores even though they are in poor condition. Some of the problems with these buildings include asbestos, mold, collapsed walls or roofs, health concerns, radioactivity, deterioration, and flooding. The federal government has taken some steps to address excess and underutilized property management problems by developing the FRPP database, among other things. However, cost savings efforts associated with excess and underutilized property over the years were discontinued, and recent efforts may overstate potential savings. Although the federal agencies we reviewed have taken some actions to try to address excess and underutilized properties, long-standing challenges remain. As a result, a national strategy could help the federal government prioritize future management efforts. The federal government has made some progress in managing real property since we first added this issue to our high-risk series. In a 2007 review of federal real property, we found that the administration at that time made progress toward managing federal real property and addressing some long-standing problems. The 2004 executive order established FRPC to develop property management guidance and act as a clearinghouse for property management best practices. FRPC created the FRPP database and began data collection in December 2005. As part of a 2011 update to our high-risk series, we reported that the federal government has also taken steps to improve real property management, most notably by implementing some GSA data controls and requiring agencies to develop data validation plans. Prior to designating real property management as high risk, reliable tools for tracking property were generally unavailable. Consequently, we determined that the development of a database and the implementation of additional data quality controls were steps in the right direction. However, on the basis of our current work, it appears that data controls have not brought about widespread improvements with data consistency and accuracy as was anticipated. Nonetheless, we found that the FRPP can be used in a general sense to track assets. For example, during our site visits, agency officials were able to match assets with the real property unique identification numbers assigned to them in the FRPP database and were able to locate even small, remote buildings using these numbers. In addition to establishing FRPC, developing the FRPP, and implementing the executive order, the previous and current administrations have sought ways to generate cost savings associated with improving management of excess and underutilized properties. However, these efforts have not led to proven cost savings associated with the management of these properties. Cost savings goals set by the previous administration were discontinued.
In 2007, we reported that adding real property management to the President’s Management Agenda in 2004 increased its visibility as a key management challenge and focused greater attention on real property issues across the government. As part of this agenda, the previous administration set a goal of reducing the size of the federal real property inventory by 5 percent, or $15 billion, by the year 2015. OMB staff at the time reported that there was an interim goal to achieve $9 billion of the reductions by 2009. OMB staff recently told us that the current administration is no longer pursuing these goals. Furthermore, the senior real property officers of the five agencies we reviewed told us that they were never given specific disposal targets to reach as part of these prior disposal goals. Cost savings associated with improved management of excess and underutilized properties as directed in the June 2010 presidential memorandum are unclear. OMB staff also said that while the goals of the previous administration are no longer being pursued, the current administration issued a memorandum that directed civilian agencies to achieve $3 billion in savings by fiscal year 2012 through better management of excess properties, among other things. According to the administration’s website, as of September 2011, approximately half of the cost savings had been achieved ($1.48 billion). Almost half of the total goal (about $1.4 billion) is targeted to the five agencies we reviewed. Officials from these agencies reported various cost savings measures such as selling real property, forgoing operations and maintenance costs from disposed properties, and reducing energy costs through sustainability efforts to achieve agency savings targets. As of the first quarter of fiscal year 2012, only two of the agencies we reviewed—GSA and USDA—were claiming any sales proceeds from the sale of federal real property: GSA reported $41.1 million in savings from sales proceeds and USDA reported approximately $5.6 million. Interior officials stated that individual sales with positive net proceeds are offset by those sales in which the cost of the disposal (i.e., as a result of environmental remediation and repair) is greater than any proceeds realized. Furthermore, DOE officials reported that the disposition costs of the properties they sold during the time frame of the memorandum were actually greater than the proceeds. As a result, DOE has reported a net loss of $128 million on property sales for this time period. VA also did not include asset sales as part of its savings plan. Four of the five agencies told us that they believe they will reach their savings targets by the end of fiscal year 2012; however, whether they claim to reach those goals or not, the actual and estimated savings associated with excess and underutilized property management may be overstated. Furthermore, agencies were not required to develop cost savings that reflected a reduction in agency budgets. We found problems with cost savings estimates related to excess and underutilized property management from all five of the federal agencies we reviewed (see table 4). OMB staff has not provided information to support projected cost savings if CPRA is enacted. 
In addition to the expected savings resulting from the June 2010 presidential memorandum, OMB staff reported that CPRA—the legislation the administration has proposed to address real property management obstacles—will result in $4.1 billion in savings within 10 years following enactment from sales proceeds as well as unspecified savings from operating costs and efficiencies. However, the CPRA projections may not reflect true cost savings. OMB staff did not provide a methodology, calculations, or any other basis for these projections. Furthermore, CBO concluded that CPRA would probably not result in a significant increase in proceeds from the sale of federal properties over the next 10 years. CBO noted that the Department of Defense holds about one-third of the excess properties. CPRA would have no effect on these properties, because the proposal applies only to civilian agencies. Furthermore, CBO estimated that implementing CPRA would cost $420 million over the 2012 through 2016 period to prepare properties for sale or transfer. The President's fiscal year 2013 budget requested $17 million to implement CPRA (if it is enacted) and $40 million to establish an Asset Proceeds and Space Management Fund to facilitate the disposal process and to reimburse agencies for some necessary costs associated with disposing of property. This amount is far short of the $420 million that CBO projected would be needed to prepare properties for sale or transfer within a 4-year period. Despite problems with data collection and national cost savings goals, we found that agencies have taken steps to address excess and underutilized properties in their portfolios. For example, all five agencies we reviewed have taken steps to use property more efficiently, as follows: Identifying underutilized assets to meet space needs. VA officials told us that they implemented a process to identify vacant and underutilized assets that they could use to meet space needs. In addition, VA officials stated that the department is planning to reuse currently utilized assets that will be available in the future. VA officials added that they have identified 36 sites that include 208 buildings and more than 600 acres that they can use to provide more than 4,100 units of homeless and other veteran housing. Consolidating offices among and within agencies. USDA and Interior signed a memorandum of understanding in November 2006 that allows the agencies to colocate certain operations and use their buildings more efficiently. The memorandum of understanding enables the agencies to share equipment and space. In addition, USDA closed laboratories at four locations and consolidated operations with existing USDA sites. In its National Capital Region, USDA has consolidated five separate leased locations, totaling 363,482 square feet, into one location at Patriot's Plaza in Washington, D.C. USDA reported that the consolidation into Patriot's Plaza will result in annual rent savings of about $5.6 million. DOE officials also stated that the department encourages offices to consolidate operations when it is cost-effective to do so. The department also increased the use of an office building at the Lawrence Livermore National Laboratory from 22 percent to 100 percent by changing its use from office space to a building that houses computers. Furthermore, VA consolidated its medical center campuses in Cleveland, Ohio, and engaged a number of private partners directly to reuse the unneeded sites, using its Enhanced Use Lease authority.
Reducing employee work space. To use space more efficiently, Interior reduced new space utilization per employee from 200 usable square feet per person to 180 usable square feet per person. This action decreased total new space by 10 percent in all areas including employee work space and conference space. Using operations and maintenance charges to reduce operating costs and encourage efficient use of space. DOE officials reported that several sites servicing multiple programs or performing work for others have developed a space charge system whereby a site charges tenants for the operations and maintenance of the square footage they occupy on a square foot basis. This charge defrays operations and maintenance costs associated with the site and encourages tenants to minimize their own space use. Transferring unneeded property to other entities. Interior officials have disposed of excess properties by transferring them to other organizations to use. For example, Interior officials reported that the department donated a freezer building and a laboratory building at the Woods Hole Science Center in Falmouth, Massachusetts, to the Woods Hole Oceanographic Institution. The department also transferred buildings and land at a Corbin, Virginia, site to the National Oceanic and Atmospheric Administration. Creating alternate uses for unused assets. GSA found an alternate use for 400,000 square feet of a concrete slab that remained after demolishing an excess building. When needed, GSA leases the slab to the Federal Emergency Management Agency as outdoor storage space for electric generators and other heavy equipment and as a staging area for equipment during responses to disasters (see fig. 10). Using telework and hoteling work arrangements. The agencies we reviewed require or allow employees to use alternate work arrangements such as teleworking or hoteling, when feasible, to more efficiently use space. For example, GSA instituted a pilot hoteling project at the Public Building Service headquarters in Washington, D.C., to reduce needed space. Progress notwithstanding, agencies still face many of the same long-standing challenges we have described since we first designated real property management as a high-risk area. Agency disposal costs can outweigh the financial benefits of property disposal. USDA officials reported that the costs of disposing of real property can outweigh savings that result from building demolition and that limited budgetary resources create a disincentive to property disposal. USDA determined that the total annual cost of maintaining 1,864 assets with annual operating costs less than $5,000 was $3 million. Conversely, USDA concluded that the disposal costs for these assets equal or exceed their annual operating cost of $3 million. Thus, disposal of the assets would not result in immediate cost savings, and USDA has not demolished the assets. In addition, Interior officials reported that numerous National Park Service buildings acquired during the planning for a Delaware River dam that was never built are excess, as are many cabins and houses along the Appalachian Trail. Because Interior is not spending any operations and maintenance funds on these assets, disposing of them would not provide savings to the department. As a result, Interior has made a business decision to fund only a small percentage of these disposals at the Delaware River dam site.
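USDA's reasoning above amounts to a simple payback comparison: if the one-time cost of disposing of an asset equals or exceeds the annual operating cost that disposal would avoid, there are no immediate savings. The sketch below illustrates that comparison; the $3 million totals come from the discussion above, while the payback framing and the function name are illustrative.

```python
def payback_years(disposal_cost: float, annual_operating_cost: float) -> float:
    """Years of avoided operating costs needed to recover a one-time disposal cost."""
    if annual_operating_cost <= 0:
        return float("inf")
    return disposal_cost / annual_operating_cost

# Portfolio-level figures from the discussion above (1,864 assets, < $5,000 each).
annual_operating_total = 3_000_000.0  # total annual operating cost of the assets
disposal_total = 3_000_000.0          # disposal costs equal or exceed this amount

years = payback_years(disposal_total, annual_operating_total)
print(f"Payback: at least {years:.1f} year(s) of avoided operating costs to break even")
```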
Legal requirements—such as those related to preserving historical properties and the environment—can make the property disposal process lengthy, according to agency officials. Meeting requirements associated with historical properties can delay or prevent disposal of excess buildings. The National Historic Preservation Act, as amended, requires agencies to manage historic properties under their control and jurisdiction and to consider the effects of their actions on historic preservation. For example, VA has been unable to dispose of a 15,200-square-foot building at Menlo Park, California, that has been used as both a residence and a research building during its 83-year history. The building has been scheduled for demolition since 2001, but VA cannot demolish it because of a historical designation. In addition, in 2010, Interior canceled the disposal of a 95-square-foot stone property that we visited because it was found eligible for historic designation. The property is in poor condition and has not been used for many years, but Interior officials told us that they are now planning to stabilize and restore the structure (see fig. 11). The federal government has made some progress in managing real property since it was first added to our high-risk series. The FRPC created the FRPP database to track federal property, and the federal agencies we reviewed have taken some actions to address excess and underutilized property. Even with long-standing efforts to improve the management of excess and underutilized properties and save costs, federal agencies continue to face many of the same challenges that we have reported for over a decade. The problems still facing the federal government in this area highlight the need for a long-term, comprehensive national strategy to bring continuity to efforts to improve how the federal government manages its excess and underutilized real property and improve accountability for these efforts. Such a strategy could lay the framework for addressing the issue of inconsistent and inaccurate data on excess and underutilized federal properties. We continue to believe that consistent and accurate data on federal real property are necessary for the federal government to effectively manage real property. While the 2004 executive order charged the Administrator of GSA, in consultation with FRPC, to develop the data reporting standards for the FRPP database, the current standards have allowed agencies to submit data that are inconsistent and therefore not useful as a measure for comparing performance inside and outside the federal government. Also, the current definitions of certain data elements could perpetuate confusion on the nature of federal government properties. For example, the FRPP data element, PRV, is commonly referred to as an asset’s “value,” which can cause decision makers to make assumptions about the worth of the asset even though the PRV cannot be accurately used in this way. Moreover, many agencies do not have the resources to collect data at the asset level, and the information that is reported in order to meet requirements for asset-level data is likely conveying an inaccurate picture of excess and underutilized property. Furthermore, federal government agencies have vastly different uses for properties, and it may be challenging to collect certain kinds of property management data using a single database. This makes it difficult for decision makers to understand the scope of the problem and assess potential cost savings and revenue generation.
Now that FRPC has had several years of experience with these data, it is in a better position to refine data collection requirements by identifying data that are suitable for comparison in a nationwide database. Following sound data collection practices could help FRPC to thoroughly evaluate and retool the FRPP so that it collects and provides data that are consistent and accurate to decision makers, even if this means collecting less data in the short term. GSA is uniquely positioned to lead this effort because of its charge to develop FRPP data reporting standards. We are making two recommendations, one to the Director of OMB and one to the Administrator of GSA. We recommend that the Director of OMB require the OMB Deputy Director for Management, as chair of FRPC, in collaboration and consultation with FRPC member agencies, to develop and publish a national strategy for managing federal excess and underutilized real property that includes, but is not limited to, the following characteristics: a statement of purpose, scope, and methodology; problem definition and risk assessment; goals, subordinate objectives, activities, and performance measures, including the milestones and time frames for achieving objectives; resources, investments, and risk management; organizational roles, responsibilities, and coordination; and integration and implementation plans. We recommend that the Administrator of GSA, in collaboration and consultation with FRPC member agencies, develop and implement a plan to improve the FRPP, consistent with sound data collection practices, so that the data collected are sufficiently complete, accurate, and consistent. This plan should include, but not be limited to the following areas: ensuring that all data collection requirements are clearly defined and that data reported to the database are consistent from agency to agency; designating performance measures that are linked to clear performance goals and that are consistent with the requirements in the 2004 executive order (or seeking changes to the requirements in this order as necessary); collaborating effectively with the federal agencies that provide the data when determining data collection requirements and limiting the number of measures collected to those deemed essential, taking into account the cost and effort involved in collecting the data when determining data collection requirements; and developing reports on the data that are collected. We provided a draft of this report to OMB, GSA, VA, USDA, DOE, and Interior for review and comment. OMB did not directly state whether it agreed or disagreed with our recommendations. OMB agreed that challenges remain in the management of the federal government's excess and underutilized properties; however, OMB raised concerns with some of the phrasing in our report and offered further context and clarification regarding the administration’s overall efforts on real property reform. OMB’s comments are contained in appendix III, along with our response. GSA agreed with our recommendation to improve the FRPP and described actions its officials are taking to implement it. GSA also partially agreed with our findings and offered some clarifications. GSA’s comments are contained in appendix IV along with our response. VA generally agreed with the overall message of our report, but disagreed with how we presented certain issues. VA’s comments are contained in appendix V along with our response. USDA provided clarifying comments which we incorporated, where appropriate. 
USDA’s comments are contained in appendix VI. DOE provided technical clarifications, which we incorporated where appropriate, but did not include as an appendix. Interior did not provide comments. OMB stated that, because our conclusion regarding the accuracy of FRPP data is based on our sample of 26 site visits, further study is needed to determine whether the problems we found are systemic. However, as discussed in the report, our findings are primarily based on the issues we identified with FRPC’s data collection practices, which are the basis of the entire FRPP data collection process and are thus systemic. The 26 sites that we visited complement those findings and illustrate how poor data collection practices affect data submissions; however, they are not the only basis for our findings. Furthermore, OMB stated that the administration has a strategy for improving the management of federal real property that serves as an important foundation for the national strategy we recommend in this report. While the initiatives OMB described may represent individual, positive steps, we do not believe that they fully reflect the key characteristics of a cohesive national strategy. A national strategy would improve the likelihood that current initiatives to improve real property management will be sustained across future administrations. A more detailed discussion of our views on OMB’s comments can be found in appendix III. GSA stated that our report correctly identifies many of the problems that hampered effective FRPP data collection in 2011. According to GSA, it has taken specific actions to begin addressing our recommendation, including modifying FRPC guidance to the agencies to clarify report definitions and proposing reforms of the collection process to FRPC consistent with our recommendation. GSA also offered a few clarifications on our findings. GSA stated that it was unclear whether the examples of inconsistencies we discuss in our report are systemic. As noted, our findings are primarily based on the problems we found with the overall data collection process. Thus, our recommendation to GSA involves adopting sound data collection practices. In addition, GSA stated that, because FRPP data is reported annually, property utilization and condition may change from the time that information is submitted. However, we took steps, including discussing the history of each property with local property managers, to ensure that any inconsistencies we found were not due to changes between the time data was reported and the time we visited the building. These steps and a more detailed discussion of our views on GSA’s comments can be found in appendix IV. VA generally agreed with our findings and provided additional information on VA’s federal real property portfolio, their methods of reporting real property data, and efforts the department is taking to address its excess and underutilized properties. However, VA disagreed with some of our statements related, for example, to property utilization. A more detailed discussion of our views on VA’s comments can be found in appendix V. In addition, USDA provided comments and clarifications which we incorporated, where appropriate. For example, USDA clarified its previous statement regarding utilization reporting to emphasize that component agencies are directed to follow FRPC guidance, but acknowledged that this guidance was inconsistent. USDA also clarified a previous statement regarding problems faced by the agency when reporting FRPP data in 2011. 
USDA’s comments can be found in appendix VI. We are sending copies of this report to the Director of OMB; the Administrator of GSA; and the Secretaries of Energy, Interior, Veterans Affairs, and Agriculture. Additional copies will be sent to interested congressional committees. We will also make copies available to others upon request, and the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-5731 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII. Our objectives were to determine to what extent (1) the Federal Real Property Profile (FRPP) database consistently and accurately describes the nature, use, and extent of excess and underutilized federal real property, and (2) progress is being made toward more effectively managing excess and underutilized federal real property. We identified five civilian real property-holding agencies for our review: the General Services Administration (GSA); the Departments of Energy (DOE), the Interior (Interior), and Veterans Affairs (VA); and the U.S. Department of Agriculture (USDA). We chose GSA, DOE, Interior, and VA because these were the four largest agencies in terms of total building square footage of all civilian real property agencies that are required to submit data under the executive order. On the basis of the data available, these five agencies report approximately two-thirds of the building square footage reported by civilian agencies. We did not consider agencies in the Department of Defense because we previously reported on the department’s excess facilities. We added USDA to our list of selected agencies because USDA reported more excess properties than any other civilian agency in 2009. To determine to what extent the FRPP database described the nature, use, and extent of excess and underutilized federal real property, we obtained and analyzed FRPP data submissions and other real property data from the five selected agencies; interviewed real property officers at these agencies; visited sites where the agencies had reported excess or underutilized properties; interviewed Office of Management and Budget (OMB) staff; and reviewed FRPC guidance and other documents related to the agencies’ real property data and the FRPP database. We obtained the agencies’ FRPP data submissions for fiscal years 2008 through 2010. According to our conversations with agency officials, FRPP submissions can only be changed by the agency submitting the data. As a result, we believe that the FRPP submissions obtained from the agencies match the data contained in the FRPP database and are sufficiently reliable for the purpose of evaluating the consistency and accuracy of the FRPP database. In addition, for select data elements, we obtained real property data from the source databases that each agency uses to generate its annual FRPP submissions. We obtained source system data to get the actual percentage of utilization of each property as of the date when these data were extracted and provided to us in September or October of 2011. For the years of our FRPP data review (fiscal years 2008 through 2010), agencies were only required to report utilization using four categories: overutilized, utilized, underutilized, or not utilized. 
However, the FRPP guidance stated that agencies should maintain the actual percentage of utilization in their own systems for audit purposes. We posed questions to senior real property officers at the five agencies about the collection and reporting of real property data. To gather detailed examples of excess and underutilized properties and to learn about the processes by which data on such properties are collected and submitted to the FRPP database, we visited sites where the five agencies had reported excess or underutilized properties. We selected these sites using information from the agencies’ FRPP submissions. To narrow our scope, we chose only federally owned buildings for our visits. Using the most recent FRPP submissions we had at the time (fiscal year 2010), we selected a nonprobability sample of owned buildings for each agency that were listed as excess (on the status indicator data element) or underutilized (on the utilization data element), or both. Because VA did not classify any of its owned buildings as “excess,” we also selected VA buildings classified as “not utilized.” Because this is a nonprobability sample, observations made at these site visits do not support generalizations about other properties described in the FRPP database or about the characteristics or limitations of other agencies’ real property data. Rather, the observations made during the site visits provided specific, detailed examples of issues that were described in general terms by agency officials regarding the way FRPP data is collected and reported. We focused on sites clustered around four cities: Washington, D.C.; Dallas, Texas; Los Angeles, California; and Oak Ridge, Tennessee. This strategy afforded both geographic diversity and balance among our selected agencies while also accommodating time and resource constraints. In selecting sites and buildings in and around these four cities, we took into account the following factors:
We prioritized sites that had multiple excess and/or underutilized properties. This allowed us to see more properties in a limited amount of time.
We prioritized the selection of excess and/or underutilized properties that fell into one of the five types of real property uses required to submit utilization data in 2010—offices, warehouses, hospitals, laboratories, and housing. However, we also selected some buildings classified as “other,” particularly buildings that were large or that had high reported values.
We attempted to balance the numbers of excess and underutilized buildings we selected. (Some buildings were classified as both excess and underutilized since these classifications are made in different data elements in FRPP.)
We attempted to visit four or five sites from each of the five different agencies. However, most GSA sites consisted of only one building, so we selected more sites for GSA. In the end, we selected four sites from each of Interior and USDA, five from each of DOE and VA, and eight from GSA. In all, we selected 26 sites.
Whereas we selected sites based in large part on the numbers and kinds of buildings they had, the exact set of buildings we visited at each site depended on additional factors. At some sites, there were too many excess and underutilized properties to see them all. In those circumstances, we prioritized large buildings with high reported values and tried to see a number of different kinds of buildings (e.g., a mix of offices and warehouses).
At several sites, local property officials identified other properties with issues related to excess and underutilized property that we toured and analyzed. Prior to each site visit, we analyzed the FRPP data submissions for fiscal years 2008 through 2010 and agencies’ source system data we obtained in September or October 2011, and developed questions about the data submissions for local property managers. During our site visits, we interviewed local property managers and compared what we observed at each building with the FRPP data for that building. When not restricted by security concerns, we photographed the building. In addition to questions about individual properties, we questioned the local officials about the kind of data they collect on the properties and how they collect it. To summarize inconsistencies and inaccuracies between our observations at the properties we visited and the FRPP data for those properties, we analyzed 2008 through 2010 FRPP data for all of the properties. As part of this review, we checked the reported utilization, condition index, value, and annual operating costs for each building for all three years. Four analysts, working together, evaluated these data both for inaccuracies (cases where the data clearly misrepresented the actual utilization, condition, value, or annual operating costs of a property) and for year-to-year inconsistencies (cases where reported values showed large year-to-year changes that did not correspond to observable changes in the property and that agency officials could not explain). Each of the 26 sites was counted as having a problem on a given data element if at least one inconsistency or inaccuracy was identified for that element. The four analysts discussed each case and arrived at a consensus as to whether a problem existed in each data element for each site. To determine the progress being made toward more effective management of federal excess and underutilized real property, we asked the senior real property officers at each of our selected agencies to provide written responses to a standard list of questions. These questions addressed management issues related to excess and underutilized owned buildings, how FRPP data are reported, and progress the agency is making toward sales and utilization goals set by OMB. We analyzed the written responses to our questions and reviewed supporting documentation provided by agency officials such as regulations, policies, and other documents. In addition to reviewing the written responses to our questions, we reviewed a number of our previous reports and pertinent reports by the Federal Real Property Council (FRPC), the Congressional Budget Office, and the Congressional Research Service. We also reviewed and analyzed federal laws relating to real property for the major real property-holding agencies. Because OMB chairs FRPC and has set cost savings goals related to federal excess and underutilized real properties, we analyzed documents related to these goals—including the 2004 executive order, the June 2010 presidential memorandum on “Disposing of Unneeded Federal Real Estate,” and legislation proposed by the administration known as the Civilian Property Realignment Act (CPRA). We also interviewed knowledgeable OMB staff about agency-specific targets related to the June 2010 presidential memorandum, the methodology used to project potential cost savings if CPRA were to be enacted, and progress toward cost savings goals set by the previous administration.
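To make the year-to-year consistency check described above concrete, the following Python sketch flags large changes in a reported data element between consecutive years so that they can be raised with local property managers. It is illustrative only: the 50 percent threshold and the sample cost figures are hypothetical assumptions, not the criteria or data the analysts used.

# Illustrative sketch: flag large year-to-year changes in a reported FRPP data element
# (here, annual operating costs) for follow-up with local property managers.
# The threshold and the records below are hypothetical, not GAO's actual criteria or data.
THRESHOLD = 0.50  # flag changes greater than 50 percent between consecutive years

def flag_inconsistencies(reported_by_year):
    """Return (year pair, fractional change) tuples where the reported value changed sharply."""
    flags = []
    years = sorted(reported_by_year)
    for prev, curr in zip(years, years[1:]):
        old, new = reported_by_year[prev], reported_by_year[curr]
        if old and abs(new - old) / abs(old) > THRESHOLD:
            flags.append(((prev, curr), (new - old) / old))
    return flags

# Hypothetical building: annual operating costs reported for fiscal years 2008 through 2010.
costs = {2008: 120_000, 2009: 118_000, 2010: 30_000}
for (y1, y2), change in flag_inconsistencies(costs):
    print(f"{y1}->{y2}: {change:+.0%} change; ask local officials whether the change is explainable")

A flagged change is only a starting point; as described above, a change was treated as an inconsistency only when it did not correspond to observable changes in the property and agency officials could not explain it.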
We conducted this performance audit from May 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
The FRPP data elements and their definitions are as follows:
Real property type indicates the asset as land, building, or structure.
Real property use indicates the asset’s predominant use as land, building, or structure.
The legal interest indicator is used to identify a real property asset as being owned by the federal government, leased to the federal government (i.e., as lessee), or otherwise managed by the federal government. Otherwise managed properties are (1) owned by a state or foreign government that has granted rights for use to the federal government using an arrangement other than a lease, or (2) trust entities that hold titles to assets predominantly used as museums, yet may receive some federal funds to cover certain operational and maintenance costs.
Status reflects the predominant physical and operational status of the asset. Buildings, structures, and land assets have one of the following attributes:
Active. Currently assigned a mission by the reporting agency.
Inactive. Not currently being used but may have a future need. Includes real property in a caretaker status (closed pending disposal; for example, facilities that are pending a Base Realignment and Closure action) and closed installations with no assigned current federal mission or function.
Excess. Formally identified as having no further program use of the property by the landholding agency.
Disposed. Required for assets that have exited the federal portfolio of assets during the current reporting period.
Historical status. Each asset owned or leased by the federal government (and those otherwise managed by museum trusts) has one of the following historical status attributes: National Historic Landmark; National Register listed; National Register eligible; noncontributing element of a National Historic Landmark or National Register listed district; or evaluated, not historic.
Reporting agency refers to the federal government agency reporting the property to the FRPC inventory database.
Using organization refers to the predominant federal government agency or other nonfederal government entity occupying the property.
Size refers to the size of the real property asset according to appropriate units of measure. The unit of measure used for the three real property types is as follows: For land, the unit of measure is acreage and is designated as either rural acres or urban acres. For buildings, the unit of measure is area in square feet and is designated as gross square feet. For structures, the unit of measure includes the size (or quantity) and unit of measure, and can include square yards, linear feet, miles, and the numbers of specific types of structures.
Utilization is defined as the state of having been made use of, that is, the rate of utilization.
The utilization rate for each of the five building predominant use categories is defined as follows: office: ratio of occupancy to current design capacity; hospital: ratio of occupancy to current design capacity; warehouse: ratio of gross square feet occupied to current design capacity; laboratory: ratio of active units to current design capacity; and housing: percent of individual units that are occupied.
Value is defined as the cost of replacing the existing constructed asset at today’s standards and is also known as plant replacement value (PRV) or functional replacement value.
Condition index is a general measure of the constructed asset’s condition at a specific point in time. The condition index is calculated as the ratio of repair needs to PRV. Repair needs are the amount necessary to ensure that a constructed asset is restored to a condition substantially equivalent to the originally intended and designed capacity, efficiency, or capability. Agencies will initially determine repair needs based on existing processes, with a future goal to further refine and standardize the definition. The condition index will be reported as a “percent condition” on a scale of zero to 100 percent.
Mission dependency is the value an asset brings to the performance of the mission as determined by the governing agency: mission critical: without constructed asset or parcel of land, mission is compromised; mission dependent, not critical: does not fit into mission critical or not mission dependent categories; and not mission dependent: mission unaffected.
Annual operating costs consist of the following: recurring maintenance and repair costs, utilities, cleaning and janitorial costs, and roads and grounds expenses.
Main location refers to the street or delivery address for the asset or the latitude and longitude coordinates.
Real property unique identifier is a code that is unique to a real property asset that will allow for linkages to other information systems. The real property unique identifier is assigned by the reporting agency and can contain up to 24 alpha-numeric digits.
City is the city or town associated with the reported main location in which the land, building, or structure is located.
State is the state or District of Columbia associated with the reported main location in which the land, building, or structure is located.
Country is the country associated with the reported main location in which the land, building, or structure is located.
County is the county associated with the reported main location in which the land, building, or structure is located.
Congressional district is the congressional district associated with the reported main location in which the land, building, or structure is located.
ZIP code is the ZIP code associated with the reported main location in which the land, building, or structure is located.
Installation identifier. Land, buildings or other structures, or any combination of these. Examples of installations are a hydroelectric project, office building, warehouse building, border station, base, post, camp, or an unimproved site.
Subinstallation identifier. Part of an installation identified by a different geographic location code than that of the headquarters installation. An installation must be separated into subinstallations and reported separately when the installation is located in more than one state or county. However, an agency may elect to separate an installation into subinstallations even if the installation is not located in more than one state or county.
Restrictions are limitations on the use of real property and include environmental restrictions (cleanup-based restrictions, etc.); natural resource restrictions (endangered species, sensitive habitats, floodplains, etc.); cultural resource restrictions (archeological, historic, Native American resources, except those excluded by Executive Order 13007, Section 304 of the National Historical Preservation Act, etc.); developmental (improvements) restrictions; reversionary clauses from deed; zoning restrictions; easements; rights of way; mineral interests; water rights; air rights; other; or not applicable.
Disposition. Agencies are required to provide all assets that have exited the federal portfolio of assets during the reporting fiscal year. This will include, but is not limited to, sales, federal transfers, public benefit conveyances, demolitions, and lease terminations. Disposition data is reported only in the year the asset has exited the federal portfolio of assets. Agencies are required to provide status, reporting agency, real property unique identifier, and disposition. Agencies are also required to report disposition method (methods include public benefit conveyance, federal transfer, sale, demolition, lease termination, or other), disposition date, disposition value (the PRV for public benefit conveyances, federal transfers, demolitions, and other dispositions; the sales price for sales; and the government’s cost avoidance for lease terminations), net proceeds (the proceeds received as part of assets disposed through sales and termination of leases minus the disposal costs incurred by the agency), and recipient (the name of the federal agency or nonfederal recipient that received the property through public benefit conveyance or federal transfer).
Sustainability is reported for building assets, is optional reporting for structures, and is not reported for land, and reflects whether or not an asset meets the sustainability criteria set forth in Section 2 (f) (ii) of Executive Order 13423. To be considered sustainable and report “yes,” the asset must meet the five Guiding Principles for High Performance and Sustainable Buildings or be third-party certified as sustainable by an American National Standards Institute (ANSI)-accredited institution:
Yes. Asset has been evaluated and meets guidelines set forth in Section 2 (f) (ii) of Executive Order 13423.
No. Asset has been evaluated and does not meet guidelines set forth in Section 2 (f) (ii) of Executive Order 13423.
Not yet evaluated. Asset has not yet been evaluated on whether or not it meets guidelines set forth in Section 2 (f) (ii) of Executive Order 13423.
Not applicable. Guidelines set forth in Section 2 (f) (ii) of Executive Order 13423 do not apply to the asset. This includes assets that will be disposed of by the end of fiscal year 2015 and are no longer in use.
The legal interest element includes a lease maintenance indicator and a lease authority indicator, which are not reported for “owned” and “otherwise managed” properties. This report focuses on owned properties. The status element includes an outgrant indicator identifying when the rights to the property have been conveyed or granted to another entity. For the purposes of this report, we did not evaluate or analyze information for the outgrant indicator.
1. OMB stated that the agency agreed with the report’s general conclusion that challenges remain in the management of excess and underutilized properties, but that significant progress has been made.
While we stated that limited progress has been made, our draft and final report do not describe the progress as significant. 2. OMB stated that the agency is concerned with some phrasing in the report that may lead the reader to draw unintended conclusions regarding the appropriate next steps for improving the accuracy and consistency of the FRPP. OMB stated that based on its understanding of our report, our findings are based on the 26 site visits we conducted and further study is needed to determine whether the issues we found with the consistency and accuracy of FRPP data are systemic. OMB also asserts that despite our use of a nonprobability sample, we make generalizations based on the sample in the report. As we discuss in the report and reiterated in discussions with OMB staff during the comment period, our findings are primarily based on the problems we found with FRPC’s data collection practices, which affect the entire data collection process. The 26 sample sites we visited complement those findings and illustrate how poor data collection practices affect data submissions, but they are not the only basis for our conclusions. Furthermore, in its comments, OMB acknowledged that “it has been standard practice for each agency to measure certain data elements, such as utilization, through agency-specific means tailored to the agency’s individual needs and circumstances.” Therefore, it is unlikely that the further study OMB suggests would find consistent data on properties outside of our sample, given that OMB has acknowledged that the standards for reporting data are themselves inconsistent. For these reasons, we believe our recommendation remains valid that GSA, in consultation with FRPC, should first address the problems with data collection practices, which our methodology and findings showed were in fact systemic. In response to OMB’s comment, we clarified the report to emphasize the basis for our findings. 3. OMB commented that this report conflicts with previous testimonies and our 2011 update to GAO’s high-risk series, which described prior improvements. This report acknowledges such prior progress but provides a more in-depth review of multiple agencies’ data collection practices than prior work. Furthermore, as our report describes, in December 2011, changes were made to the data collection requirements, which led to further concerns by agencies about data accuracy. Our report findings are also consistent with a September 2011 GAO report that discussed the Department of Defense’s FRPP data. In that report we found that the Department of Defense’s reported FRPP utilization data contained multiple discrepancies between the reported utilization designations and actual building utilization, along with other FRPP submission inaccuracies. Therefore, this report is consistent with our prior conclusions that some progress has been made since 2003, which we have discussed in multiple GAO reports and testimonies. However, the report on the Department of Defense’s data and this report demonstrated significant problems in the data collection process. 4. We believe that our first recommendation—that OMB develop a national strategy—would assist in addressing the tension OMB describes between providing agencies with the flexibility to define data elements based on their agency-specific requirements and establishing governmentwide data elements that can be used to support aggregate analysis across the entire FRPP database.
We will continue to engage OMB on the topic of real property management and we believe that this report outlines the next steps. As we recommend in this report, a critical step is for OMB to develop a national strategy for managing excess and underutilized properties. In the area of data collection, a national strategy could help identify management priorities for problems such as this and lay out the principles for weighing the cost of uniform data collection to the agencies with the benefit that would be obtained by aggregate analysis of uniform data. As we stated in the report, if certain data elements cannot be collected consistently, they may not be appropriate to include in a database that appears to be standard across the government. 5. We agree with OMB’s statement that the method of attributing cost savings to efforts made to improve property management could be further clarified so that the public has a clear understanding of how such savings are calculated. We believe that transparency and accountability are critical in the federal government’s service to the taxpayers and would support action taken by OMB to increase transparency in this regard. We did not make a specific recommendation regarding how cost savings, particularly cost savings associated with the June 2010 presidential memorandum, should be clarified. Our report assessed real property management issues related to excess and underutilized property and recommended a national strategy that could be used to guide efforts such as the June 2010 presidential memorandum. 6. OMB stated that the report’s characterization of the administration’s Civilian Property Realignment Act (CPRA) proposal could benefit from further clarity on savings goals and further context about our recent support for the proposal. Regarding savings goals, OMB stated that the administration’s $4.1 billion estimate of the potential proceeds from the Act’s implementation reflects an analysis of the potential proceeds that would result from the entire federal real property inventory, not just those currently identified as “excess.” We acknowledged in the report that these savings, according to OMB, would also come from reduced operating costs and efficiencies. We could not, however, analyze the basis for these savings because, as we discussed, OMB did not provide us with a methodology, calculations, or any basis for its stated projections. We requested this information from OMB multiple times over a period of eight months, and were only provided with a general description of the savings, similar to what OMB provided in its letter commenting on this report. Until we can evaluate the analysis OMB references, we will be unable to provide a more thorough assessment. Furthermore, our views on the effect that CPRA could have on problems we have found in federal real property management have not changed: that CPRA can be somewhat responsive to real property management challenges faced by the government. For example, CPRA proposes an independent board that would streamline the disposal process by selecting properties it considers appropriate for public benefit uses. This streamlined process could reduce disposal time and costs. 7. OMB stated that the administration has a strategy for improving the management of federal real property that serves as an important foundation for the national strategy we recommend in this report. 
OMB stated that several significant initiatives, including the June 2010 presidential memorandum on excess property and the recommendation for a civilian property realignment board, represent a comprehensive and carefully considered governmentwide strategy for addressing the government’s long-term real property challenges. While the efforts OMB describes represent a range of individual initiatives, we continue to believe that they lack the key characteristics of a cohesive national strategy. A national strategy would improve the likelihood that current initiatives to improve real property management will be sustained across future administrations. The desirable characteristics of a national strategy that we’ve identified—such as a clear purpose, scope, and methodology; problem definition and risk assessment; and identified resources, investments, and risk management—could serve to articulate a more sustained, long-term strategy to guide individual initiatives such as those described in OMB’s comments. For example, related to resources and investments, agencies often lack funding to prepare unneeded properties for disposal or to pursue demolition. A national strategy could address this issue directly and transparently so that the true costs of real property reform are evaluated more completely by decision makers. 1. GSA stated that it is unclear whether the examples of inconsistencies described in our report are systemic throughout the FRPP, or are occurring in specific agencies’ reporting of the data. As we discuss in the report, our findings are primarily based on the problems we found with FRPC’s data collection practices, which negatively impact the entire data collection process. The examples of inconsistencies and inaccuracies that we describe complement those findings and illustrate how poor data collection practices affect data submissions, but they are not the only basis for our conclusions. In fact, our recommendation to improve FRPP data collection involves the sound data collection practices that we believe should be put in place. GSA has agreed with this recommendation and has taken action to begin correcting the problems we identified. In response to GSA’s comments, we made some clarifications to the report’s discussion of the basis of our findings. 2. GSA stated that, because the FRPP is an annual report, property utilization may change from the time it is submitted in December. As we conducted site visits for this review, we took steps to ensure that any inconsistencies and inaccuracies we found were not due to a significant change in the building’s use from the time it was reported to the time we visited. First, we discussed the history of the building’s use with the local officials who manage the building to ensure that there was no recent change in the building’s utilization. Second, since 2011 FRPP data had not been reported at the time we began our site visits, we obtained utilization data from the agencies’ source systems (which are used to produce FRPP utilization data) so that we had recent utilization data (as of the fall of 2011) before we began our site visits in December 2011. GSA also stated that the condition of the buildings may change or may not be updated annually. Related to this issue, we found that all five agencies did not always follow the guidance provided by the FRPC on how to calculate condition index. 
This led to severely blighted buildings receiving excellent condition scores, which could not be accounted for by reported changes in condition over a relatively short period of time. 3. GSA made a comment related to the computation formula for Condition Index. We have clarified this statement in the report. 1. VA stated that its complex model for calculating utilization is consistent with FRPC guidance because the guidance allows for flexibility on how agencies determine a key component of utilization (current design capacity) and that OMB agreed with their approach. However, rather than exercising flexibility in its use of current design capacity, VA used a different definition of utilization than the definition outlined in FRPC guidance. FRPC guidance defines utilization as the ratio of occupancy to current design capacity; however, VA defines utilization as the ratio of ideal space to existing space. While we acknowledge in our report that VA received OMB approval for reporting utilization differently, this method of reporting utilization is still inconsistent with the definition of utilization in FRPC guidance. Utilization is a performance measure, and the 2004 executive order stated that performance measures shall be designed to allow comparing the agencies’ performance against industry and other public sector agencies. The inconsistencies we found from VA and other agencies in reporting utilization make comparing utilization among agencies impossible. 2. VA also stated that “identifying underutilizations is much better than ignoring the fact that the building may not be properly sized to deliver services to Veterans.” We did not suggest that VA should ignore any aspect of its buildings that is problematic. We continue to believe that VA’s method of calculating utilization has led to some buildings being continuously designated as underutilized even when local officials, who know the buildings best, have told us that the buildings have been fully occupied. 3. VA stated that the inaccuracies we found in utilization at two VA buildings were due to the use of these buildings as “swing space,” meaning that utilization changes frequently based on need for space. In its comments, VA indicated that since FRPP data are reported annually, the designation of this space at the time of reporting changed from the time that we visited the sites. However, as we conducted our site visits for this review, we took steps to ensure that any inconsistencies we found were not due to a significant change in the building’s use from the time it was reported to the time we visited. First, we discussed the history of the building’s use since 2008 with the local officials who manage the building to ensure that there was no recent change in the building’s utilization. Second, since 2011 FRPP data had not been reported at the time we began our site visits, we obtained utilization data from the agencies’ source systems (which are used to produce FRPP utilization data) so that we had recent utilization data (as of the fall of 2011). The data we obtained from VA were current as of October 2011 and our visit took place in December 2011. Based on VA’s comments, we clarified this information in our report to show that we accounted for the time between 2010 FRPP reporting and our visit in December 2011.
Based on our visits to the buildings and our discussions about the history of the buildings’ use with the VA officials who manage them, we do not believe that VA’s explanation accounts for the inconsistencies we found in utilization as detailed below:
Local VA officials who manage the buildings told us that the first building VA discussed in its comments is used for accounting and payroll purposes and that it was always fully occupied during the period of our review (dating back to 2008). However, the building was reported to the FRPP as underutilized during each of these years. In fact, just two months prior to our visit, VA’s October 2011 source data showed a utilization of 45 percent for this building even though it was fully occupied.
Local VA officials who manage the second building VA discussed in its comments told us that the building was mostly unoccupied because they had recently acquired it from the Department of Defense and that multiple improvements had to be made before it could be occupied by staff. Based on this, the local officials told us that it could not have been utilized at 59 percent in October 2011 as VA source data indicated.
4. VA made a comment related to individually metered buildings. We clarified VA’s statement in the report so that it is consistent with these comments. 5. In reference to our findings on problems with cost savings associated with the June 2010 presidential memorandum, VA stated that it disagreed with findings in a previous GAO report (GAO-12-305) that we referenced. In its comments on GAO-12-305, VA officials did not concur with certain parts of the report related to decreasing energy costs and improving non-recurring maintenance contracting. However, we did not reference the previous GAO report on these matters. Rather, we referenced that report’s discussion on savings associated with reducing leased space through telework. VA confirmed the problems that the previous GAO team found with the savings associated with the telework program in its comments on GAO-12-305, stating that the “telework program is still in its infancy and actual real property savings requires reducing space that is currently leased. These reductions in leased space may not be fully realized in 2012.” As a result, VA stated in its comments on GAO-12-305 that the telework initiative was removed from the description of savings in its fiscal year 2013 budget. This is consistent with what we describe in this report. Therefore, VA’s restatement of its disagreement with findings in GAO-12-305 has no bearing on this report. In addition to the contact named above, David Sausville, Assistant Director; Amy Abramowitz; Russell Burnett; Kathleen Gilhooly; Raymond Griffith; Amy Higgins; Amber Keyser; Michael Mgebroff; John Mingus Jr.; Joshua Ormond; Amy Rosewarne; Minette Richardson; Sandra Sokol; and Elizabeth Wood made key contributions to this report.
The federal government has made some progress addressing previously identified issues with managing federal real property. This includes establishing FRPC, chaired by the Office of Management and Budget (OMB), which created the FRPP database managed by GSA. GAO was asked to determine the extent to which (1) the FRPP database accurately describes the nature, use, and extent of excess and underutilized federal real property, and (2) progress is being made toward more effective management of these properties.
GAO analyzed the data collection process and agency data, visited 26 sites containing excess and underutilized buildings from five civilian federal real property holding agencies with significant portfolios, and interviewed officials from these five agencies and OMB staff about how they collect FRPP data and manage excess and underutilized properties. The Federal Real Property Council (FRPC) has not followed sound data collection practices in designing and maintaining the Federal Real Property Profile (FRPP) database, raising concern that the database is not a useful tool for describing the nature, use, and extent of excess and underutilized federal real property. For example, FRPC has not ensured that key data elements, including buildings’ utilization, condition, annual operating costs, mission dependency, and value, are defined and reported consistently and accurately. GAO identified inconsistencies and inaccuracies at 23 of the 26 locations visited related to these data elements. As a result, FRPC cannot ensure that FRPP data are sufficiently reliable to support sound management and decision making about excess and underutilized property. The federal government has undertaken efforts to achieve cost savings associated with better management of excess and underutilized properties. However, some of these efforts have been discontinued and potential savings for others are unclear. For example, in response to requirements set forth in a June 2010 presidential memorandum for agencies to achieve $3 billion in savings by the end of fiscal year 2012, the General Services Administration (GSA) reported approximately $118 million in lease cost savings resulting from four new construction projects. However, GSA has yet to occupy any of these buildings, and the agency’s cost savings analysis projected these savings would occur over a 30-year period, far beyond the time frame of the memorandum. The five federal agencies that GAO reviewed have taken some actions to dispose of and better manage excess and underutilized property, including using these properties to meet space needs, consolidating offices, and reducing employee work space to use space more efficiently. However, they still face long-standing challenges to managing these properties, including the high cost of property disposal, legal requirements prior to disposal, stakeholder resistance, and remote property locations. A comprehensive, long-term national strategy would support better management of excess and underutilized property by, among other things, defining the scope of the problem; clearly addressing achievement goals; addressing costs, resources, and investments needed; and clearly outlining roles and coordination mechanisms across agencies. GAO recommends that, in consultation with FRPC, GSA develop a plan to improve the FRPP and that OMB develop a national strategy for managing federal excess and underutilized real property. GSA agreed with GAO’s recommendation and agreed with the report’s findings, in part. OMB agreed that real property challenges remain but raised concerns about how GAO characterized its findings on FRPP accuracy and other statements. GAO believes its findings are properly presented. The details of agencies’ comments and GAO’s response are addressed more fully within the report.
U.S. DOT established a minority and women’s business enterprise program for its highway, airport, and transit programs by regulation in 1980. The Surface Transportation Assistance Act of 1982 (STAA) contained the first statutory DBE provision authorizing a U.S. DOT DBE program. As amended, the provision requires that not less than 10 percent of the amounts made available by the act be expended through small businesses owned and controlled by socially and economically disadvantaged individuals—known as disadvantaged business enterprises or DBEs—unless the Secretary of Transportation determines otherwise. Following STAA, the DBE program continued to be reauthorized, and was last reauthorized in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). State DOTs must set overall goals for DBE participation in U.S. DOT-assisted contracts and may set DBE contract goals on individual contracts to help meet those overall goals. 49 C.F.R. § 26.45(g). Each contract goal on an individual U.S. DOT-assisted contract is expressed as the percentage of federal-aid highway funds the state DOT will expend on DBEs on the individual contract. When a state DOT sets DBE contract goals on individual U.S. DOT-assisted contracts, bidders on those contracts must make good faith efforts to meet those goals. The bidder can meet this requirement in one of two ways: (1) meet the goal on the individual U.S. DOT-assisted contract, or (2) document that it made adequate good faith efforts—meaning that the bidder took the necessary and reasonable steps to achieve the goal even though it did not succeed in obtaining enough DBE participation to do so. State DOTs are responsible for evaluating whether bidders made good faith efforts to meet their goals, and according to U.S. DOT officials, this evaluation is subject to FHWA review as appropriate. 49 C.F.R. § 26.51(d). State DOTs must meet as much of their DBE participation goal as possible using race-neutral methods—actions that assist all small businesses without consideration of DBE status. When a state cannot achieve its goal using race-neutral methods, it must use race-conscious methods—actions used to assist only DBEs—to meet the remaining portion of the goal. The primary race-conscious method is setting contract goals on individual U.S. DOT-assisted contracts. 49 C.F.R. § 26.51. See also 49 C.F.R. § 26.5. According to FHWA officials, most states use a combination of race-neutral and race-conscious methods to meet their state goals. FHWA approval of each contract goal is not necessarily required; however, FHWA may review and approve or disapprove any contract goal established. 49 C.F.R. § 26.51(e)(3). State DOTs report to FHWA two types of data on spending on DBEs: committed spending on DBEs and actual spending on DBEs. Committed spending on DBEs is the total amount of federal funds that state DOTs award or commit to DBEs in the fiscal year. Committed spending data on contracts awarded in the fiscal year include data on contracts that will be completed in the fiscal year as well as ongoing contracts. These data include completed and ongoing contracts because some highway contracts can be relatively short and can be awarded and completed within the same fiscal year, whereas other highway contracts can cover multiple years between award and completion. Actual spending on DBEs is the total amount of actual payments that state DOTs make to DBEs using federal funds on contracts that are completed in the fiscal year. Actual spending includes spending on contracts completed in the fiscal year and can include contracts that were awarded in previous years.
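The distinction between committed and actual spending described above can be illustrated with a short sketch. The following Python example is illustrative only: the contract records, dollar amounts, and field names are hypothetical assumptions, not U.S. DOT's or the state DOTs' actual reporting formats.

# Illustrative sketch of the committed-versus-actual distinction described above.
# Committed spending counts federal funds awarded or committed to DBEs on contracts
# awarded in the fiscal year (completed or ongoing); actual spending counts payments
# to DBEs on contracts completed in the fiscal year, even if awarded in earlier years.
# The records and field names are hypothetical examples, not U.S. DOT data.
contracts = [
    {"id": "A", "awarded_fy": 2016, "completed_fy": 2016, "dbe_committed": 200_000, "dbe_paid": 190_000},
    {"id": "B", "awarded_fy": 2016, "completed_fy": None, "dbe_committed": 500_000, "dbe_paid": 120_000},
    {"id": "C", "awarded_fy": 2014, "completed_fy": 2016, "dbe_committed": 300_000, "dbe_paid": 310_000},
]

def committed_spending(records, fiscal_year):
    # Contracts awarded in the fiscal year, whether completed or still ongoing.
    return sum(r["dbe_committed"] for r in records if r["awarded_fy"] == fiscal_year)

def actual_spending(records, fiscal_year):
    # Payments to DBEs on contracts completed in the fiscal year, regardless of award year.
    return sum(r["dbe_paid"] for r in records if r["completed_fy"] == fiscal_year)

print(committed_spending(contracts, 2016))  # 700000: contracts A and B (awarded in fiscal year 2016)
print(actual_spending(contracts, 2016))     # 500000: contracts A and C (completed in fiscal year 2016)

In a given fiscal year, the two measures can therefore cover different sets of contracts and need not sum to the same total.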
State DOTs award contracts to prime contractors, which may be DBEs, and prime contractors award subcontracts to DBE subcontractors, or commit to provide funds to DBE subcontractors. For each reporting period, the DBE regulations require state DOTs to provide data on the total amount of federal funds that are awarded to DBEs on U.S. DOT- assisted prime contracts, and the total amount of federal funds that are awarded or committed to DBEs on U.S. DOT-assisted subcontracts. 49 C.F.R. part 26, appendix B. Although applicable regulations refer to this as “Awards/Commitments,” for purposes of this report, we refer to this as “committed spending” or “committed spending on DBEs.” For certification of DBEs, U.S. DOT requires DBEs to be certified in each state where they want to bid on or participate in U.S. DOT-assisted contracts. DBEs are certified according to U.S. DOT’s regulatory eligibility requirements and certification procedures. Specifically, DBEs must be small businesses that are at least 51 percent owned by one or more individuals who are socially and economically disadvantaged, and that are managed, operated, and controlled by these owners. (See app. II for more details on the eligibility requirements.) Organizations that certify DBEs must take many steps to ensure that a DBE firm meets certification requirements—this includes, among other things, reviewing firms’ applications for certification; conducting on-site visits and interviews; and reviewing licenses, stock ownership, equipment, work completed, resumes of principal owners, and the bonding and financial capacity of the firm. Furthermore, DBEs are certified only in the type or types of work, such as paving, that they can perform. According to U.S. DOT, in 2009, about 27,000 DBEs were certified under its program. Generally, organizations within each state that receive DOT funds decide which organizations within the state can certify DBEs. While some states have multiple organizations certifying DBEs, others have one certifying organization. For example, in the five states we focused on as part of this review, four states (Florida, Minnesota, Missouri, and Wisconsin) have multiple organizations within the state that certify DBEs, and the remaining state (Washington) has one organization that certifies DBEs for the entire state. Each state is required by DBE regulations to have a unified approach to certification so that the certification decisions of one organization are honored by all the organizations receiving U.S. DOT funds in the state. This essentially allows for a “one-stop shop” for DBE firms seeking certification, meaning that a DBE only needs to apply for certification with one organization in the state, rather than apply for certification from all the organizations in the state with which the DBE wants to conduct work. The organizations that certify DBEs may include the state DOT, local airport authorities, state and local transit agencies, and city and county governments. For example, Florida has 13 organizations that certify DBEs—including Florida DOT, the Greater Orlando Aviation Authority, and Miami-Dade County. See figure 1 for examples of organizations that certify DBEs. The type of work a DBE performs and the DBE’s geographic location can play a role in some states in determining which organization in the state reviews a firm’s application for certification and certifies the firm as a DBE. 
State DOTs primarily review applications and certify DBEs that work on highway projects, while local airport authorities and state and local transit agencies primarily review applications and certify DBEs that work on airport and transit projects, respectively. Nevertheless, the DBEs certified by organizations other than state DOTs (such as local airport authorities and state and local transit agencies) can also work on highway projects if the type of work they are certified to perform can be applied to both airport or transit and highway projects. For example, if a DBE is certified to do paving, this DBE may be able to do paving work on both highway and airport projects. Geographic location may also play a role in determining which organization reviews a firm’s application for certification and certifies DBEs. For example, a county government in Wisconsin certifies DBEs from six nearby counties in the state, regardless of whether the DBE is interested in working on federal highway, transit, or airport projects. Federal agencies should perform a number of oversight activities to safeguard against fraud and to ensure that federal programs meet their objectives and comply with federal regulations. FHWA oversees the DBE program as part of its oversight of state DOTs’ implementation of federal-aid highway programs. FHWA offices—including FHWA’s Office of Civil Rights, a program office located in the agency’s headquarters office, and FHWA’s 52 division offices, which are located in each state, the District of Columbia, and Puerto Rico—oversee the DBE program. FHWA’s Office of Civil Rights is responsible for a number of civil rights programs, including the DBE program, and the division offices perform primary oversight of state DOT activities. In general, these offices oversee the DBE program through risk assessments, day-to-day monitoring of state DOT activities, and program reviews. FHWA’s annual risk assessments identify and assess risks across all federal-aid highway programs, including the DBE program. FHWA uses the results of these risk assessments to help it determine how to focus its oversight. Both FHWA’s program offices and division offices conduct these assessments and use the results to take actions to address the risks they identify, which have recently included the DBE program. In addition, FHWA uses the results of the program office and division office risk assessments to identify agencywide risk. Program office risk assessments. In 2010 FHWA’s Office of Civil Rights identified the DBE program as a risk area because of the potential for, among other things, a significant increase in fraud and abuse, improper program implementation, and costly litigation. Furthermore, the office identified specific risk areas within the DBE program, including DBE goals and DBE certifications. Given the Office of Civil Rights’ identification of the DBE program as a risk area, the office has taken actions, such as hiring a full-time DBE Program Manager, to help address the potential problems identified (e.g., fraud, improper program implementation, and other risks). Division office risk assessments. In 2010, 23 of 52 FHWA division offices identified the DBE program as a risk area. These divisions identified the DBE program as a risk area because of, among other things, challenges related to the stewardship and administration of state DOTs’ DBE programs and to contractor compliance with DBE program requirements. 
For example, one division office identified the DBE program as a risk area because of concerns regarding the stewardship and oversight of a state DOT’s DBE program that could lead to, among other things, confusion and lack of understanding about how to properly implement the DBE program requirements, increasing the opportunity for fraud and abuse. To address these risks, the division planned to provide training to state DOT officials on requirements specified in the DBE regulations and planned to conduct a program review of the state DOT’s implementation of specific DBE regulations, such as those related to monitoring and achieving contract goals. Agencywide risk identification. Based on the results of program office and division office risk assessments, FHWA identifies risks that are relevant for the entire agency. These risks become key drivers in developing FHWA’s strategic implementation plan, which reflects the administration’s priorities for the upcoming year. According to FHWA officials, FHWA first identified the DBE program as an agencywide high-risk area in 2007 because the DBE program accounted for 50 percent of the risks identified in the entire civil rights area. This designation as a high-risk area continued through 2010. In response to the agencywide designation of the DBE program as a high-risk area, FHWA has recently increased its oversight of the DBE program. FHWA division officials we interviewed from all five of the states we focused on said that they monitor state DOT activities on an ongoing basis to help oversee all federal-aid highway programs implemented by state DOTs, including the DBE program, and to help identify areas that need increased oversight attention. For example, one division official we interviewed said that as a part of his day-to-day oversight, he reviews the state DOT’s process for setting contract goals. In some instances, this division official said that he found that the state DOT had set all contract goals too low to meet the state DOT’s DBE goal. Furthermore, officials from all five of the division offices said that they participate in meetings with state DOTs and/or receive reports from state DOTs that help them identify areas where there may be problems and where they need to focus their attention. For example, one division office that we focused on as part of our review requires its state DOT to provide quarterly progress reports on DBE contract goals. The official we interviewed from this division office said that these reports help her identify issues, such as issues related to goals on individual U.S. DOT-assisted contracts, which she may not otherwise hear about from her daily interactions with the state DOT. In 2007 and 2008, FHWA conducted program reviews of all its federal-aid highway programs, including the DBE program. FHWA conducted these reviews to determine which federal highway programs required greater federal oversight and assistance. For the DBE program reviews, FHWA officials—including those from the divisions—reviewed each state DOT’s implementation of the DBE regulations to determine if the state DOT needed more federal oversight and assistance in implementing the regulations. Based on the results of the 2007 and 2008 reviews, FHWA’s Office of Civil Rights decided to conduct the DBE program reviews again in 2010 and 2011. Like the 2007 and 2008 reviews, the 2010 and 2011 reviews assessed how state DOTs implemented the DBE regulations. 
The 2010 and 2011 program reviews assessed, for example, whether a state DOT provided opportunities for public participation when establishing its DBE goal, as required by the DBE regulations. An official from FHWA’s Office of Civil Rights said that based on the results of the recent program reviews, the office plans to compile all the concerns and problems identified in the reviews in a report for FHWA division and state DOT leadership, and also implement improvements in state DOTs’ DBE programs. Officials from FHWA’s Office of Civil Rights said that they are still collecting the results of the program reviews from FHWA divisions and state DOTs and, as a result, have not yet completed an initial report that compiles all the concerns or problems identified nor implemented any improvements in state DOT DBE programs. In response to the current administration’s focus on small business programs (such as the DBE program), U.S. DOT’s interest in increasing accountability in DBE program implementation, and FHWA’s continued designation of the DBE program as an agencywide high risk area, FHWA and U.S. DOT have taken a number of steps in the past 2 years to increase its oversight of state DOT DBE programs. Some of the more significant actions are as follows: DBE Program Manager. FHWA hired a full-time National DBE Program Manager in 2010 to help oversee state DOTs’ implementation of the DBE program. Action Plans. Starting in 2010, FHWA’s Office of Civil Rights required that all FHWA divisions develop and submit action plans that, in part, describe division office oversight activities, identify areas in state DOT DBE programs that need improvement, and identify actions the division offices will take to improve their and the states’ leadership and management of the DBE program. The Office of Civil Rights also required that FHWA divisions submit quarterly reports that provide an update on what they are doing to implement the actions the divisions have identified to improve the leadership and management of the DBE program. Regulations. In January 2011, U.S. DOT updated its DBE regulations to, among other things, add a requirement for state DOTs to analyze in detail the reasons for any difference between their state DOT DBE goal and the amount of federal funds the state DOT committed to spend on DBEs for each fiscal year. State DOTs must then establish specific steps and milestones to correct the problems they identified in their analysis and to fully meet its DBE goal in the next fiscal year. These requirements went into effect in February 2011. According to U.S. DOT, the added requirement will help state DOTs understand, when applicable, why their DBE goals are not being met, and will increase state DOTs’ accountability for meeting DBE goals. Based on FHWA’s most recent data, 54 percent of state DOTs did not meet their DBE goals in fiscal year 2010 and would have been subject to this requirement in fiscal year 2010 if it had been in effect. See appendix III for further information on the number and percent of state DOTs meeting their DBE goals based on committed spending. National Review Team (NRT). In response to FHWA’s 2009 risk assessment, FHWA established the NRT to review six areas, including the DBE program, that posed a nationwide risk of misuse of Recovery Act funds. FHWA used the results of the NRT review to identify areas for improved oversight and training. 
For example, the NRT identified concerns about the process that some state DOTs were using to evaluate whether prime contractors bidding on contracts made adequate good faith efforts to meet DBE contract goals. Specifically, in one of these states, the NRT found that the state DOT awarded a majority of its contracts to prime contractors who did not meet the DBE contract goals, but provided documentation that they made good faith efforts to do so. While DBE regulations allow state DOTs to award contracts based on bidders’ good faith efforts and do not limit the number of contracts that can be awarded in this way, FHWA officials explained that awarding a high number of contracts on the basis of good faith efforts might be a reason state DOTs do not meet their state DBE goals. To address these concerns, the FHWA division in this state trained state DOT staff on this issue, tracked the number of contracts that the state DOT awarded based on good faith efforts on a monthly basis, and further reviewed the state DOT’s process for evaluating prime contractor good faith efforts to ensure that the state DOT was complying with the DBE regulations. Furthermore, on a nationwide level, FHWA’s Office of Civil Rights increased its training for FHWA’s division officials on this issue. In addition to these steps, FHWA took a number of other actions. For example, it increased its training on the DBE program for FHWA division offices. FHWA also increased its focus on the DBE program during national seminars and conferences. Furthermore, FHWA included the DBE program, along with other civil rights areas, in its 2011 strategic implementation plan. According to FHWA officials, the significance of this is that the 2011 plan marked the first time that civil rights programs, including the DBE program, were given agencywide attention. It is too early to determine the effectiveness of the recently implemented oversight activities described above. But each of the oversight activities could help FHWA protect against fraud in the DBE program, ensure that the DBE program is meeting its objectives, and identify areas where state DOTs may not be in compliance with the regulations. According to FHWA officials, if they find that a state DOT has fallen short in meeting any part of the DBE regulations, the FHWA division will work with the state DOT to bring the state DOT back into compliance. According to officials at FHWA’s Office of Civil Rights, should a state DOT refuse to comply with any part of the regulation without seeking a waiver from the Secretary of Transportation, FHWA would then consider other options, including withholding federal funds from the state DOT pursuant to regulations (49 C.F.R. § 26.101(a); see also 49 C.F.R. § 26.103). For example, one of the divisions we spoke with said that because of noncompliance with parts of the DBE regulations, it presented a letter of possible funds withholding to a state DOT. Subsequently, the state DOT took steps to comply with the regulations as required. FHWA faces two fundamental problems with the DBE data it collects from state DOTs to assess whether state DOTs have met their DBE goals. First, actual DBE spending data reported by state DOTs cannot be meaningfully compared to state DOTs’ DBE goals to measure whether goals were met. Second, the proxy data that FHWA uses to measure whether goals were met—data on committed spending on DBEs—may or may not be a reasonable proxy for state DOTs’ actual spending on DBEs.
As a result, FHWA does not know whether its data on committed spending can be relied on to evaluate state DOTs’ progress in meeting goals or whether state DOTs would benefit from FHWA assistance to meet their goals. Also, FHWA’s reporting of data on committed spending to describe progress towards DBE goals does not include statements about potential limitations of the data—namely that the data on committed spending on DBEs might not reflect actual spending. Including statements about the potential limitations of committed spending data could help FHWA increase transparency in the reporting of DBE spending data. Federal internal control standards call for an agency to track major achievements, such as spending, and compare these to its goals. FHWA is unable to make this comparison using data on actual spending on DBEs because the actual spending data that FHWA collects in the Uniform Reports cover different time frames, and therefore different sets of contracts, than state DOTs’ DBE goals. Actual spending data are based on completed contracts—some of which could have been awarded in previous fiscal years—while a state DOT’s DBE goal reflects what the state DOT expects to spend on DBEs on contracts that are awarded or committed in the current fiscal year. This difference in time frames and when contracts are awarded makes it difficult to compare a state DOT’s actual spending with its DBE goal. FHWA officials roughly estimate that about 50 percent of contracts are completed in the fiscal year in which they were awarded. The remaining estimated 50 percent of contracts cover multiple years between award and completion, and actual spending on these contracts is not included in Uniform Reports until the contracts are completed. Without comparing actual spending on DBEs to a state DOT’s DBE goals, FHWA may not be able to effectively track whether a state DOT has met its DBE goals as called for by internal controls. FHWA uses committed spending on DBEs as a proxy for actual spending on DBEs to determine if state DOTs are meeting their DBE goals. According to U.S. DOT officials, FHWA’s practice of using committed spending is a convention necessary to provide timely reporting; if FHWA used actual spending to determine whether DBE goals were met, it would have to wait several years for some contracts to be completed. Based on the committed spending data in the Uniform Reports, about half of the state DOTs met their DBE goals each fiscal year from fiscal years 2006 through 2010. See appendix III for the number and percentage of state DOTs meeting their DBE goals from fiscal years 2006 through 2010, based on committed spending data. FHWA has not conducted a nationwide analysis comparing committed spending to actual spending to know whether it is a reasonable proxy for actual spending. Thus, committed spending on DBEs may or may not definitively show whether state DOTs met their DBE goals. According to FHWA and state DOT officials, committed spending could be similar to actual spending, or it could differ from actual spending. Specifically, an FHWA headquarters official told us about two cases where she personally compared committed to actual spending, and found that the committed spending was close to actual spending. The headquarters official said that in one of the two cases, she compared committed to actual spending on individual contracts and in the second case, she compared committed to actual spending on completed contracts historically, over a period of time.
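The kind of comparison described above could, in principle, be extended nationwide. The sketch below is only an illustration of that idea; the state names, dollar totals, and thresholds are hypothetical, and the sketch is not FHWA's method or data.

# Illustrative sketch only: compares committed to actual DBE spending on completed
# contracts, using hypothetical state-level totals rather than actual FHWA data.
completed_contract_totals = {
    # state: (DBE dollars committed at award, DBE dollars actually paid at completion)
    "State A": (12_000_000, 11_600_000),
    "State B": (8_500_000, 6_900_000),
    "State C": (5_000_000, 5_400_000),
}

for state, (committed, actual) in completed_contract_totals.items():
    print(f"{state}: actual spending was {actual / committed:.0%} of committed spending "
          f"({actual - committed:+,} dollars)")
# If the ratios cluster near 100 percent, committed spending may be a reasonable proxy
# for actual spending; large or inconsistent gaps would suggest that it is not.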
Separately, FHWA and state DOT officials we contacted said that committed spending could be lower or higher than actual spending. For example, an FHWA official from one state we focused on said prime contractors’ committed spending on DBEs may be higher than actual spending on some projects because of changes to the contract that reduce the amount of work performed by DBEs. In another state, a state DOT official said a prime contractor’s committed spending on DBEs at the start of a contract may be 8 percent, but the actual spending on DBEs at the end of the contract may be only 2 percent. The official said in fiscal year 2008 the state DOT did not meet its fiscal year 2008 DBE goal because the prime contractor on a highway project spanning multiple years spent less on DBEs than committed. FHWA’s NRT also found that committed spending may not match actual spending and thus may not provide a complete picture of whether state DOTs are meeting their DBE goals. The NRT offered findings and recommendations to two of the five state DOTs that we contacted during our review pertaining to the issue of data on committed and actual spending. For example, the NRT found that one state DOT did not require a prime contractor to provide information to the state DOT on actual spending on DBEs if the contractor provides work to a DBE after the DBE goal on the contract is met. Thus, according to the NRT report, the state DOT may not accurately capture or report all actual spending to DBEs, and as a result may spend more on DBEs than committed. In another state, the NRT recommended that the state DOT compare committed to actual spending on DBEs on completed DBE subcontracts to help ensure that the DBE program was compliant at the project level. Furthermore, in addition to the NRT’s findings, the U.S. DOT Office of Inspector General recommended in another state that the state DOT capture all actual spending on DBEs regardless of whether the DBE goal on a contract had already been met. Without a nationwide analysis comparing committed spending to actual spending, FHWA cannot be certain whether committed spending reflects actual spending for DBEs in all state DOTs. Therefore, FHWA does not know whether its data on committed spending can be relied on to evaluate state DOTs’ progress in meeting goals; hold state DOTs accountable for meeting their DBE goals, as emphasized in U.S. DOT’s update to its regulations; make program decisions based on whether state DOTs are meeting their DBE goals on an annual basis; or provide assistance to state DOTs that are not meeting their goals. Furthermore, if committed spending data on DBEs do not reflect actual spending on DBEs, then state DOTs might potentially take inappropriate action or inaction, depending on whether the data show a state DOT has met its DBE goal. For example, if the data on committed spending show a state DOT is meeting its goal, it might, as one state DOT said it does, discontinue setting DBE contract goals, which are DBE goals on individual U.S. DOT-assisted contracts. If the data on committed spending do not reasonably match actual spending, however, then the state DOT might stop its use of DBE contract goals prematurely. U.S. DOT has a departmentwide working group that, according to a U.S. DOT official, considers various improvements to the administration of the DBE program, including improvements to the various reporting forms used in the DBE program. 
As part of its efforts, this working group is considering revising the form—the Uniform Report—that state DOTs use to report committed and actual spending on DBEs. In February 2011, the members reviewing the Uniform Report held their first meeting but did not determine what changes to recommend. This working group could provide an opportunity for FHWA to identify options it can use to evaluate whether the committed spending data it uses to determine if state DOTs have met their DBE goals is a reasonable proxy for actual spending and whether this data can be relied on to measure progress towards goals. For example, on a nationwide basis, FHWA could compare committed to actual spending—using historical data on committed and actual spending on completed contracts—to determine whether committed spending reflects actual spending on DBEs. We have previously reported that while no data are perfect, agencies need to report any limitations of performance data to provide transparent information on government operations so that decision makers, such as members or committees of Congress and program managers, can use the information appropriately. In addition, recent initiatives, such as a June 2011 Executive Order on accountable government and the GPRA Modernization Act of 2010 (GPRAMA), have placed increased emphasis on transparency, including enhancing the transparency of federal spending. Transparency of DBE spending data is important because it helps stakeholders oversee and monitor progress of the DBE program. However, FHWA’s reporting of data on committed spending to describe progress towards DBE goals does not include statements about potential limitations of the data—namely that the data on committed spending on DBEs might not reflect actual spending. For example, in a March 2009 hearing on the DBE program, U.S. DOT reported to the House of Representatives Committee on Transportation and Infrastructure that DBEs were awarded $3.3 billion in contracts, representing more than 11 percent of the total federal amount provided through U.S. DOT-assisted contracts in fiscal year 2008. (The $3.3 billion reflects awards to DBEs performing work on all U.S. DOT-assisted contracts, i.e., FHWA-, FAA-, and FTA-assisted contracts. Although the $3.3 billion is for all U.S. DOT-assisted contracts, this dollar amount is relevant for our report—which focuses on only FHWA—because it includes awards to DBEs on FHWA-assisted contracts.) In this example, U.S. DOT did not explicitly state that “awarded contracts” (committed spending) might not be the same as actual spending, and that it was using “awarded contracts” as a convention to facilitate reporting. Including statements about potential limitations of committed spending data in the information it provides to decision makers, including Congress, could help FHWA increase transparency in the reporting of DBE spending data. U.S. DOT and three of its operating administrations—FHWA, FTA, and FAA—oversee, review, and monitor the certification activities of all the organizations that certify DBEs in a state. Officials we interviewed from U.S. DOT’s and FHWA’s Offices of Civil Rights described an oversight approach in which the oversight of the certification activities of the organizations that certify DBEs is delegated to one of the administrations—depending on which administration provides federal funding to the organization.
For example, FHWA is responsible for overseeing the certification activities of state DOTs—which primarily certify DBEs that work on highway projects—because these state DOTs receive federal-aid highway funds. FTA and FAA are responsible for overseeing certifying organizations, such as state transit agencies and local airport authorities, that receive federal transit and aviation funds, respectively. Defining roles and responsibilities in this way can help federal agencies effectively oversee programs, particularly when multiple federal entities are involved in oversight. In addition to FHWA’s, FAA’s, and FTA’s responsibilities for certification oversight, officials from U.S. DOT’s Office of Civil Rights said that their role, prior to the 2011 update to the DBE regulations, was and continues to be to review and make decisions on DBE certification appeals. According to officials, under the updated DBE regulations that went into effect in February 2011, U.S. DOT’s Office of Civil Rights will have additional responsibilities to enforce and oversee DBE certifications. FHWA officials we interviewed said that their responsibilities for overseeing state DOT DBE programs included overseeing the state DOTs’ certification activities. As discussed earlier in this report, FHWA oversees state DOTs’ implementation of the DBE program—including how state DOT’s certify DBEs—using risk assessments and program reviews. For example, in conducting their annual risk assessments, FHWA’s Office of Civil Rights and FHWA division offices consider risks related to the DBE program, including how state DOTs certify DBEs. Furthermore, in the 2010 and 2011 program reviews, FHWA division offices assessed whether state DOTs were following U.S. DOT’s regulatory eligibility requirements and certification procedures when they certified DBEs. In addition to these activities, U.S. DOT’s Inspector General’s Office (IG) has a role in overseeing DBE certifications. In April 2011, the IG announced that it would conduct an audit of the DBE program, and officials from the IG indicated that they plan to review certifications of DBEs as part of this audit. In addition to risk assessments and program reviews, all five FHWA division offices we focused on conducted additional oversight activities to oversee the certification activities of state DOTs. As is the case with the oversight of other aspects of the DBE program and other federal-aid highway programs, FHWA divisions use their discretion to determine how much and how often to carry out these additional oversight activities. For example, one FHWA division participates in annual reviews of the state’s DBE certification processes, procedures, and activities—which includes conducting spot-checks of selected certification files and interviewing personnel engaged in managing, supervising, and performing certification activities. Another division official said that because DBE certifications are not a high-risk area in his state, he oversees the state DOT’s certification activities when there are questions or concerns with a specific certification decision. Furthermore, FHWA officials in three of the four states that have multiple certifying organizations said that they rely on FAA and FTA to oversee the certification activities of organizations other than the state DOT (such as transit agencies and airport authorities). FTA’s and FAA’s oversight of these organizations is relevant for federal-aid highway projects because these organizations certify DBEs that can work on highway projects. 
For example, officials from one certifying organization we interviewed said that while it is a transit agency, about 40 percent of the DBEs it certifies could work on transit projects as well as other types of projects, such as highway projects, because the skills needed for transit and highway projects are similar. According to U.S. DOT officials, it is common for a DBE to work on more than one type of project, such as on airport and highway projects. In general, FAA and FTA officials we obtained information from said that they conduct compliance reviews to oversee the certification activities of these other organizations. See appendix IV for more information on the actions FAA and FTA take to oversee the certification activities of the organizations they are responsible for overseeing. U.S. DOT’s DBE program has existed for more than 30 years and has provided billions of dollars to DBEs across the country. FHWA’s oversight of how state DOTs implement their DBE programs is critical for ensuring that the programs are implemented according to U.S. DOT’s DBE regulations. FHWA has taken several steps, some of which have been recently implemented, that could help ensure state DOTs’ compliance with DBE regulations. However, FHWA faces fundamental problems in the data it uses to oversee DBE participation. Knowing whether a program is meeting its goals and ensuring that data accurately reflect federal dollars spent are primary responsibilities of oversight. Without addressing its fundamental data problems, FHWA cannot effectively make program decisions and implement DBE regulations. For example, if the extent to which state DOTs are meeting their goals is unclear, FHWA will not be able to effectively hold state DOTs accountable for meeting their DBE goals, as emphasized in U.S. DOT’s update to its regulations. Further, without addressing its data problems, FHWA cannot be sure that the data showing that about half of the state DOTs met their DBE goals from fiscal years 2006 through 2010 are accurate. U.S. DOT’s working group that considers various improvements to the administration of the DBE program provides FHWA with an opportunity to identify options it can use to evaluate whether the committed spending data it uses to determine if state DOTs have met their DBE goals is a reasonable proxy for actual spending and whether this data can be relied on to measure progress towards goals. Additionally, including statements about the potential limitations of committed spending data in information it provides to decision makers could help FHWA increase transparency in the reporting of DBE spending data. To know whether its data on committed spending can be relied on to determine state DOTs’ progress in meeting goals, to enhance FHWA’s ability to know whether state DOTs meet their DBE goals, and to help increase transparency in the reporting of spending on DBEs, the Secretary of Transportation should direct the FHWA Administrator to take the following two actions:
1. Evaluate whether its committed spending data is a reasonable proxy for determining whether state DOTs are meeting their DBE goals.
2. In the information it provides to decision makers, including Congress, include statements about potential limitations of the data it uses to determine state DOTs’ progress towards goals.
We provided a draft of this report to the Department of Transportation for review and comment. We received e-mail and oral comments on the draft report from U.S. DOT through the department’s liaison.
Our draft report recommended that (1) FHWA identify and use data that are reliable and accurately reflect whether state DOTs have met their DBE goals, and (2) FHWA clearly note in reports to decision makers, including Congress, that FHWA’s data might not represent actual DBE spending until FHWA identifies and uses reliable data. U.S. DOT’s comments on our draft recommendations covered two broad areas: the reliability of committed spending data and data limitations. Specifically, U.S. DOT commented that the committed spending data that FHWA is using is the most reliable and accurate data available to determine on a timely basis whether state DOTs are meeting their DBE goals. U.S. DOT also commented that the information presented in our report on committed spending does not relate to the reliability of the committed spending data but rather relates to the reporting of these data. However, as stated in our report, FHWA may or may not be able to rely on committed spending data to measure progress towards goals because FHWA does not know whether committed spending is a reasonable proxy for actual spending. We clarified our first recommendation to better reflect the need for FHWA to evaluate if the proxy data can be relied on to determine whether state DOTs are meeting their DBE goals. U.S. DOT agreed to consider our modified recommendation. Regarding the second recommendation, U.S. DOT officials commented that ensuring that the appropriate methodological disclosures are included in their reporting was not enough of an issue to warrant a recommendation. As we note in our report, including statements about the potential limitations of spending data is important because it improves transparency of the data so that decision makers can oversee and monitor progress of the DBE program appropriately. Consequently, we retained this recommendation with slightly revised language. U.S. DOT officials noted that they would re- evaluate their disclosures with regard to the data used to determine if DBE goals are met. Finally, U.S. DOT provided technical comments, which we incorporated as appropriate throughout the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the U.S. Department of Transportation and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report focuses on the Federal Highway Administration’s (FHWA) oversight of the U.S. Department of Transportation’s (DOT) Disadvantaged Business Enterprise (DBE) program on federally assisted highway projects. Specifically, the objectives of this report were to examine how FHWA (1) oversees state DOTs to ensure that they are implementing their DBE programs in accordance with applicable regulations, (2) assesses whether state DOTs have met their DBE goals, and (3) oversees organizations that certify DBEs that work on federal-aid highway projects. To address all three of our objectives, we reviewed relevant laws, regulations, U.S. DOT and FHWA documents on the DBE program, and GAO and other reports. 
Specifically, we reviewed the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) and other relevant legislation, such as the Surface Transportation Assistance Act of 1982 (STAA), which authorized U.S. DOT’s DBE program. In addition, we reviewed the GPRA Modernization Act of 2010 (GPRAMA) and Executive Order 13576 on delivering an efficient, effective, and accountable government. We also reviewed U.S. DOT’s regulations on the DBE program that describe state DOTs’ requirements for implementing the DBE program, as well as proposed and final rules that U.S. DOT published in the Federal Register which recently amended DBE regulations. We also reviewed and analyzed U.S. DOT, FHWA Office of Civil Rights, and FHWA division office documents on the DBE program, including procedures and guidance on DBE goals, certifications, DBE program implementation, and oversight of the DBE program. Furthermore, we reviewed and analyzed prior GAO reports and other reports on the DBE program, oversight, and accountability in the federal government, including GAO standards and guidance on internal controls. We used some of these reports to assess FHWA’s oversight of the DBE program. Furthermore, we conducted semistructured interviews on topics related to our objectives. In particular, we reviewed documentation and interviewed officials from U.S. DOT and FHWA in headquarters, including officials from U.S. DOT’s Office of General Counsel, U.S. DOT’s Office of Civil Rights, and FHWA’s Office of Civil Rights. In addition, we conducted semistructured interviews with representatives from associations, including the American Association of State Highway and Transportation Officials, Associated General Contractors of America, American Road & Transportation Builders Association, Conference of Minority Transportation Officials, and National Association of Minority Contractors. We interviewed officials from these associations because they are national associations knowledgeable about the DBE program, and because these national associations represent various stakeholders involved in the DBE program, such as state DOT officials and contractors. We also reviewed documentation from and interviewed officials in five states: Florida, Minnesota, Missouri, Washington, and Wisconsin. We judgmentally selected these states to obtain variation in geographic location, state population, state DOT use of race-conscious and/or race- neutral methods to meet DBE goals, whether state DOTs met their overall DBE goals, and number of certifying organizations within the states. We also considered whether the state used a disparity study to determine its DBE participation goal, whether the state was involved in litigation regarding the DBE program, whether the state was located within the jurisdiction of the U.S. Court of Appeals for the Ninth Circuit, and recommendations from stakeholders who are familiar with the DBE program. We visited Florida because it was the only state mentioned by stakeholders that met its goal by using only race-neutral methods. In each of the selected states, we interviewed FHWA division officials and state DOT Civil Rights and DBE Program managers. Finally, in Florida, Wisconsin, and Washington, we interviewed prime contractors and DBE firms, or organizations in the state that represented the DBE firms. 
In addition to these efforts, to describe how FHWA oversees state DOT DBE programs to ensure that the state DOTs are complying with DBE regulations, we reviewed and analyzed information related to FHWA’s oversight activities. For example, we obtained and reviewed documentation from FHWA’s Office of Civil Rights and FHWA divisions on FHWA’s risk assessment process and DBE program reviews. We also conducted semistructured interviews with the National Review Team (NRT) that FHWA established to assess DBE program implementation on Recovery Act projects. We also reviewed the findings of the NRT’s review, which provided a programmatic assessment of oversight at a national level while also providing insights for the specific states we selected for this review. We also reviewed the Action Plans from our five selected states to see how those divisions explained how they oversee state DOT DBE programs. To determine how FHWA assesses whether state DOTs have met their DBE goals, we reviewed and analyzed FHWA’s national data on state DOTs’ committed and actual spending and how FHWA determined whether state DOTs achieved their DBE goals over a 5-year period (fiscal years 2006 through 2010) for state DOTs in all 50 states, the District of Columbia, and Puerto Rico. FHWA officials said they compiled the national data from state DOTs’ Uniform Report of DBE Awards or Commitments and Payments (commonly referred to as the Uniform Report). To help ensure the accuracy of the national data, we conducted selected quality checks of the data and we discussed and resolved any inconsistencies in the data we identified with the appropriate agency officials. We also compared FHWA’s national data to the data in the Uniform Reports for the five state DOTs that we contacted, and resolved inconsistencies with FHWA and the state DOTs. Given our review of the data provided to us by FHWA, we identified problems with the data FHWA uses to assess whether state DOTs achieved their DBE goals. These issues are discussed in this report. Even so, our review of FHWA’s national data provided us with a perspective on how FHWA compiled and used these data. Additionally, because data on actual spending on DBEs cannot be used to determine if state DOTs met their DBE goals, in appendix III, we determined the number and percentage of state DOTs meeting overall DBE goals based on committed spending data on DBEs for all state DOTs to illustrate orders of magnitude. We did not evaluate the appropriateness of the state DOT DBE goals. Finally, to determine how FHWA oversees organizations that certify DBEs, we obtained information or interviewed officials from U.S. DOT’s and FHWA’s Offices of Civil Rights, and the Federal Aviation Administration’s (FAA) and Federal Transit Administration’s (FTA) Offices of Civil Rights. Although our report focuses on FHWA’s oversight of the DBE program, we obtained information from officials at FAA and interviewed an official from FTA since DBEs can be certified to work on highway, airport, and transit projects, and since FAA and FTA, in addition to FHWA, can be involved in overseeing the certifications of DBEs that work on federal-aid highway projects. We did not examine FAA’s or FTA’s oversight of the DBE program on federally funded airport and transit projects, or airport concessions contracts. We reviewed how FHWA oversaw the certification activities of the organizations in the state. 
In addition, in each of the states selected, we interviewed state or local officials from at least two organizations that certify DBEs within each state, if such existed—such as state transit agencies and local airport authorities. We judgmentally selected the certifying organizations based on their geographic location, whether they certified DBEs that work on highway projects, and whether the organization was a state DOT. We conducted this performance audit from September 2010 to October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To be a Disadvantaged Business Enterprise (DBE), firms must meet certain regulatory eligibility requirements concerning socially and economically disadvantaged status, business size, ownership, and control. Table 1 summarizes the key elements of each of the eligibility requirements. The U.S. Department of Transportation’s (DOT) Disadvantaged Business Enterprise (DBE) regulations require that each state DOT set an annual goal for DBE participation in federal-aid highway projects, expressed as the percent of federal-aid highway funds it will expect to spend on DBEs for contracts that are awarded or committed in a fiscal year. The regulations also require that all state DOTs report to the Federal Highway Administration (FHWA) the total amount of federal funds they commit to spend on DBEs using a form called the Uniform Report of DBE Awards or Commitments and Payments (commonly referred to as the Uniform Report). FHWA uses this data on committed spending on DBEs to determine whether state DOTs are meeting their DBE goals each fiscal year. Based on the committed spending data in the Uniform Reports, about half of the state DOTs met their DBE goals each fiscal year from fiscal years 2006 through 2010. See table 2. The Federal Aviation Administration (FAA) and the Federal Transit Administration (FTA) oversee the certification activities of organizations they provide funding to, such as airport and transit organizations. While this report does not focus on these administrations’ oversight of Disadvantaged Business Enterprise (DBE) certifications, we obtained information from officials at FAA and interviewed an FTA official to obtain a general understanding of their involvement in overseeing DBE certifications. Some of their oversight activities are described in the following sections. FAA officials said that they oversee the certification activities of airport authorities when they conduct compliance reviews of these authorities. These compliance reviews can include an analysis of an airport authority’s responsibilities for DBE certifications or the airport authority’s capacity as a certifying organization. FAA officials also indicated that they may also ask the airport authorities and other certifying organizations in the state to review particular certification decisions or procedure as a result of a complaint, investigation, compliance review, U.S. Department of Transportation Office of Inspector General finding, or court determination. 
An FTA official we interviewed explained that although FTA usually does not oversee the certification activities of any organizations that certify DBEs, including certification decisions made by transit agencies, it provides technical assistance to the organizations on certification issues. In addition, FTA reviews the DBE program during its triennial reviews and, when funding is available, conducts certification compliance reviews. In 2009, FTA began conducting compliance reviews of each state’s certification procedures and standards to ensure that these activities align with the U.S. Department of Transportation’s DBE regulations on certifications. According to an FTA official, FTA has completed between six and ten certification compliance reviews to date and has plans to conduct more reviews in subsequent years. In addition to the contact named above, Catherine Colwell, Assistant Director; Lauren Calhoun; Colin Fallon; Roshni Davé; Peter Del Toro; Leia Dickerson; Joseph Fread; Sara Ann W. Moessbauer; Josh Ormond; Lisa Shibata; and Sandra Sokol made key contributions to this report. | The U.S. Department of Transportation's (DOT) Disadvantaged Business Enterprise (DBE) program aims to increase the participation of small businesses owned and controlled by socially and economically disadvantaged individuals--known as DBEs--in highway contracting. In 2009, U.S. DOT awarded, through state and local governments, about $4 billion to DBEs nationwide. State DOTs are required to establish DBE programs and implement them on federal-aid highway projects. This report responds to a congressional request to examine U.S. DOT's Federal Highway Administration's (FHWA) oversight of state DOT DBE programs. It examines how FHWA (1) oversees state DOTs to ensure they implement their DBE programs according to applicable regulations, (2) assesses whether state DOTs have met their DBE goals, and (3) oversees organizations that certify businesses as DBEs. GAO analyzed FHWA data; reviewed relevant laws and regulations; and interviewed FHWA and state DOT officials from five states, selected to obtain variation in, among other things, the methods state DOTs use to meet DBE goals. FHWA uses a risk-based approach, which includes conducting risk assessments and day-to-day monitoring, to oversee DBE programs that state DOTs implement. In response to FHWA's designation of the DBE program as an agencywide high-risk area from 2007 through 2010, among other reasons, FHWA recently increased its oversight of state DOT DBE programs. For example, in 2010, FHWA hired a full-time DBE Program Manager and required FHWA division offices in each state to explain to FHWA headquarters how they oversee their state DOTs' DBE programs. While these steps could help FHWA ensure state DOT compliance with regulations, it is too early to assess their effectiveness. Although FHWA has increased its oversight, FHWA faces two fundamental problems with the DBE data it collects from state DOTs to assess whether state DOTs have met their DBE goals. First, the data that FHWA collects from state DOTs on actual spending on DBEs can cover multiple fiscal years and cannot be meaningfully compared to state DOTs' DBE goals, which reflect the percent of federal-aid highway funds state DOTs will expect to spend on DBEs for one fiscal year. Thus, FHWA may not be able to effectively track whether state DOTs have met their goals as required by federal internal control standards.
Second, data on committed spending on DBEs--the proxy measure that FHWA uses instead to measure whether goals were met--shows that about half of the state DOTs met their DBE goals each fiscal year from fiscal years 2006 through 2010; however, FHWA has not conducted a nationwide analysis comparing committed to actual spending to know whether committed spending reflects actual spending for DBEs in all state DOTs. Thus, FHWA does not know whether its data on committed spending can be relied on to evaluate a state DOT's progress in meeting DBE goals. Ensuring that committed spending data are a reasonable proxy is important because state DOTs and FHWA make program decisions based on this information. U.S. DOT's working group that considers various improvements to the administration of the DBE program could provide FHWA with an opportunity to identify options it can use to evaluate its proxy data. Also, while FHWA uses committed spending data to facilitate timely reporting of whether state DOTs have met their goals, FHWA's reporting of data on committed spending to describe progress towards DBE goals does not include statements about potential limitations of the data--namely that the data on committed spending on DBEs might not reflect actual spending. FHWA oversees the certification activities of state DOTs, which certify that DBEs primarily working on federal-aid highway projects meet federal eligibility requirements. Other U.S. DOT administrations--the Federal Aviation Administration and the Federal Transit Administration--oversee other certifying organizations, such as local airport authorities and state transit agencies, that certify DBEs for work primarily in those areas; such DBEs might also have the skills required (e.g., paving) to work on highway projects. FHWA divisions use their discretion to determine how much and how often to oversee state DOT DBE certification activities. GAO recommends that FHWA (1) evaluate its committed spending data to determine if it is a reasonable proxy and (2) include statements in information provided to decision makers about potential data limitations. U.S. DOT provided comments on the draft recommendations; GAO clarified the recommendations based on U.S. DOT's comments. U.S. DOT agreed to consider the recommendations. |
Enacted in 1982, JTPA is the largest federal employment training program, with titles II-A and II-C intended to prepare economically disadvantaged adults and youths, respectively, for entry into the labor force. JTPA emphasizes state and local government responsibility for administering federally funded job training programs. In fiscal year 1995, JTPA title II-A and II-C programs received approximately $1.6 billion in funding. JTPA training programs annually provide employment training for specific occupations and services, such as job search assistance and remedial education, to roughly one million economically disadvantaged individuals. Training is provided in local service delivery areas (SDA) through service providers, such as vocational-technical high schools, community colleges, proprietary schools, and community-based organizations. The program objectives are to increase earnings and employment and to reduce welfare dependence for participants of all ages. During the NJS, participation in JTPA involved roughly 3 to 4 months of training at an average cost of about $2,400 per participant. In 1986, Labor commissioned the NJS to evaluate the impact of JTPA on adults and youths because previous findings on the effects of job training programs had been hampered by poor data and statistical problems. The NJS randomly assigned persons who sought JTPA services, and were eligible for them, to a treatment group or a control group. The treatment group was offered JTPA training, and the control group was not. The study was intended to ensure that the two groups would not differ systematically in any way except access to the program, so any subsequent differences in outcomes could be attributed solely to JTPA. The study included over 20,000 eligible participants who applied for JTPA services between November 1987 and September 1989 in 16 local SDAs. The study followed up on a sample of people in the treatment and control groups 18 months after assignment and then again at 30 months. The NJS showed mixed results on the impact of JTPA programs. Adult women assigned to JTPA training had significantly higher earnings than the control group of adult women after 18 and 30 months, but the treatment group of adult men, as well as of both male and female youths, did not have significantly higher earnings than its respective control groups. Participants assigned to receive JTPA training did not have significantly greater earnings than control group members 5 years after their assignment. For some of the four targeted worker categories—adult men, adult women, male youths, and female youths—treatment group earnings exceeded those of the control group in some of the intervening years, but any statistically significant effects disappeared by the fifth year. Annual earnings of adult men increased in each year following assignment for both the treatment and control groups. As shown in figure 1, in the first year after assignment, the average annual earnings of adult men in the treatment group grew from about $4,400 to about $6,900. This group’s earnings continued to rise in the subsequent years, reaching approximately $8,700 in the fifth year after assignment to receive JTPA training. The earnings of adult men in the control group, which did not receive JTPA training, also rose following assignment, but this group’s earnings were less than those of the treatment group for each of the 5 years. After 5 years, the difference between earnings of the treatment and control groups was not statistically significant. 
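To illustrate what such a test of statistical significance involves, the sketch below (in Python) computes a simple large-sample test of the difference in mean annual earnings between a treatment group and a control group. The means, standard deviations, and sample sizes are made-up summary figures, not NJS data, and the function name is ours.

import math

# Illustrative sketch with hypothetical summary statistics (not NJS data):
# a large-sample two-sample test of the difference in mean annual earnings.
def difference_in_means_test(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Return the earnings difference, its standard error, and an approximate z statistic."""
    diff = mean_t - mean_c
    se = math.sqrt(sd_t ** 2 / n_t + sd_c ** 2 / n_c)
    return diff, se, diff / se

# Hypothetical fifth-year earnings: treatment group versus control group.
diff, se, z = difference_in_means_test(mean_t=8_700, sd_t=9_000, n_t=3_000,
                                        mean_c=8_400, sd_c=9_000, n_c=2_000)
print(f"Difference in mean earnings: {diff:,.0f} dollars "
      f"(standard error {se:,.0f} dollars, z = {z:.2f})")
# With |z| below roughly 1.96, a difference of this size would not be judged
# statistically significant at the 5 percent level, even though the treatment
# group's average earnings are nominally higher.

Because individual earnings vary widely, a difference of a few hundred dollars in group averages can fall within the range attributable to chance, which is the sense in which the differences discussed below are or are not statistically significant.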
Five years after assignment, the treatment group’s earnings had exceeded those of the control group by approximately $300 to $500 annually, but only in the first 3 years were these differences statistically significant. Earnings of adult women showed a pattern similar to those of adult men, increasing in each year after assignment. Figure 2 shows that the annual earnings of adult women assigned to the treatment group increased from approximately $2,800 in the year of assignment to approximately $4,700 in the first year following assignment. This group’s earnings continued to climb, reaching approximately $6,600 in the fifth year. Earnings of adult women in the control group followed a similar pattern, but this group’s earnings were lower than those of the treatment group in each year, reaching approximately $6,200 during the fifth year. As with the earnings of adult men, 5 years after assignment the difference between the treatment and control groups’ annual earnings was not statistically significant. However, during the first 4 years after assignment, the differences between the treatment and the control groups’ earnings were statistically significant in each year, with treatment group earnings approximately $300 to $600 higher than control group earnings annually. The earnings of male youths in the control group, like those of adult men and adult women, increased in each year following assignment. Figure 3 shows that the earnings of male youths in the treatment group increased from approximately $2,900 in the year of assignment to approximately $4,600 in the first year after assignment. This group’s earnings continued to grow during the 5-year period, reaching a high of approximately $7,600 in the fifth year. The earnings of male youths in the control group also rose during the 5-year period following assignment, climbing from approximately $4,800 in the first year to approximately $6,800 in the fifth year. We found no significant difference between the treatment and control groups’ annual earnings 5 years after assignment. Although the control group’s earnings were higher than the treatment group’s during the first 3 years following assignment, the differences, which ranged from approximately $200 to $400 each year, were not statistically significant. During the fourth and fifth years, the treatment group had higher earnings than the control group, but these differences too were not statistically significant. Earnings of female youths showed a pattern similar to that of male youths, growing in each year following assignment. Earnings of female youths in the treatment group rose from approximately $2,000 during the year of assignment to approximately $3,300 in the first year following assignment (see fig. 4). This group’s earnings continued to climb, reaching approximately $5,400 in the fifth year following assignment. The earnings of female youths in the control group also rose during the 5-year period, climbing from approximately $3,400 in the first year to a high of approximately $5,200 in the fifth year. We found no significant differences between the treatment and control groups’ annual earnings 5 years after receiving their assignments. During the first 2 years following assignment, the control group’s earnings were higher than the treatment group’s, but the differences of less than $100 annually were not statistically significant. 
In the fourth and fifth years following assignment, the treatment group had earnings of approximately $100 to $300 higher than the control group, but these differences also were not statistically significant. As with earnings, employment rates of those assigned to receive JTPA training were not significantly greater than employment rates of control group members 5 years after assignment. For some of the four targeted worker categories, treatment group employment rates were higher than those of the control group in some years, but any statistically significant effects disappeared by the fifth year. The employment rates of both treatment and control group adult men peaked during the calendar year of assignment and then declined in subsequent years, eventually reaching levels lower than those of the men before entering the study (see fig. 5). For example, the employment rate for adult men in the treatment group was 87 percent in the year of assignment. The percent employed declined in the following years, reaching 72 percent by the fifth year following assignment, which was lower than the group’s employment rate of 79 percent in the year before entering the study. The adult men in the control group showed a similar pattern—their employment rate was 87 percent in the year of assignment but dropped to 71 percent in the fifth year after assignment. After 5 years, the difference between the treatment and control groups’ employment rates was not statistically significant. The treatment group’s employment rates were higher than the control group’s in each year following assignment, although the differences in the employment rates were statistically significant only in the fourth year following assignment. The pattern of employment rates of adult women was somewhat similar to that of adult men. The employment rates of adult women were highest during the calendar year following assignment, with 80 percent of the treatment group and 77 percent of the control group employed (see fig. 6). After the first year, however, the employment rates for both the treatment and control groups fell, reaching 69 percent and 67 percent, respectively, in the fifth year following assignment. These rates in the fifth year were also lower than each group’s employment rate in the year before assignment. We found no significant differences between the treatment and control groups’ employment rates 5 years after assignment. The treatment group’s employment rates exceeded the control group’s in all 5 years following assignment, usually by about 2 to 3 percent, but only in the first 3 years were these differences statistically significant. The pattern of employment rates of male youths was somewhat similar to that of adult men and women: the male youths’ employment rates peaked during the calendar year following assignment—reaching nearly 91 percent for the treatment group and over 92 percent for the control group—but then declined (see fig. 7). However, in contrast to the employment rates of adults, those of male youths were slightly higher 5 years after assignment than before assignment, reaching 81 percent for the treatment group in the fifth year, compared with 80 percent in the year before assignment. We found no significant differences between the treatment and control groups’ employment rates 5 years after assignment. 
While the employment rates for the control group actually exceeded those for the treatment group in the year of assignment and the first and third years following assignment, none of the differences were statistically significant. The employment rates of female youths in both the treatment and control groups peaked during the calendar year of assignment, declined somewhat over the next 4 years, and then slightly increased in the fifth year (see fig. 8). As with those of male youths, the employment rates of female youths were slightly higher 5 years after assignment than before assignment. The employment rates of female youths were 74 percent for the treatment group and 73 percent for the control group in the fifth year following assignment, compared with 71 and 73 percent, respectively, in the year before assignment. We found no significant differences between the treatment and control groups’ employment rates 5 years after assignment. Employment rates for the treatment group exceeded those for the control group in 4 of the 5 years following assignment, but none of the differences in employment rates were statistically significant. Though both long-term earnings and employment rates for NJS treatment groups surpassed those for their respective control groups, the differences did not meet our test for statistical significance. Five years after expressing an interest in JTPA-sponsored job training, individuals assigned to participate in the program did not have earnings or employment rates significantly higher than individuals not assigned to participate. In commenting on a draft of this report, Labor expressed several concerns. It took exception to what it characterized as unwarranted negative conclusions that are not consistent with the report findings. Labor also took issue with the importance the report places on tests of statistical significance applied to earnings of an individual group in a given year, preferring to emphasize other evidence of the positive effect of JTPA on participant earnings over the 5-year period. Labor also expressed concerns that the report findings have limited relevance to current job training programs. We believe that our conclusions are well supported by our findings. On several occasions where appropriate, we have noted comparisons favorable to the JTPA treatment groups, including in the “Results in Brief” and “Conclusions” sections. Although other evidence covering the 5-year period might be found to better highlight the positive effects of JTPA training, our research focused on the earnings and employment rates of each target group in the fifth year after applying for JTPA training. Also, we do not believe that current or proposed job training programs sufficiently differ from JTPA training at the time of the NJS to limit the relevance of our report findings. In its response, Labor enclosed an attachment with specific comments on the report and additional information. This attachment and our evaluation of the comments appear in appendix III. Labor also provided us with technical comments, which we have incorporated in the report where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested parties and make copies available to others upon request. This report was prepared under the direction of Wayne B. 
Upshaw, Assistant Director, who may be reached on (202) 512-7006 if you or your staff have any questions. Gene Kuehneman, Senior Economist, (202) 512-4091, Jill Schamberger, Senior Evaluator, and Thomas L. Hungerford, Senior Economist, were major contributors to this report. To select our sample of individuals assigned to receive training under JTPA and our control group, we used participation information from the National JTPA Study (NJS). We then obtained long-term earnings and employment information for these individuals from SSA. Our analysis compared earnings and employment levels of individuals in the treatment and control groups to determine whether differences between these groups were statistically discernable. The original NJS data set contained demographic and program information on 20,601 people who applied for JTPA services between November 1987 and September 1989 in 16 local service delivery areas. Program applicants were recruited, screened to determine their eligibility, assessed to determine their service needs and wants, and recommended for services. NJS participants were then randomly assigned to either the treatment group, which was allowed to participate in JTPA title II-A programs, or the control group, which was not allowed to participate in these programs for 18 months. Approximately two-thirds of the applicants were assigned to the treatment group and one-third to the control group. The control and treatment groups were closely matched in demographic variables such as age, race, and education, which typically allows a meaningful comparison of average outcomes between the two groups. However, two factors intervened to make such a comparison problematic. First, not all members of the treatment group participated in JTPA programs. For example, about two-thirds of the adult treatment group members enrolled in JTPA, but the other one-third either found jobs on their own or decided not to participate in the program. Second, a substantial minority of the control group members chose to participate in some alternative, non-JTPA training programs. These complications preclude attributing earnings differences between the two groups solely to JTPA training. Therefore, our findings refer to differences between the treatment and control groups rather than between individuals who did or did not receive JTPA training. Furthermore, we do not know which of the control and treatment group members chose to receive training later than 18 months after assignment. The NJS was not designed to track treatment or control group members beyond 30 months. Therefore, to calculate and compare longer term earnings and employment outcomes for these groups, we needed information from another source. We obtained annual earnings records from SSA for the individuals in the NJS treatment and control groups. SSA maintains information on annual earnings of individuals contributing to either Social Security or Medicare. We assumed that an individual was employed if his or her SSA records showed positive earnings for a given year. We adjusted data for what we assumed were data entry or processing errors, and we also rounded reported negative earnings to zero. We analyzed the NJS and SSA earnings records of 13,699 NJS participants to determine their annual earnings and employment outcomes for the 3 years before assignment to the treatment or control group, the year of assignment, and 5 years following assignment. 
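The group comparison just described (means and employment rates checked for statistically discernible differences) can be illustrated with a minimal sketch. The sketch below is purely illustrative and is not the code used for this analysis: the sample earnings figures are hypothetical, and the choice of Welch’s two-sample t statistic and the large-sample 1.96 cutoff for the 5-percent level are assumptions made only for the example.

    # Illustrative only: compare mean annual earnings and employment rates for a
    # treatment group and a control group of the kind described in this appendix.
    from statistics import mean, variance
    from math import sqrt

    def clean(earnings):
        # Round reported negative earnings to zero, as described above.
        return [max(0.0, e) for e in earnings]

    def employment_rate(earnings):
        # A person counts as employed if covered earnings are positive for the year.
        return sum(1 for e in earnings if e > 0) / len(earnings)

    def welch_t(group_a, group_b):
        # Two-sample t statistic built from the group means and variances.
        diff = mean(group_a) - mean(group_b)
        se = sqrt(variance(group_a) / len(group_a) + variance(group_b) / len(group_b))
        return diff / se

    # Hypothetical fifth-year covered earnings for a handful of people in each group.
    treatment = clean([8700.0, 0.0, 12500.0, 6200.0, -50.0])
    control = clean([6800.0, 0.0, 9800.0, 5400.0, 7100.0])

    print("difference in mean earnings:", mean(treatment) - mean(control))
    print("employment rates:", employment_rate(treatment), employment_rate(control))
    t = welch_t(treatment, control)
    # With large samples, |t| above roughly 1.96 corresponds to a statistically
    # significant difference at the 5-percent level for a two-sided test.
    print("t statistic:", t, "significant at 5 percent:", abs(t) > 1.96)

In the actual analysis, such comparisons were made separately for each of the four target groups and for each year of earnings and employment data.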
The 3 years of prior earnings and employment data served to demonstrate the prior comparability of treatment and control groups. The 5 years of postassignment data effectively doubled the 30-month follow-up period for the NJS. The treatment group had 9,275 individuals, and the control group had 4,424 individuals. We used individual earnings data and calculated means and variances for each of the four target groups—adult men, adult women, male youths, and female youths—to compare the treatment groups’ earnings and employment outcomes with those of the control groups. We tested for differences in earnings and employment outcomes at the 5-percent significance level. We calculated annual earnings amounts using SSA information on Social Security-covered earnings for nonfederal workers and on earnings covered by Medicare for federal workers. We calculated employment rates as the percentage of each group with positive covered earnings in a calendar year. Individuals with unreported earnings may have had their earnings and employment understated in our analysis. Individuals whose earnings exceeded the Social Security withholding ceiling may also have had their earnings understated in our analysis. These limitations applied to both the treatment and control groups, and we do not believe they affected the two groups differently. [The appendix II tables present annual earnings and employment rates (percent employed) for the treatment and control groups of each target group, along with an indication of whether each difference was statistically significant at the 5-percent level.] The following are GAO’s comments on the Department of Labor’s letter dated November 30, 1995. 1. Labor comments that our report understates the gross returns to JTPA training. Furthermore, Labor implies that these gross returns calculations compare favorably with the returns to college education. Our objective, as clearly stated in the report, was not to evaluate the cost-effectiveness of JTPA training, but rather to determine and compare the long-term effects of JTPA training. In fact, because we did not calculate the gross return to JTPA participants, the report cannot have understated or overstated these returns. Such calculations were not within the scope of this report. While it may be true that these returns are favorable, we have no basis to judge the favorability of the gross returns to training. 2. Labor states that the report does not acknowledge favorable aspects of this study. Specifically, Labor cited that (1) all four target groups had higher earnings in the fifth year after assignment; (2) both adult treatment groups had higher earnings than their respective control groups in each of the 5 years following assignment; and (3) for male youths, a positive trend exists, and the fifth-year earnings exceed those of the control group by over 10 percent. Contrary to Labor’s comment, we did note many of these favorable program outcomes in our report. We stated that adult male treatment group members had higher earnings than adult male control group members and presented similar findings for the other three target groups. We further stated that adult male treatment group earnings exceeded control group earnings in each of the 5 years and reported similar information for adult women. Also, we noted the positive trend for earnings of male youths. 
We did not note the percentage difference for male youths in the fifth year because we did not report percentage comparisons for any of the target groups. 3. Labor states that if the training impacts are accumulated over time during the 5-year follow-up period, the net benefits outweigh the costs. As we stated in comment 1, our objective was not to evaluate the cost-effectiveness of JTPA training, but rather to determine and compare the long-term effects of JTPA training. While it may be true that the net benefits outweigh the costs, we have no basis to judge this because such calculations were beyond the scope of this report. 4. Labor states that the increase in standard errors is primarily responsible for the decline in statistical significance of the estimated impacts. While Labor is correct in stating that the standard errors were greater in the fifth year, it is not accurate to attribute a decline in statistical significance to either the estimated training effects or to the standard errors. The test statistics used for our significance tests are determined by the ratio of the estimates to their standard error, and attributing the lack of significance solely to either component of these ratios is inappropriate. 5. Labor states that our conclusion requires assessing the total impact of JTPA and the overall cost and benefits. The Department states that we overemphasize the importance of year-by-year significance tests in questioning the program’s usefulness in improving participants’ long-term earnings prospects by stressing the insignificant effect of JTPA in the fifth year. We agree with Labor that year-by-year significance tests have limited value in assessing the total impact of JTPA and the overall cost and benefits. Furthermore, our year-by-year significance tests provide statistical evidence that adult treatment group members achieved higher earnings for several years following assignment to JTPA training. While other evidence covering the 5-year period might be found to better highlight the positive effects of JTPA training, we chose to address the question of whether the fifth-year earnings of those assigned to participate in JTPA differed significantly from the fifth-year earnings of those not assigned to participate in JTPA. 6. While acknowledging that the observed earnings differences between the four target groups were not statistically significant in the fifth year, Labor asserts that the odds that all four differences would be positive purely by chance is 6.25 percent. This implies that an accumulation of not statistically significant observations provides more compelling empirical evidence than the actual significance test for any one group. While the probability (not odds) that all four not significant fifth-year earnings differences would be positive purely by chance might be low, our research question is whether a significant earnings difference occurred for each target group. 7. Labor comments that the report does not report the standard errors. Labor states that the report should include confidence intervals for the estimates, sample sizes, and standard errors and specify significance levels for the estimates. We have made several additions to tables in appendix II in response to this comment. We have added the sample size, the size of the treatment and control groups, the standard errors, and a reminder that the significance level chosen is 5 percent for the tables in this appendix. 
Since we have not presented point estimates of the earnings effects, we did not calculate confidence intervals for these estimates. Technical readers of our report can construct such estimates and the associated intervals from the information in the appendix II tables. 8. Labor claims that the report treats figures that are not significant as zero. We do not report any training effect as zero. The magnitude of the earnings differences, whether significant or not, is discussed in the report and is easily calculated from the tables in appendix II. 9. Labor states that statistical significance is not a knife edge of yes or no but a continuum. The level used for tests of statistical significance may be chosen from a broad range (or continuum) of values. Although different researchers may choose to use different values for the significance level, choosing a significance level before analyzing any data is common practice. Once this level has been chosen, statistical hypothesis testing very much involves a yes or no decision. Either the data reject the null hypothesis of no training effect at the set significance level or not. We follow these commonly accepted procedures for hypothesis testing, and our convention is to set the significance level for such tests at 5 percent. 10. Labor also states that the report should discuss the results, the probability values, and changes in the significance levels. Our report does discuss the results as well as whether the earnings and employment effects were significant and whether this significance changed over time. Although we do not present the probability values, technical readers of our report can calculate them using the information in the appendix II tables. 11. Labor takes issue with our statement that complications (not all treatment group members received training and some control group members did receive training) precluded our attributing earnings differences solely to JTPA training. It claims that these complications led us to understate the effect of training, implying that the earnings differences observed, along with perhaps some further overlooked earnings effects, can be attributed to JTPA. We clearly state that these complications preclude solely attributing the earnings effects to JTPA training. However, we have no evidence that these factors led to an understatement of the effect of JTPA training. In the first place, a short delay can occur before an assignee can begin a training program. In some of these cases, individuals find and accept employment instead of reporting for training. To the extent that these individuals are more fully employed and may earn more than they might have if they had attended JTPA training, our estimate may actually overstate the effect of training. Second and more importantly, if those who attend training are in some way more motivated than those who do not attend, it would be difficult to separate any increase in earnings due to training from the increase in earnings due to this motivation. At a minimum, we would need to identify which control group members were motivated to attend training to draw such inferences. 12. Labor recommends adjusting the comparison by effectively removing treatment group members who did not enroll in training. We chose to compare only those assigned to JTPA training with those not assigned to training to take full advantage of the original random assignment design. 
As we stated in comment 11, we would have needed to identify which control group members were motivated to attend training to justify removing the treatment group members who did not attend training. Since we could not take all the necessary steps to fully implement Labor’s suggestion, we chose not to make that or any other adjustments. 13. Labor comments that we will be providing the Department’s contractor with access to Social Security data for additional analysis, including examining the results for subgroups. We would like to clarify the details of this arrangement in light of the sensitive nature and confidentiality of individual earnings records. When we began our work, Labor was also planning to evaluate the long-term impact of JTPA training on earnings. Both our evaluation and Labor’s evaluation (contracted out to Westat, Inc.) planned to use Social Security earnings records to supplement the information collected through the NJS. In the spirit of cooperation, Labor requested and we agreed to provide aggregated earnings data to the Department, which will submit computer programs to us; we will in turn run the programs and provide the output to Labor. Only aggregated information, such as means and standard deviations, will be reported. No data will be released that could be traced to individuals, nor will we provide Labor or its contractor with individual earnings records. 14. Labor suggests that we include earnings and employment information for the third of the sample for whom only 4 years of follow-up data were available. We did not include this group in our analysis since we could not report on their earnings or employment 5 years after training. As such, any additional information provided would not address the question of whether JTPA had a long-term effect on the earnings or employment outcomes of the treatment group. 15. Labor states that we should explain that our employment measure is not the definition that is generally reported in government statistics and is not comparable to figures reported in Current Population Survey (CPS) and Bureau of Labor Statistics (BLS) reports. Our employment rate differs from measures reported in CPS and BLS reports but is appropriate for our purposes. Our employment rate is the number employed divided by all who were in the treatment or control group, which includes those workers who may have dropped out of the labor force. Since these workers presumably applied for training because they intended to keep working, we believe that all workers should be included in the denominator of the measure. Our measure also counts as employed everyone who worked during the year, even if they might have been unemployed for some portion of the year. As such, our measure may overstate the instantaneous employment outcomes of both the treatment and control groups relative to figures reported in CPS and BLS reports. 16. Labor states that Social Security data are subject to considerable revisions in the first year of availability. It believes this calls into doubt fifth-year estimates for adult men, adult women, and male youths. While data are often subject to revision, we have no reason to suspect that the data for those assigned to training in 1988 are materially less reliable than for those assigned to training in 1987. The fifth year (1993) of earnings data for those assigned to training in 1988 was extracted from SSA records in March 1995. 
An SSA official responsible for updates and revisions to SSA earnings data said that we could expect the accuracy and completeness of our extract to exceed 99 percent. 17. Labor states that we fail to recognize the limited relevance of our findings to current job programs. While we agree that our evaluation has limitations, we disagree that it has little relevance to current job programs. We make it quite clear that our analysis is not nationally representative of JTPA training. Additionally, we cite many flaws associated with the design and implementation of the original NJS that limit our analysis. However, no evidence exists to suggest that job training funded by JTPA and administered at the state and local level has changed so dramatically since 1989 that our findings are not relevant to the current program. 
During the late 1960s and early 1970s, the government directed that all passengers and their carry-on baggage be screened for dangerous items before boarding a flight. As the volume of passengers requiring screening increased and an awareness of terrorists’ threats against the United States developed, a computerized system was implemented in 1998 to help identify passengers posing the greatest risk to a flight so that they could receive additional security attention. This system, known as CAPPS, is operated by air carriers in conjunction with their reservation systems. CAPPS enables air carriers to separate passengers into two categories: those who require additional security screening—termed “selectees”—and those who do not. Certain information contained in the passenger’s reservation is used by the system to perform an analysis against established rules and a government-supplied “watch list” that contains the names of known or suspected terrorists. If the person is deemed to be a “selectee,” the boarding pass is encoded to indicate that additional security measures are required at the screening checkpoint. This system is currently used by most U.S. air carriers to prescreen passengers and prescreens an estimated 99 percent of passengers on domestic flights. For those passengers not prescreened by the system, certain air carriers manually prescreen their passengers using CAPPS criteria and the watch list. Following the events of September 11, 2001, Congress passed the Aviation and Transportation Security Act requiring that a computer-assisted passenger prescreening system be used to evaluate all passengers. In response, TSA’s Office of National Risk Assessment has undertaken the development of a second-generation computer-assisted passenger prescreening system, known as CAPPS II. Unlike the current system, which is operated by the air carriers, CAPPS II will be operated by the government. Further, it will perform different analyses and access more diverse data, including data from commercial and government databases, to classify passengers according to their level of risk. TSA program officials expect that CAPPS II will provide significant improvements over the existing system. First, they believe a centralized CAPPS II that will be owned and operated by the federal government will allow for more effective and efficient use of up-to-date intelligence information and make CAPPS II more capable of being modified in response to changing threats. Second, they believe that CAPPS II will improve identity authentication and reduce the number of passengers who are falsely identified as needing additional security screening. Third, CAPPS II is expected to prescreen all passengers on flights either originating in or destined for the United States. Last, an additional expected benefit of the system is its ability to aggregate risk scores to identify higher-risk flights, airports, or geographic regions that may warrant additional aviation security measures. Key activities in the development of CAPPS II have been delayed, and TSA has not yet completed key system planning activities. TSA plans to develop CAPPS II in nine increments, with each increment providing increased functionality. (See app. I for a description of these increments.) As each increment is completed, TSA plans to conduct tests that would ensure the system meets the objectives of that increment before proceeding to the next increment. 
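To make the selectee decision described above concrete, the following sketch shows the general shape of a rules-based prescreening check. It is a hypothetical illustration only: the reservation fields, the sample rule, and the names shown are invented for the example and do not represent actual CAPPS or CAPPS II rules, criteria, or data.

    # Hypothetical illustration of a rules-based prescreening decision; the fields,
    # rule, and names below are invented and are not actual CAPPS or CAPPS II logic.
    from dataclasses import dataclass

    WATCH_LIST = {"JOHN DOE"}  # stand-in for a government-supplied watch list

    @dataclass
    class Reservation:
        passenger_name: str
        indicator_a: bool  # placeholder for a reservation attribute a rule might use
        indicator_b: bool  # placeholder for a second such attribute

    def prescreen(reservation):
        # A watch-list match makes the passenger a selectee.
        if reservation.passenger_name.upper() in WATCH_LIST:
            return "selectee"
        # An established rule might combine several reservation attributes.
        if reservation.indicator_a and reservation.indicator_b:
            return "selectee"
        # Otherwise, no additional security screening is required.
        return "not a selectee"

    print(prescreen(Reservation("Jane Smith", indicator_a=False, indicator_b=False)))
    print(prescreen(Reservation("John Doe", indicator_a=False, indicator_b=True)))

In an operational system, the result of such a check would be encoded on the boarding pass so that selectees receive additional security measures at the screening checkpoint.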
The development of CAPPS II began in March 2003 with increments 1 and 2 being completed in August and October 2003, respectively. However, TSA has not completely tested these initial two increments because it was unable to obtain the necessary passenger data for testing from air carriers. Air carriers have been reluctant to provide passenger data due to privacy concerns. Instead, the agency deferred completing these tests until increment 3. TSA is currently developing increment 3. However, due to the unavailability of passenger data needed for testing, TSA has delayed the completion of this increment from October 2003 until at least the latter part of this month and reduced the functionality that this increment is expected to achieve. Increment 3 was originally intended to provide a functioning system that could handle live passenger data from one air carrier in a test environment to demonstrate that the system can satisfy operational and functional requirements. However, TSA officials reported that they recently modified increment 3 to instead provide a functional application of the system in a simulated test environment that is not actively connected to an airline reservation system. Officials also said that they were uncertain when the testing that was deferred from increments 1 and 2 to increment 3 will be completed. TSA recognizes that system testing is a high-risk area and plans to further delay the implementation of the system to ensure that sufficient testing is completed. As a result, all succeeding increments of CAPPS II have been delayed, moving CAPPS II initial operating capability—the point at which the system will be ready to operate with one airline—from November 2003 to a date unknown. (See app. II for a timeline showing the original and revised schedule for CAPPS II increments.) Further, we found that TSA has not yet developed critical elements associated with sound project planning, including a plan for what specific functionality will be delivered, by when, and at what cost throughout the development of the system. Our work on similar systems and other best practice research have shown that the application of rigorous practices to the acquisition and development of information systems improves the likelihood of the systems’ success. In other words, the quality of information technology systems and services is governed largely by the quality of the processes involved in developing and acquiring the system. We have reported that the lack of such practices has contributed to cost, schedule, and performance problems for major system acquisition efforts. TSA established plans for the initial increments of the system, including requirements for increments 1 and 2 and costs and schedules for increments 1 through 4. However, officials lack a comprehensive plan identifying the specific functions that will be delivered during the remaining increments; for example, which government and commercial databases will be incorporated, the date when these functions will be delivered, and an estimated cost of the functions. In addition, TSA officials recently reported that the expected functionality to be achieved during early increments has been reduced, and officials are uncertain when CAPPS II will achieve initial operating capability. Project officials also said that because of testing delays, they are unable to plan for future increments with any certainty. 
By not completing these key system development planning activities, TSA runs the risk that CAPPS II will not provide the full functionality promised. Further, without a clear link between deliverables, cost, and schedule, it will be difficult to know what will be delivered and when in order to track development progress. Until project officials develop a plan that includes scheduled milestones and cost estimates for key deliverables, CAPPS II is at increased risk of not providing the promised functionality, not being fielded when planned, and being fielded at an increased cost. In reviewing CAPPS II, we found that TSA has not fully addressed seven of the eight issues identified by the Congress as key areas of interest related to the development and implementation of CAPPS II. Public Law 108-90 identified eight key issues that TSA must fully address before the system is deployed or implemented. These eight issues are establishing an internal oversight board, assessing the accuracy of databases, testing the system load capacity (stress testing) and demonstrating its efficacy and accuracy, installing operational safeguards to protect the system from abuse, installing security measures to protect the system from unauthorized access, establishing effective oversight of the system’s use and operations, addressing all privacy concerns, and creating a redress process for passengers to correct erroneous information. While TSA is in various stages of progress in addressing each of these issues, only the establishment of an internal oversight board to review the development of CAPPS II has been fully addressed. For the remaining issues, TSA program officials contend that their ongoing efforts will ultimately address each issue. However, due to system development delays, uncertainties regarding when passenger data will be obtained to test the system, and the need to finalize key policy decisions, officials were unable to identify a time frame for when all remaining issues will be fully addressed. The following briefly summarizes the status of TSA’s efforts to address each of the eight issues. Establishment of a CAPPS II oversight board has occurred. DHS created an oversight board—the Investment Review Board—to review the department’s largest capital asset programs. The Board reviewed CAPPS II in October 2003. Based on this review, the Board authorized TSA to proceed with the system’s development. However, the Board noted some areas that the program needed to address. These areas included privacy and policy issues, coordination with other stakeholders, and program staffing requirements and costs, among others, and the Board directed that these issues be addressed before the system proceeds to the next increment. Although DHS has the Board in place to provide internal oversight and monitoring for CAPPS II and other large capital investments, we recently reported that concerns exist regarding the timeliness of its future reviews. DHS officials acknowledged that the Board is having difficulty reviewing all of the critical departmental programs in a timely manner. As of January 2004, DHS had identified about 50 of the largest capital assets that would be subject to the Board’s review. As CAPPS II’s development proceeds, it will be important for the Board to oversee the program on a regular and thorough basis to provide needed oversight. In addition, on February 12, 2004, DHS announced its intention to establish an external review board specifically for CAPPS II. 
This review board will be responsible for ensuring that (1) the privacy notice is being followed, (2) the appeal process is working effectively, and (3) the passenger information used by CAPPS II is adequately protected. However, in announcing the establishment of this review board, DHS did not indicate when the board would be activated or who would serve on it. The accuracy of CAPPS II databases has not yet been determined. TSA has not yet determined the accuracy—or conversely, the error rate—of commercial and government databases that will be used by CAPPS II. Since consistent and compatible information on database accuracy is not available, TSA officials said that they will be developing and conducting their own tests to assess the overall accuracy of information contained in commercial and government databases. These tests are not intended to identify all errors existing within a database, but rather to assess the overall accuracy of a database before determining whether it is acceptable to be used by CAPPS II. In addition to testing the accuracy of commercial databases, TSA plans to better ensure the accuracy of information derived from commercial databases by using multiple databases in a layered approach to authenticating a passenger’s identity. If available information is insufficient to validate the passenger’s identification in the first database accessed, then CAPPS II will access another commercial database to provide a second layer of data, and if necessary, still other commercial databases. However, ensuring the accuracy of government databases will be more challenging. TSA does not know exactly what type of information the government databases contain, such as whether a database will contain a person’s name and full address, a partial address, or no address at all. A senior program official said that using data without assessing accuracy and mitigating data errors could result in erroneous passenger assessments; consequently, accuracy assessments and error mitigation measures for government databases will have to be developed and completed before the system is placed in operation. In mitigating errors in commercial and government databases, TSA plans to use multiple databases and a process to identify misspellings to correct errors in commercial databases. TSA is also developing a redress process whereby passengers can attempt to get erroneous data corrected. However, it is unclear what access passengers will have to information found in either government or commercial databases, or who is ultimately responsible for making corrections. Additionally, if errors are identified during the redress process, TSA does not have the authority to correct erroneous data in commercial or government databases. TSA officials said they plan to address this issue by establishing protocols with commercial data providers and other federal agencies to assist in the process of getting erroneous data corrected. Stress testing and demonstration of the system’s efficacy and accuracy have been delayed. TSA has not yet stress tested CAPPS II increments developed to date or conducted other system-related testing to fully demonstrate the effectiveness and accuracy of the system’s search capabilities, or search tools, to correctly assess passenger risk levels. TSA initially planned to conduct stress testing on an early increment of the system by August 2003. 
However, stress testing was delayed several times due to TSA’s inability to obtain the 1.5 million Passenger Name Records it estimates are needed to test the system. TSA attempted to obtain the data needed for testing from three different sources but encountered problems due to privacy concerns associated with its access to the data. For example, one air carrier initially agreed to provide passenger data for testing purposes, but adverse publicity resulted in its withdrawal from participation. Further, as the system is more fully developed, TSA will need to conduct stress testing. For example, there is a stringent performance requirement for the system to process 3.5 million risk assessment transactions per day with a peak load of 300 transactions per second that cannot be fully tested until the system is further along in development. Program officials acknowledge that achieving this performance requirement is a high-risk area and have initiated discussions to define how this requirement will be achieved. However, TSA has not yet developed a complete mitigation strategy to address this risk. Without a strategy for mitigating the risk of not meeting peak load requirements, the likelihood that the system may not be able to meet performance requirements increases. Other system-related testing to fully demonstrate the effectiveness and accuracy of the system’s search tools in assessing passenger risk levels also has not been conducted. This testing was also planned for completion by August 2003, but similar to the delays in stress testing, TSA’s lack of access to passenger data prevented the agency from conducting these tests. In fact, TSA has only used 32 simulated passenger records—created by TSA from the itineraries of its employees and contractor staff who volunteered to provide the data—to conduct this testing. TSA officials said that the limited testing—conducted during increment 2—has demonstrated the effectiveness of the system’s various search tools. However, tests using these limited records do not replicate the wide variety of situations TSA expects to encounter with actual passenger data when full-scale testing is actually undertaken. As a result, the full effectiveness and accuracy of the tools have not been demonstrated. TSA’s attempts to obtain test data are still ongoing, and privacy issues remain a stumbling block. TSA officials believe they will continue to have difficulty in obtaining data for both stress and other testing until TSA issues a Notice of Proposed Rulemaking to require airlines to provide passenger data to TSA. This action is currently under consideration within TSA and DHS. In addition, TSA officials said that before the system is implemented, a final Privacy Act notice will be published. According to DHS’s Chief Privacy Officer, the agency anticipated that the Privacy Act notice would be finalized in March 2004. However, this official told us that the agency will not publish the final Privacy Act notice until all 15,000 comments received in response to the August 2003 Privacy Act notice are reviewed and testing results are available. DHS could not provide us a date as to when this will be accomplished. Further, due to the lack of test data, TSA delayed the stress and system testing planned for increments 1 and 2 to increment 3, scheduled to be completed by March 31, 2004. 
However, since we issued our report last month, a TSA official said that they no longer expect to conduct this testing during increment 3 and do not have an estimated date for when these tests will be conducted. Uncertainties surrounding when stress and system testing will be conducted could impact TSA’s ability to allow sufficient time for testing, resolving defects, and retesting before CAPPS II can achieve initial operating capability and may further delay system deployment. Security plans that include operational and security safeguards are not complete. Due to schedule delays and the early stage of CAPPS II development, TSA has not implemented critical elements of an information system security program to reduce opportunities for abuse and protect against unauthorized access by hackers. These elements—a security policy, a system security plan, a security risk assessment, and the certification and accreditation of the security of the system—together provide a strong security framework for protecting information technology data and assets. While TSA has begun to implement critical elements of an information security management program for CAPPS II, these elements have not been completed. Until a specific security policy for CAPPS II is completed, TSA officials reported that they are using relevant portions of the agency’s information security policy and other government security directives as the basis for its security policy. As for the system security plan, it is currently in draft. TSA expects to complete this plan by the time initial operating capability is achieved. Regarding the security risk assessment, TSA has postponed conducting this assessment because of development delays and it has not been rescheduled. The completion date remains uncertain because TSA does not have a date for achieving initial operating capability as a result of other CAPPS II development delays. As for final certification and accreditation, TSA is unable to schedule the final certification and accreditation of CAPPS II because of the uncertainty regarding the system’s development schedule. The establishment of a security policy and the completion of the system security plan, security risk assessment, and certification and accreditation process are critical to ensuring the security of CAPPS II. Until these efforts are completed, there is decreased assurance that TSA will be able to adequately protect CAPPS II information and an increased risk of operational abuse and access by unauthorized users. Policies for effective oversight of the use and operation of CAPPS II are not developed. TSA has not yet fully established controls to oversee the effective use and operation of CAPPS II. However, TSA plans to provide oversight of CAPPS II through two methods: (1) establishing goals and measures to assess the program’s strengths, weaknesses, and performance and (2) establishing mechanisms to monitor and evaluate the use and operation of the system. TSA has established preliminary goals and measures to assess the CAPPS II program’s performance in meeting its objectives as required by the Government Performance and Results Act. Specifically, the agency has established five strategic objectives with preliminary performance goals and measures for CAPPS II. While this is a good first step, these measures may not be sufficient to provide the objective data needed to conduct appropriate oversight. 
TSA officials said that they are working with five universities to assess system effectiveness and management and will develop metrics to be used to measure the effectiveness of CAPPS II. With this information, officials expect to review and, as necessary, revise their goals and objectives to provide management and the Congress with objective information to provide system oversight. In addition, TSA has not fully established or documented additional oversight controls to ensure that operations are effectively monitored and evaluated. Although TSA has built capabilities into CAPPS II to monitor and evaluate the system’s operation and plans to conduct audits of the system to determine whether it is functioning as intended, TSA has not written all of the rules that will govern how the system will operate. Consequently, officials do not yet know how these capabilities will function, how they will be applied to monitor the system to provide oversight, and what positions and offices will be responsible for maintaining the oversight. Until these policies and procedures for CAPPS II are developed, there is no assurance that proper controls are in place to monitor and oversee the system. TSA’s plans address privacy protection, but issues remain unresolved. TSA’s plans for CAPPS II reflect an effort to protect individual privacy rights, but certain issues remain unresolved. Specifically, TSA plans address many of the requirements of the Privacy Act, the primary legislation that regulates the government’s use of personal information. For example, in January 2003, TSA issued a notice in the Federal Register that generally describes the Privacy Act system of records that will reside in CAPPS II and asked the public to comment. While TSA has taken these initial steps, it has not yet finalized its plans for complying with the act. For example, the act and related Office of Management and Budget guidance state that an agency proposing to exempt a system of records from a Privacy Act provision must explain the reasons for the exemption in a published rule. In January 2003, TSA published a proposed rule to exempt the system from seven Privacy Act provisions but has not yet provided the reasons for these exemptions, stating that this information will be provided in a final rule to be published before the system becomes operational. As a result, TSA’s justification for these exemptions remains unclear. Until TSA finalizes its privacy plans for CAPPS II and addresses such concerns, the public lacks assurance that the system will fully comply with the Privacy Act. When viewed in the larger context of Fair Information Practices— internationally recognized privacy principles that also underlie the Privacy Act—TSA plans reflect some actions to address each of these practices. For example, TSA’s plan to not collect passengers’ social security numbers from commercial data providers and to destroy most passenger information shortly after they have completed their travel itinerary appears consistent with the collection limitation practice, which states that collections of personal information should be limited. However, to meet its evolving mission goals, TSA plans also appear to limit the application of certain of these practices. For example, TSA plans to exempt CAPPS II from the Privacy Act’s requirements to maintain only that information about an individual that is relevant and necessary to accomplish a proper agency purpose. 
These plans reflect the subordination of the use limitation practice and data quality practice (personal information should be relevant to the purpose for which it is collected) to other goals and raise concerns that TSA may collect and maintain more information than is needed for the purpose of CAPPS II, and perhaps use this information for new purposes in the future. Such actions to limit the application of the Fair Information Practices do not violate federal requirements. Rather, they reflect TSA’s efforts to balance privacy with other public policy interests such as national security, law enforcement, and administrative efficiency. As the program evolves, it will ultimately be up to policymakers to determine if TSA has struck an appropriate balance among these competing interests. A redress process is being developed, but significant challenges remain. TSA intends to establish a process by which passengers who are subject to additional screening or denied boarding will be provided the opportunity to seek redress by filing a complaint; however, TSA has not yet finalized this process. According to TSA officials, the redress process will make use of TSA’s existing complaint process—currently used for complaints from passengers denied boarding passes—to document complaints and provide these to TSA’s Ombudsman. Complaints relating to CAPPS II will be routed through the Ombudsman to a Passenger Advocate—a position to be established within TSA for assisting individuals with CAPPS II-related concerns—who will help identify errors that may have caused a person to be identified as a false positive. If the passengers are not satisfied with the response received from the Passenger Advocate regarding the complaint, they will have the opportunity to appeal their case to the DHS Privacy Office. A number of key policy issues associated with the redress process, however, still need to be resolved. These issues involve data retention, access, and correction. Current plans for data retention indicate that data on U.S. travelers and lawful permanent residents will be deleted from the system at a specified time following the completion of the passengers’ itinerary. Although TSA’s decision to limit the retention of data was made for privacy considerations, the short retention period might make it impossible for passengers to seek redress if they do not register complaints quickly. TSA has also not yet determined the extent of data access that will be permitted for those passengers who file a complaint. TSA officials said that passengers will not have access to any government data used to generate a passenger risk score due to national security concerns. TSA officials have also not determined to what extent, if any, passengers will be allowed to view information used by commercial data providers. Furthermore, TSA has not yet determined how the process of correcting erroneous information will work in practice. TSA documents and program officials said that it may be difficult for the Passenger Advocate to identify errors, and that it could be the passenger’s responsibility to correct errors in commercial databases at their source. To address these concerns, TSA is exploring ways to assist passengers who are consistently determined to be false positives. For example, TSA has discussed incorporating an “alert list” that would consist of passengers who coincidentally share a name with a person on a government watch list and are, therefore, continually flagged for additional screening. 
Although the process has not been finalized, current plans indicate that a passenger would be required to submit to an extensive background check in order to be placed on the alert list. TSA said that available remedies for all persons seeking redress will be more fully detailed in CAPPS II’s privacy policy, which will be published before the system achieves initial operating capability. In addition to facing developmental and operational challenges related to key areas of interest to the Congress, CAPPS II faces a number of additional challenges that may impede its success. We identified three issues that, if not adequately resolved, pose major risks to the successful development, implementation, and operation of CAPPS II. These issues are developing the international cooperation needed to obtain passenger data, managing the expansion of the program’s mission beyond its original purpose, and ensuring that identity theft—in which an individual poses as and uses information of another individual—cannot be used to negate the security benefits of the system. For CAPPS II to operate fully and effectively, it needs data not only on U.S. citizens who are passengers on flights of domestic origin, but also on foreign nationals on domestic flights and on flights to the United States originating in other countries. However, obtaining international cooperation for access to these data remains a substantial challenge. The European Union, in particular, has objected to its citizens’ data being used by CAPPS II, whether a citizen of a European Union country flies on a U.S. carrier or an air carrier under another country’s flag. The European Union has asserted that using such data is not in compliance with its privacy directive and violates the civil liberties and privacy rights of its citizens. DHS and European Union officials are in the process of finalizing an understanding regarding the transfer of passenger data for use by the Bureau of Customs and Border Protection. However, this understanding does not permit the passenger data to be used by TSA in the operation of CAPPS II but does allow for the data to be used for testing purposes. According to a December 16, 2003, report from the Commission of European Communities, the European Union will not be in a position to agree to the use of its citizens’ passenger data for CAPPS II until internal U.S. processes have been completed and it is clear that the U.S. Congress’s privacy concerns have been resolved. The Commission said that it would discuss the use of European Union citizen passenger data in a second, later round of discussions. Our review found that CAPPS II may be expanded beyond its original purpose and that this expansion may affect program objectives and public acceptance of the system. The primary objective of CAPPS II was to protect the commercial aviation system from the risk of foreign terrorism by screening for high-risk or potentially high-risk passengers. However, in the August 2003 interim final Privacy Act notice for CAPPS II, TSA stated that the system would seek to identify both domestic and foreign terrorists and not just foreign terrorists as previously proposed. The August notice also stated that the system could be expanded to identify persons who are subject to outstanding federal or state arrest warrants for violent crimes and that CAPPS II could ultimately be expanded to include identifying individuals who are in the United States illegally or who have overstayed their visas. 
DHS officials have said that such changes are not an expansion of the system’s mission because they believe these changes will improve aviation security and are consistent with CAPPS II’s mission. However, program officials and advocacy groups expressed concern that focusing on persons with outstanding warrants, and possibly immigration violators, could put TSA at risk of diverting attention from the program’s fundamental purpose. Expanding CAPPS II’s mission could also lead to an erosion of public confidence in the system, which program officials agreed is essential to the effective operation of CAPPS II. This expansion could also increase the costs of passenger screening, as well as the number of passengers erroneously identified as needing additional security attention, because some of the databases that could be used to identify wanted felons have reliability concerns. Another challenge facing the successful operation of CAPPS II is the system’s ability to effectively identify passengers who assume the identity of another individual, known as identity theft. TSA officials said that while they believe CAPPS II will be able to detect some instances of identity theft, they recognized that the system will not detect all instances of identity theft without implementing some type of biometric indicator, such as fingerprinting or retinal scans. TSA officials said that while CAPPS II cannot address all cases of identity theft, CAPPS II should detect situations in which a passenger submits fictitious information such as a false address. These instances would likely be detected because the data provided either would not be validated or would be inconsistent with information in the databases used by CAPPS II. Additionally, officials said that data on identity theft may be available through credit bureaus and that in the future they expect to work with the credit bureaus to obtain such data. However, the officials acknowledged that some identity theft is difficult to spot, particularly if the identity theft is unreported or if collusion, where someone permits his or her identity to be assumed by another person, is involved. TSA officials said that there should not be an expectation that CAPPS II will be 100 percent accurate in identifying all cases of identity theft. Further, the officials said that CAPPS II is just one layer in the system of systems that TSA has in place to improve aviation security, and that passengers who were able to thwart CAPPS II by committing identity theft would still need to go through normal checkpoint screening and other standard security procedures. TSA officials believe that, although not foolproof, CAPPS II represents an improvement in identity authentication over the current system. The events of September 11, 2001, and the ongoing threat of commercial aircraft hijackings as a means of terrorist attack against the United States continue to highlight the importance of a proactive approach to effectively prescreening airline passengers. An effective prescreening system would not only expedite the screening of passengers, but would also accurately identify those passengers warranting additional security attention, including those determined to have an unacceptable level of risk, who would be immediately assessed by law enforcement personnel. CAPPS II, while holding the promise of providing increased benefits over the current system, faces significant challenges to its successful implementation.
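The data-consistency idea described above, that fictitious submissions would tend not to match commercial records, can be illustrated with a minimal sketch before the concluding observations continue. The field names and the match threshold below are assumptions used only for illustration; the source does not describe TSA’s actual validation logic.

```python
# Hypothetical sketch of checking passenger-supplied fields against a
# commercial data record. Field names and the threshold are assumptions,
# not drawn from TSA documentation.

def fields_consistent(submitted: dict, commercial_record: dict,
                      required_matches: int = 2) -> bool:
    """Count how many submitted fields agree with the commercial record."""
    matches = sum(
        1 for field in ("name", "address", "phone")
        if submitted.get(field, "").strip().lower()
        == commercial_record.get(field, "").strip().lower()
    )
    return matches >= required_matches

submitted = {"name": "A. Traveler", "address": "123 Fake St", "phone": "555-0100"}
record = {"name": "A. Traveler", "address": "45 Real Ave", "phone": "555-0100"}

# A fictitious address lowers the match count; whether the submission still
# passes depends entirely on the assumed threshold.
print(fields_consistent(submitted, record))
```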
Uncertainties surrounding the system’s future functionality and schedule alone create the risk that the system may not meet expected requirements, may experience delayed deployment, and may incur increased costs throughout its development. Of the eight issues identified by the Congress related to CAPPS II, only one has been fully addressed. Additionally, concerns about mission expansion and identity theft add to the public’s uncertainty about the success of CAPPS II. Our recent report on CAPPS II made seven specific recommendations that we believe will help address these concerns and challenges. The development of plans identifying the specific functionality that will be delivered during each increment of CAPPS II, the associated milestones for completion, and the expected costs for each increment would provide TSA with critical guidelines for maintaining the project’s focus and achieving intended system results and milestones within budget. Furthermore, a schedule for critical security activities, a strategy for mitigating the high risk associated with system and database testing, and appropriate oversight mechanisms would enhance assurance that the system and its data will be adequately protected from misuse. In addition to these steps, development of results-oriented performance goals and measures would help ensure that the system is operating as intended. Last, given the concerns regarding the protection of passenger data, the system cannot be fully accepted if it lacks a redress process for those who believe they are erroneously identified as an unknown or unacceptable risk. Our recently published report highlighted each of these concerns and challenges and contained several recommendations to address them. DHS generally concurred with our findings and has agreed to address the related recommendations. By adequately addressing these recommendations, we believe DHS will increase the likelihood of successfully implementing this program. In the interim, it is crucial that the Congress maintain vigilant oversight of DHS to see that these concerns and challenges are addressed. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. For further information on this testimony, please contact Norman J. Rabkin at (202) 512-8777 or David A. Powner at (202) 512-9286. Individuals making key contributions to this testimony include J. Michael Bollinger, Adam Hoffman, and John R. Schulze. The following describes general areas of functionality to be completed during each of the currently planned nine developmental increments of the Computer-Assisted Passenger Prescreening System (CAPPS II). Increment 1. System functionality established at the central processing center. By completion of increment 1, the system will be functional at the central processing center and can process passenger data and support intelligence validation using in-house data (no use of airline data). Additionally, at this increment, validation will be completed for privacy and policy enforcement tools; the exchange of, and processing with, data from multiple commercial data sources; and processing of government databases to support multiple watch-lists. Increment 2. System functionality established to support processing airline data. At the completion of increment 2, the system will be functionally and operationally able to process airline data.
Additionally, the system can perform functions such as prioritizing data requests, reacting to threat level changes, and manually triggering a “rescore” for individual passengers in response to reservation changes or adjustments to the threat level. Increment 3. This increment will provide a functional system that uses a test simulator not connected to an airline’s reservation system. System hardware, including test and production environments, will be in place, and a facility capable of performing risk assessments will be established. Design and development work for recovering from system failures using a backup system, and for help desk infrastructure, will be put in place. Increment 4. By the completion of this increment, a backup location will be functionally and operationally able to support airline data processing, similar to the main location. A help desk will be installed to provide assistance to airlines, authenticators, and other user personnel. Increment 5. Enhanced intelligence interface. At the conclusion of this increment, the system will be able to automatically receive the current threat level from DHS and adjust the system in response to changes in threat levels. The system will also be able to semi-automatically rescore and reclassify passengers who have already been authenticated. Increment 6. Enhanced passenger authentication. This increment will allow the system to perform passenger authentication using multiple commercial data sources in instances where little information on a passenger is available from the original commercial data source. Increment 7. Integration of other system users. By the completion of this increment, TSA Aviation Operations and law enforcement organizations will be integrated into CAPPS II, allowing multiple agencies and organizations to conduct manpower planning and resource allocation based on the risk level of the nation, region, airport, or specific flight. Increment 8. Enhanced risk assessments. This increment provides for the installation of capabilities and data sources to enhance risk assessments, which will lower the number of passengers falsely identified for additional screening. This increment also provides for a direct link to the checkpoint for passenger classification, rather than having the passenger’s score encoded on the boarding pass. Increment 9. Completion of system. Increment 9 marks the completion of the system as it moves into full operation and maintenance, which will include around-the-clock support and administration of the system, database, and network, among other things. System functionality to be achieved at revised schedule dates will be less than originally planned. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The security of U.S. commercial aviation is a long-standing concern, and substantial efforts have been undertaken to strengthen it. One such effort is the development of a new Computer-Assisted Passenger Prescreening System (CAPPS II) to identify passengers requiring additional security attention.
The development of CAPPS II has raised a number of issues, including whether individuals may be inappropriately targeted for additional screening and whether data accessed by the system may compromise passengers' privacy. GAO was asked to summarize the results of its previous report that looked at (1) the development status and plans for CAPPS II; (2) the status of CAPPS II in addressing key developmental, operational, and public acceptance issues; and (3) additional challenges that could impede the successful implementation of the system. Key activities in the development of CAPPS II have been delayed, and the Transportation Security Administration (TSA) has not yet completed important system planning activities. TSA is currently behind schedule in testing and developing initial increments of CAPPS II, due in large part to delays in obtaining needed passenger data for testing from air carriers because of privacy concerns. TSA also has not established a complete plan identifying specific system functionality that will be delivered, the schedule for delivery, and estimated costs. The establishment of such plans is critical to maintaining project focus and achieving intended results within budget. Without such plans, TSA is at an increased risk of CAPPS II not providing the promised functionality, of its deployment being delayed, and of incurring increased costs throughout the system's development. TSA also has not completely addressed seven of the eight issues identified by the Congress as key areas of interest related to the development, operation, and public acceptance of CAPPS II. Although TSA is in various stages of progress on addressing each of these eight issues, as of January 1, 2004, only one--the establishment of an internal oversight board to review the development of CAPPS II--has been completely addressed. However, concerns exist regarding the timeliness of the board's future reviews. Other issues, including ensuring the accuracy of data used by CAPPS II, stress testing, preventing unauthorized access to the system, and resolving privacy concerns have not been completely addressed, due in part to the early stage of the system's development. GAO identified three additional challenges TSA faces that may impede the success of CAPPS II. These challenges are developing the international cooperation needed to obtain passenger data, managing the possible expansion of the program's mission beyond its original purpose, and ensuring that identity theft--in which an individual poses as and uses information of another individual--cannot be used to negate the security benefits of the system. GAO believes that these issues, if not resolved, pose major risks to the successful deployment and implementation of CAPPS II. |
The increased threat of terrorism is an urgent national issue. On July 25, 1996, the President directed the establishment of a commission, headed by Vice President Gore, whose charter included reviewing aviation security. The commission was charged with reporting to the President within 45 days its initial findings on aviation security, including plans to (1) deploy technology capable of detecting the most sophisticated explosive devices and (2) pay for that technology. In a classified report, we made recommendations to the Vice President, in his capacity as chairman of the commission, that would enhance the effectiveness of the commission’s work. Detection technologies are also important in the effort to stem the flow of drugs into the United States. Detection technologies are typically developed for specific applications—some for aviation security, some for drug interdiction, and some for both. The major applications for the aviation security efforts of the Federal Aviation Administration (FAA) include the screening of checked baggage, passengers, cargo, mail, and carry-on items such as electronics, luggage, and bottles. FAA’s need for detection technology comes from its security responsibilities involving more than 470 domestic airports and 150 U.S. airlines, which annually board over 500 million passengers with their checked baggage and carry-on luggage and transport mail and cargo. Some advanced detection technologies are commercially available to serve aviation security applications. However, only one of these technologies is currently deployed in the United States, where it is being operationally tested at two U.S. airports. Major applications for the drug interdiction efforts of the U.S. Customs Service include screening of cargo and containers, pedestrians, and vehicles and their occupants. Customs’ need for detection technology emanates from its responsibilities to control 301 ports of entry. Currently, over 400 million people, almost 120 million cars, and 10 million containers and trucks pass through these points each year. Currently, Customs’ screening is done manually by inspectors with relatively little equipment beyond hand-held devices for detecting false compartments in containers. The challenges in detecting explosives are significantly different from the challenges in detecting narcotics, as are the consequences of not detecting them. Customs and other drug enforcement agencies are concerned with much larger quantities than are aviation security personnel. Consequently, because the quantity of explosives capable of bringing down a commercial aircraft is comparatively small, detecting such explosives poses greater technical challenges. Two general groups of technologies, with modifications, can be used to detect both explosives and narcotics. The first group uses X-rays, nuclear techniques involving neutron or gamma ray bombardment, or electromagnetic waves, such as radio frequency waves. These technologies show anomalies in a targeted object that might indicate concealed explosives and narcotics or detect actual explosives and narcotics. The second group, referred to as trace detection technologies, uses chemical analyses to identify particles or vapors characteristic of narcotics or explosives and deposited on, or surrounding, objects, such as carry-on electronics or surfaces of vehicles. In addition to technologies, dogs are considered a unique type of trace detector because they can be trained to respond in specific ways to smells of narcotics or explosives.
Since 1978, the federal government has spent about $246 million for research and development (R&D) on explosives detection technologies, including over $7 million for ongoing demonstration testing at the Atlanta, San Francisco, and Manila airports. During the same period, the government has spent about $100 million for R&D on narcotics technologies and a little more than $20 million procuring a variety of equipment to assist Customs inspectors, such as hand-held devices for detecting false compartments. The majority of the spending has occurred since 1990. As shown in table 1, annual R&D spending on explosives detection technologies fluctuated from $23 million to $28 million during the first part of this decade, before increasing to $39 million for fiscal year 1996. The $14 million, or over 50 percent, increase from fiscal year 1995 is due principally to FAA’s funding of demonstration testing of a technology for screening checked baggage and to the funding of a counterterrorism application by the Technical Support Working Group (TSWG). Annual spending on narcotics detection technology increased during the first part of the decade from $14 million to a peak of $20 million in fiscal year 1994 and then dropped $3 million from that peak, or 15 percent. The reason for this decline is reduced spending by the Department of Defense (DOD) as it shifted emphasis from one type of narcotics detection technology to other, less costly types of technologies to satisfy Customs’ needs. The spending on detection technologies that has occurred since 1990 has been due in large part to congressional direction. The Aviation Security Improvement Act of 1990 (Public Law 101-604) directed FAA to increase the pace of its R&D. The act also set a goal of deploying explosives detection technologies by November 1993. However, it prohibited FAA from mandating deployment of a particular technology until that technology had first been certified as capable of detecting various types and quantities of explosives using testing protocols developed in conjunction with the scientific community. FAA initially concentrated its efforts on developing protocols and technologies for screening checked baggage to address one of the security vulnerabilities that contributed to the bombing of Pan Am flight 103 in December 1988. However, the goal of deploying such technology has still not been met. FAA has certified one system, and it is being operationally tested at two domestic airports and one airport overseas. Congress tasked DOD in 1990 to develop narcotics detection technologies for Customs and other drug enforcement organizations. DOD has focused on developing “non-intrusive inspection” technologies to screen containers without the need for opening them. Customs is deploying a DOD-developed technology for trucks and empty containers, but it rejected another DOD-developed technology for fully loaded containers (see p. 8). Customs has identified containerized cargo at commercial seaports as its greatest unsolved narcotics detection requirement. According to Customs, it may be necessary to explore new methods of financing the systems that are technologically feasible for seaports, but high in cost. Both aviation security and drug interdiction depend on a complex mix of intelligence, procedures, and technologies, which can partially substitute for each other in terms of characteristics, strengths, and limitations. 
For example, FAA evaluates information from the intelligence community in determining a level of threat and mandating security procedures appropriate to a specific time and place. These security procedures include bag matching and passenger profiling. FAA estimates that incorporating bag matching in everyday security could cost up to $2 billion, while profiling could reduce the number of passengers requiring additional screening to 20 percent. Customs’ drug interdiction task has an analogous set of procedures, technologies, and trade-offs. Relevant trade-offs in selecting detection technologies for a given application involve their characteristics and costs, including issues of their effectiveness in detecting explosives or narcotics, safety risks to users of the technology, and impacts on the flow of commerce. For example, some highly effective technologies could be deployed now, but they are expensive, raise safety concerns, or slow the flow of commerce. These trade-offs are required for each of the major detection technology applications for FAA and Customs. While areas of overlap exist, FAA’s aviation security applications generally relate to checked baggage, passengers, and carry-on items, and Customs’ drug interdiction applications generally relate to screening of cargo, containers, vehicles, and baggage. In addition to detection technologies, teams of dogs and their handlers are used for both aviation security and drug interdiction applications. A system is available today for screening checked baggage that has been certified by FAA as capable of detecting various types and quantities of explosives likely to be used to cause catastrophic damage to a commercial aircraft, as required by the Aviation Security Improvement Act of 1990. However, the certified system is costly and has operational limitations, including a designed throughput of about 500 bags an hour, with actual throughput much lower. Other less costly and faster systems are available, but they cannot detect all the amounts, configurations, and types of explosive material likely to be used to cause catastrophic damage to commercial aircraft. FAA’s plans for developing detection technologies for checked baggage include efforts to improve the certified system, develop new technologies, and evaluate a mix of technologies. FAA believes that an appropriate mix of systems that individually do not meet certification requirements might eventually work together to detect the amounts, configurations, and types of explosive material that are required by the act. Appendix I provides additional information about the various types of technologies available and under development for screening checked baggage, including the characteristics and limitations of those technologies, their status, the estimated range of prices for the technologies, and federal government funding for the technologies. The National Research Council recently reported that X-ray and electromagnetic technologies produce images of sufficient quality to make them effective for screening passengers for concealed explosives. Future development efforts by FAA and TSWG are generally focusing on devices that detect explosives on boarding documents that passengers have handled and on portals that passengers would walk through. One type of portal uses trace detection technologies that collect and analyze traces from the passengers’ clothing or vapors surrounding them.
The other type uses electromagnetic waves to screen passengers for items hidden under clothing. The National Research Council also recently observed that successful deployment of these technologies is likely to depend on the public’s perception about the seriousness of the threat and the effectiveness of devices in countering the threat, which might also be considered intrusive or thought to be a health risk. (See App. II for more information about the various types of technologies available and under development for passenger screening.) Technologies available today for screening carry-ons for hidden explosives include conventional X-ray machines, an electromagnetic system, and trace detection devices. FAA has recently developed trace detection standards for inspecting carry-on electronics for explosives. In addition, FAA has “assessed as effective,” but not certified, three trace detection systems to be used during periods of heightened security. FAA expects to soon “assess as effective” three more trace detection systems. The more expensive trace technologies used for carry-on baggage are capable of detecting smaller amounts of explosives and narcotics. FAA’s future efforts are expected to include developing an enhanced X-ray device and screeners for bottles. (See app. III for more detailed information about technologies for screening carry-on items.) Tests have shown that fully loaded containers can be effectively screened for narcotics with available high energy X-ray technologies (about 8 million electron volts or the equivalent of 50 to 70 times the energy of a typical airport-passenger X-ray). However, Customs rejected a DOD-developed high energy technology because it cost $12 million to $15 million per location, required a large amount of land for shielding, and raised safety concerns. Available low-energy technologies (the equivalent of 3 to 4 times the energy of a passenger X-ray) are less costly and safer but cannot penetrate full containers, so their use is limited to screening for hidden compartments in empty containers and objects concealed in trucks and trailers. About 4 to 25 containers per hour can be processed through low- and high-energy X-ray technologies depending on their configurations. According to DOD and Customs officials, future efforts in container screening will include developing less expensive X-ray systems with higher energy levels, mobile X-ray systems, and more capable hand-held trace detection systems. Those efforts will also include evaluating nuclear-based techniques for inspecting empty tankers at truck and rail ports. (See app. IV for additional information about technologies for screening cargo and containers.) Dogs can be trained to alert their handlers upon detecting explosives and narcotics. FAA-certified dogs are trained to detect various types of explosive substances that might be concealed in aircraft, airport vehicles, baggage, cargo, and terminals. Customs’ dogs are trained to detect narcotics and in 1994 almost 6,000 drug seizures were attributable to dog teams. Currently funded projects include efforts to develop methods of bringing air samples to the dogs, or swabs from objects they are to inspect. Despite the limitations of currently available detection technologies, other countries have deployed some of these technologies to detect explosives and narcotics because of differences in their perception of the threat and their approaches to counter the threat. 
These countries’ experiences provide opportunities to learn lessons about operational measures taken to deploy detection technologies, such as the extent of airport modifications needed to incorporate new technologies and the types of training provided to the operators of the new equipment, as well as the actual effectiveness of the technologies. While Customs has deployed equipment such as hand-held devices, it is also deploying up to 12 low-energy X-ray systems to screen empty containers and trucks for narcotics along the Southwest border. On the other hand, some countries are using high-energy systems to screen fully loaded containers. The high-energy systems installed at ports of entry in the United Kingdom, France, Germany, and China would have similar uses at seaports here, but Customs officials told us that the systems are too new for reliable operational data. They also told us that tests have not been conducted against Customs’ requirements and that the technologies would be too expensive in the quantities needed for nationwide deployment. A high-energy nuclear system is being considered for deployment at the Euro Tunnel between France and the United Kingdom. The system would be used to screen for explosives concealed in trucks and their cargo being transported under the English Channel. This system could also be used to detect narcotics. In the United Kingdom, Germany, the Netherlands, and Belgium, we observed governments working closely with airport authorities to deploy explosives detection technologies. In two countries, airport authorities have generally embraced an approach that entails successive levels of review to resolve uncertainty about checked baggage. This approach can require complex systems for tracking bags throughout the entire baggage handling system. Instead of using only the FAA-certified system for checked baggage, these countries are using a mix of technologies. Their approach has been to implement technology that is an improvement on existing technology or procedures, rather than waiting for perfected technology. Officials in the two other countries are waiting for the next generation of explosives detection technologies. They believe that X-ray technologies have generally reached their limits in detecting explosives. All of the countries have also deployed trace detection technology for screening checked baggage or carry-on items, especially electronics. FAA officials told us they cannot mandate the types of approaches used by other countries, although airlines could voluntarily adopt them, because of the statutory prohibition against mandating technology that is not certified. With a combination of the best available technologies and procedures, including the use of the certified system for screening checked baggage, FAA estimates the incremental cost of the most effective security system for U.S. air travelers to be $6 billion over the next 10 years. On a per-passenger basis, FAA estimates the equivalent cost to be about $1.30 per one-way ticket. Customs and FAA have deployed dog teams widely. Customs has deployed about 450 dog teams to airports, seaports, and land border ports. The cost to train a Customs dog and handler is about $6,000. FAA’s canine explosives detection program includes 29 U.S. airports with a total of 72 FAA-trained and certified dog teams. Of the 19 largest U.S. airports, 14 have FAA-trained and certified dogs.
The five airports without certified dogs are Washington-National, Washington-Dulles, Baltimore-Washington International, New York-John F. Kennedy, and Honolulu. According to an FAA official, these airports do not have FAA-certified dog teams because airport officials are concerned about cost. The cost to train an FAA dog and handler is about $17,000 and the annual operating cost of a team, including the handler’s salary, is about $60,000. Five agencies—FAA, DOD, Customs, TSWG, and ONDCP—provided comments on the technical accuracy of information contained in a draft of this report. We have incorporated their comments in this final report where appropriate. To determine the amount of federal government spending for R&D on explosives and narcotics detection technologies, we obtained funding information from Customs, FAA, DOD, ONDCP, and TSWG covering periods as far back as the information was available. Although we identified the historical and current levels of funding, we generally focused on the period 1990 to the present because most technologies were developed and deployed during this period. To obtain information on the characteristics and limitations of available and planned technologies for containers, checked baggage, passengers, and carry-on items, we requested project information from the same five agencies for each detection technology project they had undertaken since 1990. Additionally, we received briefings from developers of technology and manufacturers of equipment currently available on the market. We analyzed major categories of technologies to identify a few characteristics common to each that can be used in making comparisons. We did not attempt to evaluate the effectiveness of the technologies, nor did we assess whether the current funding level is adequate to develop reliable detection technologies. We interviewed officials and gathered data primarily from the FAA, DOD, Customs, ONDCP, and TSWG to develop information on available and planned detection technologies. We also interviewed officials and visited ports of entry in Miami, Florida; San Juan, Puerto Rico; and Otay Mesa, California; and airports in Belgium, Germany, the Netherlands, United Kingdom, and the United States. We are sending copies of this report to the Vice President of the United States; Chairmen and Ranking Minority Members of appropriate congressional committees; the Secretaries of Treasury, State, Defense, and Transportation; the Attorney General, Department of Justice; the Administrators, FAA and Drug Enforcement Administration; the Commissioner, U.S. Customs Service; and the Directors, ONDCP, Central Intelligence, and Federal Bureau of Investigation. If you or your staff have any questions concerning explosives detection technology, please contact Gerald L. Dillingham at (202) 512-2834. If you have any questions regarding narcotics detection technologies, please call David E. Cooper on (202) 512-4841. Major contributors to this report are listed in appendix V. Funding (FYs 78-96) X-ray source rotates around a bag obtaining a large number of cross-sectional images that are integrated by a computer, which displays densities of objects in the bag. $850,000 to $1 million $22.2 million (FAA) Automatically alarms when objects with high densities, characteristic of explosives, are detected. Relatively slow throughput; certified system requires two units to meet throughput requirement. Commercially available. Achieved Federal Aviation Administration (FAA) certification in December 1994. 
FAA currently funding operational testing at three airports and also funding projects to improve throughput rate, reduce unit cost, and improve overall capabilities. Department of Defense (DOD) recently tested technology for detecting drugs in small packages. Two different X-ray energies determine the densities and average atomic numbers of the target material. Commercially available. FAA is developing an enhanced version that may meet certification standards. The U.S. Customs Service (Customs) plans to test this technology for drug detection. $2.1 million (FAA) Currently none of the X-rays in this group meets certification standards for checked bags because they do not detect the quantities and configurations of the full range of explosives specified in the standards. (continued) Funding (FYs 78-96) Backscatter detects reflected X-ray energy, providing an additional image to highlight organic materials such as explosives and drugs near the edge of a bag. Commercially available. FAA has several projects aimed at assisting this group of X-ray devices meet certification standards. $100,000 to $140,000 $100,000 (Customs) $2.2 million (FAA) This group of X-ray devices generally does not automatically alarm and therefore requires an operator to interpret the image. Technology is based on the detection of scatter patterns as X-rays interact with crystal lattice structures of materials. FAA and Customs terminated projects due to significant technical problems. A foreign government and contractor are supporting development of this technology. $4.5 million (FAA) $270,000 (Customs) Accelerator produces gamma rays that penetrate bags to detect presence of chlorine compounds in narcotics. DOD is building a prototype to demonstrate proof-of-principle for airport baggage carousel application. Demonstration is expected in December 1996. $8.6 million (DOD) Eventual system expected to be very expensive. Six machines built and tested since 1989. FAA discontinued checked baggage portion of project in 1994, but it is now investigating carry-on application. DOD contractor now using FAA machines to test drug detection. radioactive source probe bags for presence of nitrogen or chlorine compounds. $6.6 million (FAA) $280,000 (DOD) $27,000 (Customs) Automatically alarms on explosives or narcotics. Cost, size, and false alarm rate were of concern to airline industry, President’s Commission on Terrorism and Aviation Security, and Customs. (continued) Funding (FYs 78-96) Radio frequency pulses probe bags to elicit unique responses from explosives and drugs. Nonimaging technology that provides chemically specific detection and automatically alarms on explosives or drugs. Commercially available. FAA has a prototype capable of detecting two types of explosives. Customs has a prototype capable of detecting cocaine base. $1 million (DOD) $350,000 Office of National Drug Control Policy (ONDCP) $0.7 million (FAA) $1.6 million Technical Support Working Group (TSWG) Currently does not meet FAA certification standards. Detection of certain cocaine compounds needs improvement. The Funding column indicates whether a specific technology was developed or is being developed for explosives detection, narcotics detection, or both. Generally, FAA and TSWG funding has supported explosives detection, while funding by DOD, Customs, and ONDCP has supported narcotics detection. 
Where a technology funding cell shows FAA or TSWG in combination with DOD, Customs, or ONDCP, that technology is generally capable of detecting both narcotics and explosives. Funding (FYs 78-96) System is nonimaging, but will automatically alarm if drug is detected in the digestive tract of a swallower. Requires about 30 seconds to screen a suspect. Prototype developed and tested at an airport. Project was terminated because system emitted radio frequencies that interfered with airport operations and Customs decided against spending additional $165,000 on needed shielding. System is now sitting idle at a Customs’ storage facility. $1.3 million (ONDCP) $123,000 (Customs) System will scan 360 degrees around a passenger and automatically pinpoint the location of all undeclared objects on the surface of the body. Under development by FAA. Factory and airport testing to occur in 1997. $110,000 to $200,000 $1.6 million (FAA) System will be capable of processing 500 passengers per hour. System provides 360- degree imaging of the human body in order to detect weapons, explosives, and drugs concealed underneath clothing. Under development by FAA. Fieldable prototype to be completed mid-1997 with airport testing to follow. $100,000 to $200,000 $5.3 million (FAA) System does not provide automatic detection, but relies on an operator to spot the contraband. System expected to process 360- 600 passengers per hour. (continued) Funding (FYs 78-96) Under development by FAA. Fieldable prototype completed in 1995. Factory and airport testing will begin in late 1996. $4.0 million (FAA) clothing collect vapor and particles while passengers are walking through the portal. System will automatically alarm if explosive is detected. Throughput is estimated to be 360 per hour. Air flow dislodges vapor or particles from passengers walking through portals to test for explosives. Two prototypes are being developed by FAA. $300,000 to $500,000 $2.5 million (FAA) Systems automatically alarm if explosive is detected. Throughput goal is 360 per hour. Trace samples collected from passengers’ hands either through a token or document. Under development by FAA. Field prototype to be available sometime in 1996. $65,000 to $85,000 $125,000 (FAA) System will automatically alarm if explosive is detected. Throughput is estimated to be 425 per hour. IMS Document Screeners Collects trace samples $65,000 to $85,000 $430,000 (TSWG) from passengers’ documents. Under development by TSWG. Project started in April 1996 and to be completed in 1998. System will automatically alarm if explosive is detected. Throughput is estimated to be 450 per hour. (Table notes on next page) The Funding column indicates whether a specific technology was developed or is being developed for explosives detection, narcotics detection, or both. Generally, FAA and TSWG funding has supported explosives detection, while funding by DOD, Customs, and ONDCP has supported narcotics detection. Where a technology funding cell shows FAA or TSWG in combination with DOD, Customs, or ONDCP, that technology is generally capable of detecting both narcotics and explosives. Funding (FYs 78-96) Measures mobility of various chemicals through a gas in an electrical field. Commercially available. For example, 125 units of a particular IMS system have been deployed overseas. $45,000 to $152,000 $2.3 million (FAA) Fast, portable, and inexpensive. Lower chemical specificity than mass spectrometry. 
$100,000 to $170,000 chromatography and mass spectrometry or chemiluminescence that separates mixtures using an absorbent material. Commercially available. For example, 154 units of a chemiluminescence system have been deployed overseas. $2 million (FAA) $230,000 (TSWG) High sensitivity and chemical specificity. Produces evidence acceptable in court. Expensive, slow, and bulky. Do not automatically alarm, so dependent on operator interpretation of enhanced images. Under development. $325,000 (FAA) $250,000 (TSWG) Limited penetration of target objects. (continued) Funding (FYs 78-96) Radio frequency pulses probe hags to elicit unique responses from explosives and drugs. Commercially available. A field prototype capable of handling small size packages was tested in Atlanta during the Olympics by airlines to screen electronics. This is a product derived from funding the same technology listed in appendix I. Nonimaging technology that provides chemically specific detection and automatically alarms on explosives or drugs. Detection of certain cocaine compounds needs improvement. System uses microwave technology to penetrate bottles and will discover when bottles do not contain the liquid that is expected. It is basically a discovery rather than detection system. This is an FAA in-house project working with a commercially available device. FAA is currently testing field prototypes. $19,000 to $25,000 $77,000 (FAA) System does not identify the liquid in the bottle. System throughput is expected to be 720 bottles per hour. However, system is unable to penetrate certain types of bottles. Automatically alarms if explosives detected. Prototypes are available. $75,000 to $125,000 $974,000 (FAA) Analysis time varies between 20 and 70 seconds per target. Manufacturer is working to shorten analysis time. The Funding column indicates whether a specific technology was developed or is being developed for explosives detection, narcotics detection, or both. Generally, FAA and TSWG funding has supported explosives detection, while funding by DOD, Customs, and ONDCP has supported narcotics detection. Where a technology funding cell shows FAA or TSWG in combination with DOD, Customs, or ONDCP, that technology is generally capable of detecting both narcotics and explosives. Funding (FYs 78-96) An accelerator generates gamma rays to penetrate the object to be screened. The gamma rays are preferentially absorbed by nitrogen nuclei. A significant decrease in the number of detected gamma rays indicates the possible presence of explosives. Project was originally intended for checked bags and has been inactive since 1993. FAA may reactivate project for screening air cargo containers. $12.1 million (FAA) System requires less shielding than other nuclear technologies. An accelerator generates neutrons for bombarding target; induced gamma rays are measured to detect presence of narcotics or explosives. System automatically alarms based on 3 dimensional images of elemental ratios of hydrogen, oxygen, nitrogen, and carbon. DOD completed the project, but the system was not transitioned to Customs due to Customs’ concern with cost, size, operational, and safety issues. FAA conducted limited testing for checked baggage application in 1993 and it is now considering a new project for screening air cargo. TSWG is funding a counterterrorism application. 
$8 to $10 million $19 million (DOD) $ 5.3 million (FAA) $6.2 million (TSWG) System takes 20 minutes per analysis and would typically be combined with an X-ray system to speed throughput. Requires a large amount of space and shielding, a radiation permit, and an FDA permit for use on food. Also uses an accelerator to generate fast neutrons to probe bags; measurement of the transmitted neutron spectrum is used to detect explosives. FAA has two ongoing projects and now believes technology might be more suitable for screening air cargo or containerized checked baggage than individual bags. $3.5 million (FAA) (continued) Funding (FYs 78-96) System is designed for propane and other gas or liquid tanker trucks but is adaptable to scan railcars. Prototype being evaluated by DOD and Customs. $382,000 (ONDCP) While open and unsheltered, system requires a radiation permit to operate. Systems are designed to scan loaded trucks/containers and have throughput of 12-25 per hour depending on configurations. Commercially available. DOD completed the project in Tacoma, Washington, but system was not transitioned to Customs due to Customs’ concerns with cost, safety, and operational issues. $12 to $15 million $15 million (DOD) $224,000 (Customs) Required extensive shielding, radiation permit, and FDA permit if used on food. System relies on operator’s interpretation of the X-ray images. System is designed to scan empty trucks or containers. Throughput is about six trucks per hour. Commercially available. Customs has deployed one machine at Otay Mesa, California, and plans to deploy up to 11 more along the Southwest border. $3.7 million (DOD) Relies on operator’s interpretation of the X-Ray images. Systems are designed to scan empty or loaded trucks and containers depending on the energy level and to complement the fixed-site X-ray systems. DOD is testing 450 KeV system and still developing machines at other energy levels. $1.75 to $6 million $10.8 million (DOD) A 1 MeV system is designed for aircraft size cargo containers. May also be useful for scanning passenger vehicles. (continued) Funding (FYs 78-96) Radio frequency wave probes objects, except that a magnet aligns hydrogen atoms prevalent in liquids. Abandoned machine is in storage at major Southeastern seaport. $130,000 (Customs) Abandoned FAA prototype for checked baggage was modified for Customs to scan frozen shrimp packages. Machine short-circuited during storm and Customs decided against spending for machine repair. Systems are based on gas chromatography, chemiluminescence, mass spectroscopy, surface acoustic wave, ion mobility spectroscopy, and biosensor technologies. Many commercially available. DOD is developing some prototypes for use by Customs. $2,500 to $170,000 $240,000 (Customs) $2.4 million (TSWG) $4.7 million (DOD) Sample collection steps are highly critical for the effectiveness of systems. Most existing systems use vacuum or wiping with a swab. Most existing systems are not currently capable of detecting the extremely low vapor pressures of cocaine and heroin. (continued) Funding (FYs 78-96) System differs from other vapor detectors in that it draws air sample from a barometric chamber into which the object to be inspected has been shaken and subjected to heat cycles. Under development by FAA. A fieldable prototype is expected to be tested by October 1996. $1.8 million (TSWG) System automatically alarms if explosive is detected. System may not work on a tightly sealed object. System concentrates 400 litres of air to .5 cc of liquid. 
Under development by FAA. $35,000 to $42,000 $1.3 million (FAA) Biosensor specifically identifies the explosives detected. System is suitable for use in cargo holds and interiors of aircraft, etc. The Funding column indicates whether a specific technology was developed or is being developed for explosives detection, narcotics detection, or both. Generally, FAA and TSWG funding has supported explosives detection, while funding by DOD, Customs, and ONDCP has supported narcotics detection. Where a technology funding cell shows FAA or TSWG in combination with DOD, Customs, or ONDCP, that technology is generally capable of detecting both narcotics and explosives. Thomas F. Noone Matthew E. Hampton Marnie S. Shaul Gerald L. Dillingham | Pursuant to a congressional request, GAO provided information on explosives and narcotics detection technologies that are available or under development, focusing on: (1) funding for those technologies; (2) characteristics and limitations of available and planned technologies; and (3) deployment of these technologies by the United States and foreign countries.
GAO found that: (1) aviation security and drug interdiction depend on a complex and costly mix of intelligence, procedures, and technologies; (2) since 1978, federal agencies have spent about $246 million for research and development on explosives detection technologies and almost $100 million on narcotics detection technologies; (3) most of this spending has occurred since 1990, in response to congressional direction, and has been for technologies to screen checked baggage, trucks, and containers; (4) difficult trade-offs must be made when considering whether to use detection technologies for a given application; (5) chief among those trade-offs are the extent to which intelligence-gathering and procedures can substitute for technology or reduce the need for expensive technology; (6) decisionmakers also need to evaluate technologies in terms of their characteristics and limitations; (7) some technologies are very effective and could be deployed now, but they are expensive, slow the flow of commerce, and raise issues of worker safety; (8) other technologies could be more widely used, but they are less reliable; (9) still others may not be available for several years at the current pace of development; (10) despite the limitations of the currently available technology, some countries have already deployed advanced explosives and narcotics detection equipment because of differences in their perception of the threat and their approaches to counter the threat; (11) should the United States start deploying the currently available technologies, lessons can be learned from these countries regarding their approaches, as well as capabilities of technology in operating environments; and (12) the Federal Aviation Administration estimates that use of the best available procedures and technology for enhancing aviation security could cost as much as $6 billion over the next 10 years or alternatively about $1.30 per one-way ticket, if the costs were paid through a surcharge. |
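As a rough check on the per-ticket figure cited in this report, the arithmetic can be sketched as follows. The annual trip count is an assumption based on the passenger volume cited earlier (over 500 million passengers boarded annually), not a figure FAA used, so the result only approximates FAA's roughly $1.30 estimate.

```python
# Back-of-the-envelope check of the FAA surcharge estimate (illustrative only).
total_cost = 6e9                 # $6 billion over 10 years (FAA estimate)
years = 10
annual_one_way_trips = 500e6     # assumed: "over 500 million passengers" per year

cost_per_ticket = total_cost / (years * annual_one_way_trips)
print(f"${cost_per_ticket:.2f} per one-way ticket")  # about $1.20; FAA cites ~$1.30
```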
The nation’s transportation system is a vast, interconnected network of diverse modes. Key modes of transportation include aviation; highways; motor carrier (i.e., trucking); motor coach (i.e., intercity bus); maritime; pipeline; rail (passenger and freight); and transit (e.g., buses, subways, ferry boats, and light rail). The transportation modes work in harmony to facilitate mobility through an extensive network of infrastructure and operators, as well as through the vehicles and vessels that permit passengers and freight to move within the system. For example, the nation’s transportation system moves over 30 million tons of freight and provides approximately 1.1 billion passenger trips each day. The diversity and size of the transportation system make it vital to our economy and national security, including military mobilization and deployment. Given the important role the transportation system plays in our economy, security, and everyday life, it is considered a critical infrastructure. The USA PATRIOT Act defines critical infrastructure as those “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” In the National Strategy for Homeland Security, the administration identifies the transportation system as one of the 13 critical infrastructure sectors that must be protected. The administration’s National Strategy for the Physical Protection of Critical Infrastructures and Key Assets defines the plan for protecting our critical infrastructures and key assets, including the transportation system, from terrorist attacks. This strategy also outlines the guiding principles that will underpin the nation’s efforts to secure the infrastructures vital to national security, governance, the economy, and public confidence. The strategy is designed to serve as a foundation for building and fostering the necessary cooperation between government, private industry, and citizens in protecting critical infrastructures. Private industry, state and local governments, and the federal government all have roles and responsibilities in securing the transportation system. Private industry owns and operates a large share of the transportation system. For example, almost 2,000 pipeline companies and 571 railroad companies own and operate the pipeline and freight railroad systems, respectively. Additionally, 83 passenger air carriers and 640,000 interstate motor coach and motor carrier companies operate in the United States. State and local governments also own significant portions of the highways, transit systems, and airports in the country. For example, state and local governments own over 90 percent of the total mileage of highways. State and local governments also administer and implement regulations for different sectors of the transportation system and provide protective and emergency response services through various agencies. Although the federal government owns a limited share of the transportation system, it issues regulations, establishes policies, provides funding, and/or sets standards for the different modes of transportation.
The federal government uses a variety of policy tools, including grants, loan guarantees, tax incentives, regulations, and partnerships, to motivate or mandate state and local governments or the private sector to help address security concerns. Prior to September 11, DOT was the primary federal entity involved in transportation security matters. However, in response to the attacks on September 11, Congress passed the Aviation and Transportation Security Act (ATSA), which created TSA within DOT and defined its primary responsibility as ensuring security in all modes of transportation. The act also gives TSA regulatory authority over all transportation modes. Since its creation in November 2001, TSA has focused primarily on meeting the aviation security deadlines contained in ATSA. With the passage of the Homeland Security Act on November 25, 2002, TSA, along with over 20 other agencies, was transferred to the new Department of Homeland Security (DHS). Throughout the world, all modes of transportation have been targets of terrorist attacks. For example, aviation has long been an attractive target for terrorists. Aircraft hijackings became a regular occurrence in the 1970s, leading to the first efforts in aviation security. In 1988, a Pan Am flight was bombed over Lockerbie, Scotland, killing all 259 on board. In 1995, a plot to bomb as many as 11 U.S. airliners was discovered. Most recently, U.S. aircraft were hijacked on September 11, 2001, and crashed into the World Trade Center in New York City, the Pentagon in Washington, D.C., and a field in Pennsylvania, killing about 3,000 people and destroying billions of dollars’ worth of property. Public surface transportation systems have also been a common target for terrorist attacks around the world. For example, the first large-scale terrorist use of a chemical weapon occurred in 1995 on the Tokyo subway system. In this attack, a terrorist group released sarin gas on a subway train, killing 11 people and injuring 5,500. According to the Mineta Transportation Institute, surface transportation systems were the target of more than 195 terrorist attacks from 1997 through 2000. The United States maintains the world’s largest and most complex national transportation system. Improving the security of such a system is fraught with challenges for both public and private entities. To provide safe transportation for the nation, these entities must overcome issues common to all modes of transportation as well as issues specific to the individual modes of transportation. Although each mode of transportation is unique, they all face some common challenges in trying to enhance security. Common challenges stem from the extensiveness of the transportation system, the interconnectivity of the system, funding security improvements, and the number of stakeholders involved in transportation security. The size of the transportation system makes it difficult to adequately secure. The transportation system’s extensive infrastructure crisscrosses the nation and extends beyond our borders to move millions of passengers and tons of freight each day. (See fig. 1 for maps of the different transportation modes.) The extensiveness of the infrastructure as well as the sheer volume of freight and passengers moved through the system creates an infinite number of targets for terrorists. Furthermore, as industry representatives and transportation security experts repeatedly noted, the extensiveness of the infrastructure makes it impossible to equally protect all assets. 
Protecting transportation assets from attack is made more difficult because of the tremendous variety of transportation operators. Some are multibillion-dollar enterprises, while others have very limited facilities and very little traffic. Some are public agencies, such as state departments of transportation, while some are private businesses. The type of freight moved through the different modes is similarly varied. For example, the maritime, motor carrier, and rail operators haul freight as diverse as dry bulk (grain) and hazardous materials. Additionally, some transportation operators carry passengers while others haul freight. Additional challenges are created by the interconnectivity and interdependency among the transportation modes and between the transportation sector and nearly every other sector of the economy. The transportation system is interconnected or intermodal because passengers and freight can use multiple modes of transportation to reach a destination. For example, from its point of origin to its destination, a piece of freight, such as a shipping container, can move from ship to train to truck. (See fig. 2.) The interconnected nature of the transportation system creates several security challenges. First, events directed at one mode of transportation can have ripple effects throughout the entire system. For example, when a labor dispute shut down ports in California, Oregon, and Washington in 2002, the railroads saw their intermodal traffic decline by almost 30 percent during the first week of the shutdown, compared with the year before. Second, the interconnecting modes can contaminate each other—that is, if a particular mode experiences a security breach, the breach could affect other modes. An example of this would be if a shipping container that held a weapon of mass destruction arrived at a U.S. port and was placed on a truck or train. In this case, although the original security breach occurred in the port, the rail or trucking industry would be affected as well. Thus, even if operators within one mode established high levels of security, they could still be affected by the security efforts, or lack thereof, of the other modes. Third, intermodal facilities where a number of modes connect and interact—such as ports—are potential targets for attack because of the presence of passengers, freight, employees, and equipment at these facilities. (See fig. 3.) Interdependencies also exist between transportation and nearly every other sector of the economy. Consequently, an event that affects the transportation sector can have serious impacts on other industries. For example, when the war in Afghanistan began in October 2001, the rail industry stated that it restricted the movement of many hazardous materials, including chlorine, because of a heightened threat of a terrorist attack. However, within days, many major water treatment facilities reported that they were running out of chlorine, which they use to treat drinking water, and would have to shut down operations if chlorine deliveries were not immediately resumed. Additionally, the transportation system can be affected by other sectors. For example, representatives of the motor coach industry told us that the drop in tourism has negatively affected motor coach profits. Securing the transportation system is made more difficult because of the number of stakeholders involved.
As illustrated in figure 4, numerous entities at the federal, state, and local levels, including over 20 federal entities and thousands of private sector businesses, play a key role in transportation security. For example, the Departments of Energy, Transportation, and Homeland Security, state governments, and about 2,000 pipeline operators are all responsible for securing the pipeline system. The number of stakeholders involved in transportation security can lead to communication challenges, duplication, and conflicting guidance. Representatives from several state and local government and industry associations told us that their members are receiving different messages from the various federal agencies involved in transportation security. For instance, one industry representative noted that both TSA and DOT asked the industry to implement additional security measures when the nation's threat condition was elevated to orange at the beginning of the Iraq War; however, TSA and DOT were not consistent in what they wanted done—that is, they were asking for different security measures. Moreover, many representatives commented that the federal government needs to better coordinate its security efforts. These representatives noted that dealing with multiple agencies on the same issues and topics is frustrating and time consuming for the transportation sector. The number of stakeholders also makes it difficult to achieve the needed cooperation and consensus to move forward with security efforts. As we have noted in past reports, coordination and consensus-building are critical to successful implementation of security efforts. Transportation stakeholders can have inconsistent goals or interests, which can make consensus-building challenging. For example, from a safety perspective, vehicles that carry hazardous materials should be required to have placards that identify the contents of the vehicle so that emergency personnel know how best to respond to an incident. However, from a security perspective, placards identifying the hazardous materials a vehicle carries make it a potential target for attack. According to transportation security experts and state and local government and industry representatives we contacted, funding is the most pressing challenge to securing the nation's transportation system. While some security improvements are inexpensive, such as removing trash cans from subway platforms, most require substantial funding. Additionally, given the large number of assets to protect, the sum of even relatively less expensive investments can be cost-prohibitive. For example, reinforcing shipping containers to make them more blast-resistant is one way to improve security, at a cost of about $15,000 per container. With several million shipping containers in use, however, this tactic would cost billions of dollars if all of them were reinforced. The total cost of enhancing the security of the entire transportation system is unknown; however, given the size of the system, it could amount to tens of billions of dollars. The magnitude of the potential cost is illustrated by several examples: The President's fiscal year 2004 budget request for TSA includes about $4.5 billion for aviation security. According to TSA, this funding will be used for security screeners, air marshals, aviation-related research and development, and surveillance detection techniques, among other things.
The total estimated cost of the identified security improvements at eight mass transit agencies we visited was about $711 million. The Coast Guard estimates the cost of implementing the new International Maritime Organization security code and the security provisions in the Maritime Transportation Security Act of 2002 to be approximately $1.5 billion for the first year and $7.4 billion over the succeeding decade. The American Association of State Highway and Transportation Officials (AASHTO) estimates that enhancing highway and transit security will cost $2 billion annually in capital costs and $1 billion in operating costs. The current economic environment makes this a difficult time for private industry or state and local governments to make security investments. According to industry representatives and experts we contacted, most of the transportation industry operates on a very thin profit margin, making it difficult to pay for additional security measures. The sluggish economy has further weakened the transportation industry's financial condition by decreasing ridership and revenues. For example, the airlines are in the worst financial crisis in their history, and several have filed for bankruptcy. Similarly, the motor coach and motor carrier industries and Amtrak report decreased revenues because of the slow economy. In addition, nearly every state and local government is facing a large budget deficit for fiscal year 2004. For example, the National Governors Association estimates that states are facing a total budget shortfall of $80 billion in the upcoming year. Given the tight budget environment, state and local governments and transportation operators must make difficult trade-offs between transportation security investments and other needs, such as service expansion and equipment upgrades. According to the National Association of Counties, many local governments are planning to defer some maintenance of their transportation infrastructure to pay for some security enhancements. Further exacerbating the problem of funding security improvements are the additional costs the transportation sector incurs when the federal government elevates the national threat condition. Industry representatives stated that operators tighten security, such as increasing security patrols, when the national threat condition is raised or intelligence information suggests an increased threat against their mode. However, these representatives stated that these additional measures drain resources and are not sustainable. For example, Amtrak estimates that it spends an additional $500,000 per month for police overtime when the national threat condition is increased. Transportation industry representatives also noted that employees are diverted from their regular duties to implement additional security measures, such as guarding entranceways, in times of increased security, which hurts productivity. The federal government has provided additional funding for transportation security since September 11, but demand has far outstripped the additional amounts made available. For example, Congress appropriated a total of $241 million for grants for ports, motor carriers, and Operation Safe Commerce in 2002. However, as table 1 shows, the grant applications received by TSA for these security grants totaled $1.8 billion—more than 7 times the amount available.
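The size of that funding gap follows directly from the figures cited above (a simple illustrative calculation using only the amounts reported here):

$$
\frac{\text{grant applications received}}{\text{funds available}} = \frac{\$1.8\ \text{billion}}{\$241\ \text{million}} \approx 7.5
$$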
Due to the costs of security enhancements and the transportation industries' and state and local governments' tight budget environments, the federal government is likely to be viewed as a source of funding for at least some of these enhancements. However, given the constraints on the federal budget as well as competing claims for federal assistance, requests for federal funding for transportation security enhancements will likely continue to exceed available resources. Another challenge is balancing the potential economic impacts of security enhancements with the benefits of such measures. While there is broad support for greater security, this task is a difficult one because the nation relies heavily on a free and expeditious flow of goods. Particularly with "just-in-time" deliveries, which require a smooth and expeditious flow through the transportation system, delays or disruptions in the supply chain could have serious economic impacts. As the Coast Guard Commandant stated about the flow of goods through ports, "even slowing the flow long enough to inspect either all or a statistically significant random selection of imports would be economically intolerable." Furthermore, security measures may have economic and competitive ramifications for individual modes of transportation. For instance, if the federal government imposed a particular security requirement on the rail industry and not on the motor carrier industry, the rail industry might incur additional costs and/or lose customers to the motor carrier industry. Striking the right balance between increasing security and protecting the economic vitality of the nation and of the individual modes will remain an important and difficult task. In addition to the overarching challenges that transportation stakeholders will face in attempting to improve transportation security, they also face a number of challenges specific to the aviation, maritime, and land transportation modes. Although aviation security has received a significant amount of attention and funding since September 11, more work is needed. In general, transportation security experts believe that the aviation system is more secure today than it was prior to September 11. However, aviation experts and TSA officials noted that significant vulnerabilities remain, including the following: Perimeter security: Terrorists could launch attacks, such as firing shoulder-fired missiles, from a location just outside an airport's perimeter. Since September 11, airport operators have increased their patrols of airport perimeter areas, but industry officials state that they do not have enough resources to completely protect against these attacks. Air cargo security: Although TSA has focused much effort and funding on ensuring that bombs and other threat items are not carried onto planes by passengers or in their luggage, vulnerabilities exist in securing the cargo carried aboard commercial passenger and all-cargo aircraft. For example, employees of shippers and freight forwarders are not universally subject to background checks. Theft is also a major problem in air cargo shipping, indicating that unauthorized personnel may be gaining access to air cargo shipments. Air cargo shipments pass through several hands in going from sender to recipient, making it challenging to implement a system that provides adequate security for air cargo.
According to TSA officials, TSA is developing a strategic plan to address air cargo security and has undertaken a comprehensive outreach process to strengthen security programs across the industry. General aviation security: While TSA has taken several actions related to general aviation since September 11, this segment of the industry remains potentially more vulnerable than commercial aviation. For example, general aviation pilots are not screened before takeoff, and the contents of a plane are not examined at any point. According to TSA, solutions that can be implemented relatively easily at the nation's commercial airports are not practical at the 19,000 general aviation airports. It would be very difficult to prevent a general aviation pilot who is intent on committing a terrorist attack with his or her aircraft from doing so. The vulnerability of the system was illustrated in January 2002, when a teenage flight student in Florida crashed his single-engine airplane into a Tampa skyscraper. TSA is working with the appropriate stakeholders to close potential security gaps and to raise security standards across this diverse segment of the aviation industry. Maritime and land transportation systems have their own unique security vulnerabilities. For example, maritime and land transportation systems generally have an open design, meaning that users can access the systems at multiple points. The systems are open by design so that they are accessible and convenient for users. In contrast, the aviation system is housed in closed and controlled locations with few entry points. The openness of the maritime and land transportation systems can leave them vulnerable because transportation operators cannot monitor or control who enters or leaves the systems. However, adding security measures that restrict the flow of passengers or freight through the systems could have serious consequences for commerce and the public. Individual maritime and land transportation modes also have unique challenges and vulnerabilities. For example, representatives from the motor carrier industry noted that the high turnover rate (about 40 to 60 percent) of drivers means that motor carrier operators must continually conduct background checks on new drivers, which is expensive and time-consuming. Additionally, representatives from the motor coach industry commented that the number of used motor coaches on the market, coupled with the lack of guidance or requirements on buying or selling these vehicles, is a serious vulnerability. In particular, there are approximately 5,000 used motor coaches on the market; however, there is very little information on who is selling and buying them, nor is there any consistency among motor coach operators in whether they remove their logos from the vehicles before they are sold. These vehicles could be used as a weapon or to transport a weapon. Federal Motor Carrier Safety Administration officials told us they have not issued guidance to the industry on this potential vulnerability because TSA is responsible for security and therefore would be responsible for issuing such guidance. Since September 11, transportation operators and state and local governments have been working to strengthen security, according to associations we contacted. Although security was a priority before September 11, the terrorist attacks elevated the importance and urgency of transportation security for transportation operators and state and local governments.
The industry has been consistently operating at a heightened state of security since September 11. State and local governments have also made transportation security investments since September 11. According to representatives from a number of industry associations we interviewed, transportation operators have implemented new security measures or increased the frequency or intensity of existing activities. Some of the most common measures cited include: Conducted vulnerability or risk assessments: Many transportation operators conducted assessments of their systems to identify potential vulnerabilities, critical infrastructure or assets, and corrective actions or needed security improvements. For example, the railroad industry conducted a risk assessment that identified over 1,300 critical assets and served as a foundation for the industry's security plan. Tightened access control: Many transportation operators have tightened access control to their facilities and equipment by installing fences and requiring employees to display identification cards, among other things. For example, some motor carrier operators have installed fences around truck yards and locked inventory at night. Intensified security presence: Some transportation operators have increased the number of police or security personnel who patrol their systems. For example, transit agencies have placed surveillance equipment, alarms, or security personnel at access points to subway tunnels, bus yards, and other nonpublic places and required employees to wear brightly colored vests for increased visibility. Increased emergency drills: Many transportation operators have increased the frequency of emergency drills. For example, Amtrak reported that it has conducted two full-scale emergency drills in New York City and is currently trying to arrange a drill at Union Station in Washington, D.C. The purpose of emergency drilling is to test emergency plans, identify problems, and develop corrective actions. Figure 5 is a photograph from an annual emergency drill conducted by the Washington Metropolitan Area Transit Authority. Developed or revised security plans: Transportation operators developed security plans or reviewed existing plans to determine what changes, if any, needed to be made. For example, DOT's Office of Pipeline Safety worked with the industry to develop performance-oriented security guidance. The Office of Pipeline Safety also encouraged all pipeline operators to develop security plans and directed operators with critical facilities to develop security plans for these facilities. Provided additional training: Many transportation operators have participated in and/or conducted additional training on security or antiterrorism. For example, the United Motorcoach Association is developing an online security training program for motor coach operators, using funds from the Intercity Bus Security Grant Program. Similarly, many transit agencies attended seminars conducted by FTA or by the American Public Transportation Association. Some transportation industries have also implemented more innovative security measures, according to associations we contacted. For example, the natural gas industry modeled the impact of pipeline outages on the natural gas supply in the Northeast, which helped to identify vulnerabilities and needed improvements. The motor carrier industry developed a program called the Highway Watch Program, supported by the American Trucking Associations.
The program is a driver-led, state-organized safety system that since September 11 has included a security component. Specifically, drivers are provided terrorism awareness training and are encouraged to report suspicious activities they witness on the road to a Highway Watch Program call center, which is operated 24 hours a day, 7 days a week. The call center then directs the call to the appropriate authorities. As we have previously reported, state and local governments are critical stakeholders in the nation's homeland security efforts. This is equally true in securing the nation's transportation system. State and local governments play a critical role, in part, because they own a significant portion of the transportation infrastructure, such as airports, transit systems, highways, and ports. For example, state and local governments own over 90 percent of the total mileage of the highway system. Even when state and local governments are not the owners or operators, they nonetheless are directly affected by the transportation modes that run through their jurisdictions. Consequently, the responsibility for protecting this infrastructure and responding to emergencies involving the transportation infrastructure often falls to state and local governments. Security efforts of local and state governments have included developing counterterrorism plans, participating in training and security-related research, participating in transportation operators' emergency drills and tabletop exercises, conducting vulnerability assessments of transportation assets, and participating in emergency planning sessions with transportation operators. Some state and local governments have also hired additional law enforcement personnel to patrol transportation assets. Much of the funding for these efforts has been covered by the state and local governments, with the bulk of the expenses going to personnel costs, such as additional law enforcement officers and overtime. Congress, DOT, TSA, and other federal agencies have taken numerous steps to enhance transportation security since September 11. The roles of the federal agencies in securing the nation's transportation system, however, are in transition. Prior to September 11, DOT had primary responsibility for the security of the transportation system. In the wake of September 11, Congress created TSA and gave it responsibility for the security of all modes of transportation. However, DOT and TSA have not yet formally defined their roles and responsibilities in securing all modes of transportation. Furthermore, TSA is moving forward with plans to enhance transportation security. For example, TSA plans to issue security standards for all modes. DOT modal administrations are also continuing their security efforts for the different modes of transportation. Congress has acted to enhance the security of the nation's transportation system since September 11. In addition to passing the Aviation and Transportation Security Act (ATSA), Congress has passed numerous other pieces of legislation aimed at improving transportation security. For example, Congress passed the USA PATRIOT Act of 2001, which mandates federal background checks of individuals operating vehicles carrying hazardous materials, and the Homeland Security Act, which created DHS and moved TSA to the new department. Congress also provided funding for transportation security enhancements through various appropriations acts.
For example, the 2002 Supplemental Appropriations Act, in part, provided (1) $738 million for the installation of explosives detection systems in commercial service airports, (2) $125 million for port security activities, and (3) $15 million to enhance the security of intercity bus operations. (See app. IV for a listing of the key pieces of transportation security-related legislation that have been passed since September 11.) Federal agencies, notably TSA and DOT, have also taken steps to enhance transportation security since September 11. In its first year of existence, TSA worked to establish its organization and focused primarily on meeting the aviation security deadlines contained in ATSA. In January 2002, TSA had 13 employees to tackle securing the nation's transportation system—1 year later, TSA had about 65,000 employees. TSA reports that it met over 30 deadlines during 2002 to improve aviation security, including two of its most significant deadlines—to deploy federal passenger screeners at airports across the nation by November 19, 2002, and to screen every piece of checked baggage for explosives by December 31, 2002. According to TSA, other completed TSA activities included the following: recruiting, hiring, training, and deploying about 56,000 federal screeners; awarding grants for port security; and implementing a performance management system and strategic planning activities to create a results-oriented culture. As TSA worked to establish itself and improve the security of the aviation system, DOT modal administrations acted to enhance the security of air, land, and maritime transportation. As table 2 shows, the actions taken by DOT modal administrations varied. For example, FTA launched a multipart initiative for mass transit agencies, which provided grants for emergency drills, offered free security training, conducted security assessments at 36 transit agencies, provided technical assistance, and invested in research and development. The Federal Motor Carrier Safety Administration developed three courses for motor coach drivers. The response of the various DOT modal agencies to the threat of terrorist attacks on the transportation system has varied due to differences in authority and resource limitations. In addition to TSA and DOT modal administrations, other federal agencies have also taken actions to improve security. For example, the Bureau of Customs and Border Protection (CBP), previously known as the U.S. Customs Service, has played a key role in improving port security. Since September 11, the agency has launched a number of initiatives to strengthen the security of the U.S. border, including ports. The initiatives are part of a multilayered approach, which relies on partnerships between foreign nations and the U.S. to identify problems at their source, cooperation from the global trade community to secure the flow of goods, and collaboration among federal, state, and local law enforcement and intelligence agencies to ensure that information is analyzed and used to target scarce resources on the highest-risk issues. Some of the specific initiatives that CBP has implemented to interdict high-risk cargo before it reaches the U.S. include the following: Developing and deploying a strategy for the detection of nuclear and radiological weapons and materials. The elements of this strategy—equipment, training, and intelligence—are focused on providing inspectors with the tools to detect weapons of mass destruction in cargo containers and vehicles.
In the maritime environment, this includes the deployment of radiation portal monitors, personal radiation detectors, and large-scale nonintrusive inspection technology, such as truck and container x-rays and mobile x-ray vans. Much of the development of this equipment has been done in partnership with the Department of Energy. Figure 6 shows new mobile gamma ray imaging devices at ports that help inspectors examine the contents of cargo containers and vehicles. Establishing the Customs Trade Partnership Against Terrorism (C-TPAT), which is a joint government-business initiative aimed at securing the supply chain of global trade against terrorist exploitation. According to CBP, this initiative has leveraged the cooperation of the owners of the global supply chain by working with this community to implement and share standard security best practices. The members of C-TPAT include importing businesses, freight forwarders, carriers, and U.S. port authorities and terminal operators. According to CBP, C-TPAT members account for 96 percent of all containers coming into the U.S. After the initial application and training phase of this program, CBP conducts foreign and domestic validations to verify that the supply chain security measures contained in C-TPAT participants' security profiles are reliable, accurate, and effective. C-TPAT members are strongly encouraged to self-police such areas as personnel screening, physical security procedures and personnel, and the security of service providers. Launching the Container Security Initiative (CSI), which is designed specifically to secure the ocean-going sea container. The key elements of CSI include using advance information to identify high-risk containers; inspecting containers identified through the prescreening process as high-risk before they are shipped to the U.S.; using detection technology to quickly inspect containers identified as high-risk; and developing and using smarter, more secure containers. According to CBP, the U.S. has signed agreements with 18 of the countries with the world's largest seaports, which allows for the deployment of U.S. inspectors and equipment to these foreign seaports, and is beginning the expansion of CSI to other global ports with significant volume or strategic locations. TSA is moving forward with efforts to secure the entire transportation system. TSA has adopted a systems approach—that is, a holistic rather than a modal approach—to securing the transportation system. In addition, TSA is using risk management principles to guide its decision-making. To help TSA make risk-based decisions, TSA is developing standardized criticality, threat, and vulnerability assessment tools. TSA is also planning to establish security standards for all modes of transportation and is launching a number of new security efforts for the maritime and land transportation modes. TSA is taking a systems approach to securing the transportation system. Using this approach, TSA plans to address the security of the entire transportation system as a whole, rather than focusing on individual modes of transportation. According to TSA officials, using a systems approach to security is appropriate for several reasons. First, the transportation system is intermodal, interdependent, and international. Given the intermodalism of the system, incidents in one mode of transportation could affect other modes.
Second, it is important not to drive terrorism from one mode of transportation to another mode because of perceived lesser security—that is, make a mode of transportation a more attractive target because another mode is “hardened” with additional security measures. Third, it is important that security measures for one mode of transportation are not overly stringent or too economically challenging compared with others. Fourth, it is important that the attention on one aspect of transportation security (e.g., cargo, infrastructure, or passengers) does not leave the other aspects vulnerable. The systems approach is reflected in the organizational structure of TSA’s Office of Maritime and Land Security, which is responsible for the security of the maritime and land modes of transportation. Rather than organize around the different modes of transportation, such as DOT’s modal administrations, the office is organized around cross-modal issues. As figure 7 shows, the Office of Maritime and Land Security has six divisions, including Cargo Security and Passenger Security. The director of each division will be responsible for a specific aspect of security of multiple modes. For example, the Director of Cargo Security will be responsible for cargo security for all surface modes of transportation. TSA has adopted a risk management approach for its efforts to enhance the security of the nation’s transportation system. A risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions in order to link resources with prioritized efforts. Table 3 describes this approach. As figure 8 illustrates, the highest priorities emerge where the three elements of risk management overlap. For example, transportation infrastructure that is determined to be a critical asset, vulnerable to attack, and a likely target would be at most risk and therefore would be a higher priority for funding compared with infrastructure that was only vulnerable to attack. According to TSA officials, risk management principles will drive all decisions—from standard setting to funding priorities to staffing. Using risk management principles to guide decision-making is a good strategy, given the difficult trade-offs TSA will likely have to make as it moves forward with its security efforts. We have advocated using a risk management approach to guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. As representatives from local government and industry associations and transportation security experts repeatedly noted, the size of the transportation system precludes all assets from being equally protected; moreover, the risks vary by transportation assets within modes and by modes. In addition, requests for funding for transportation security enhancements will likely exceed available resources. Risk management principles can help TSA determine security priorities and identify appropriate solutions. Other transportation stakeholders are also using risk management principles. For example, the rail industry conducted a comprehensive risk analysis of its infrastructure, which included an assessment of threats, vulnerabilities, and criticality. The results of the risk analysis formed the basis for the rail industry’s security management plan, which identified countermeasures for the different threat levels. 
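To illustrate how such an approach links assessments to priorities, the following is a minimal sketch of a risk-based ranking in which each asset receives threat, vulnerability, and criticality scores and the three are combined into a single risk score. The assets, scores, and the simple multiplicative scoring rule are hypothetical and do not represent TSA's, the rail industry's, or any other stakeholder's actual methodology.

```python
# Minimal sketch of risk-based prioritization: rank assets by combining
# threat, vulnerability, and criticality scores (all values hypothetical).

from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    threat: float         # likelihood the asset is targeted, 0 to 1
    vulnerability: float  # susceptibility of the asset to attack, 0 to 1
    criticality: float    # relative importance if lost or damaged, 0 to 1


def risk_score(asset: Asset) -> float:
    # The highest priorities emerge where all three elements overlap,
    # so an asset must score high on every factor to rank near the top.
    return asset.threat * asset.vulnerability * asset.criticality


assets = [
    Asset("Major intermodal port terminal", threat=0.8, vulnerability=0.6, criticality=0.9),
    Asset("Downtown transit tunnel", threat=0.7, vulnerability=0.5, criticality=0.8),
    Asset("Rural highway bridge", threat=0.2, vulnerability=0.7, criticality=0.3),
]

# Rank assets from highest to lowest risk to inform funding and protection priorities.
for asset in sorted(assets, key=risk_score, reverse=True):
    print(f"{asset.name}: risk = {risk_score(asset):.2f}")
```

In this sketch, an asset that scores high on only one factor (for example, vulnerable but neither critical nor likely to be targeted) falls toward the bottom of the ranking, mirroring the overlap logic described above.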
The pipeline industry is likewise using a risk management approach in securing its infrastructure. The Office of Pipeline Safety and industry associations noted that the pipeline industry had adopted a risk management approach for safety prior to September 11. As a result, the industry extended this approach to its security efforts after September 11.

TSA Is Developing Standard Assessment Tools to Help Make Risk-Based Decisions

To help TSA make risk-based decisions, TSA's Office of Threat Assessment and Risk Management is developing two assessment tools that will help assess threats, criticality, and vulnerabilities. The first tool will assess the criticality of a transportation asset or facility. TSA is working with DHS' Information Analysis and Infrastructure Protection (IAIP) Directorate to ensure that TSA's criticality tool will be consistent with IAIP's approach for managing critical infrastructure. TSA's criticality tool will incorporate multiple factors, such as fatalities, economic importance, and socio-political importance, to arrive at a criticality score. The score will enable TSA, in conjunction with transportation stakeholders, to rank assets and facilities within each mode. According to TSA, by identifying and prioritizing assets and facilities, TSA can focus resources on what is deemed most important. The second tool is referred to as the Transportation Risk Assessment and Vulnerability Evaluation Tool (TRAVEL). This tool will assess threats and analyze vulnerabilities for all transportation modes. According to TSA officials, TSA has worked with a number of organizations in developing TRAVEL, including the Department of Defense, Sandia National Laboratories, and AASHTO. TSA is also working with economists on developing the benefit/cost component of this model. TSA officials believe that a standard threat and vulnerability assessment tool is needed so that TSA can identify and compare threats and vulnerabilities across the modes. If different methodologies are used in assessing threats and vulnerabilities, comparisons can be problematic; a standard assessment tool would ensure a consistent methodology. Using TRAVEL, TSA plans to gather comparable threat and vulnerability information across all modes of transportation, which would inform TSA's risk-based decision-making. TSA plans to issue national standards for the security of all modes of transportation. The federal government has historically set security standards for the aviation sector. For instance, prior to the passage of ATSA, FAA set security standards that the airlines were required to follow in several areas, including screening equipment, screener qualifications, and access control systems. In contrast, prior to the September 11 attacks, limited statutory authority existed to require measures to ensure the security of the maritime and land transportation systems. According to a TSA report, the existing regulatory framework leaves the maritime and land transportation systems unacceptably vulnerable to terrorist attack. For example, the rail, transit, and motor coach transportation systems are subject to no mandatory security requirements, resulting in little or no screening of passengers, baggage, or crew. Additionally, seaborne passenger vessel and seaport terminal operators have inconsistent levels and methods of screening, and are largely free to set their own rules about the hiring and training of security personnel.
Hence, TSA will set standards to ensure consistency among modes and across the transportation system and to reduce the transportation system's vulnerability to attacks. TSA plans to begin rolling out the standards in the summer of 2003. According to TSA officials and documents, TSA's standards will be performance-, risk-, and threat-based and, in some cases, mandatory. More specifically: Standards will be performance-based. Rather than being prescriptive, TSA's standards will be performance-based, which will allow transportation operators to determine how best to achieve the desired level of security. TSA officials believe that performance-based standards provide operator flexibility, allow operators to use their professional judgment in enhancing security, and encourage technology advancement. Standards will be risk-based. Standards will be set for areas for which assessments of the threats, vulnerabilities, and criticality indicate that an attack would have a national impact. A number of factors could be considered in determining "national impact," such as fatalities and economic damage. Standards will be threat-based. The standards will be tied to the national threat condition and/or local threats. As the threat condition escalates, the standards will require transportation operators to implement additional countermeasures (a simplified illustration of this escalation appears at the end of this discussion). Standards may be mandatory. The standards will be mandatory when the risk level is too high or unacceptable. TSA officials stated that in these cases, mandatory standards are needed to ensure accountability. In addition, according to TSA officials, voluntary requirements put security-conscious transportation operators that implement security measures at a competitive disadvantage—that is, they have spent money that their competitors may not have spent. This creates a disincentive for transportation operators to implement voluntary requirements. TSA officials believe that mandatory standards will reduce this problem. In determining whether mandatory standards are needed, TSA will review the results of criticality and vulnerability assessments, current best practices, and voluntary compliance opportunities in conjunction with the private sector and other government agencies. Although TSA officials expect some level of resistance to the standards from the transportation industry, they believe that their approach of using risk-, threat-, and performance-based standards will increase acceptance of the standards. For example, performance-based standards allow operators more flexibility in implementation than rigid, prescriptive standards. Moreover, TSA plans to issue only a limited number of standards—that is, standards will be issued only when assessments of the threats, vulnerabilities, and criticality indicate that the level of risk is too high or unacceptable. TSA also expects some level of resistance to the standards from DOT modal administrations. Although TSA will establish the security standards, TSA expects that they will be administered and implemented by existing agencies and organizations. DOT modal administrations may be reluctant to assume this role because it could alter their relationships with the industry. Historically, DOT surface transportation modal administrations' missions have largely focused on maintaining operations and improving service and safety, not regulating security. Moreover, the authority to regulate security varies by DOT modal administration.
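To make the threat-based concept concrete, the escalation of required countermeasures with the national threat condition can be sketched as a simple lookup. The threat conditions, measures, and cumulative mapping below are hypothetical illustrations, not TSA's actual standards, which had not been issued at the time of this report.

```python
# Hypothetical sketch: cumulative countermeasures a threat-based standard
# might require as the national threat condition escalates.

REQUIRED_MEASURES = {
    "yellow": ["review security plan", "verify employee identification"],
    "orange": ["increase security patrols", "restrict facility access points"],
    "red": ["post continuous guard presence", "inspect all vehicles entering the facility"],
}

ESCALATION_ORDER = ["yellow", "orange", "red"]


def measures_for(threat_condition: str) -> list[str]:
    """Return every countermeasure required at or below the given threat condition."""
    if threat_condition not in ESCALATION_ORDER:
        raise ValueError(f"unknown threat condition: {threat_condition}")
    required: list[str] = []
    for level in ESCALATION_ORDER:
        required.extend(REQUIRED_MEASURES[level])
        if level == threat_condition:
            break
    return required


print(measures_for("orange"))
# ['review security plan', 'verify employee identification',
#  'increase security patrols', 'restrict facility access points']
```

Because the standards are also intended to be performance-based, an operator would choose how to satisfy each required measure; the standard would specify the outcome expected at each threat condition rather than the method.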
FTA, for example, has limited authority to regulate and oversee security at transit agencies. In contrast, FRA has regulatory authority for rail security, and DOT's Office of Pipeline Safety has responsibility for writing safety and security regulations on liquefied natural gas storage facilities. In addition, DOT modal administrations may be reluctant to administer and implement standards because of resource concerns. FHWA officials commented that, given the current uncertainty about the standards and their impacts, FHWA is reluctant to commit, in advance, to staff or funding to enforce new security standards. Because transportation stakeholders will be involved in administering, implementing, and/or enforcing TSA standards, stakeholder buy-in is critical to the success of this initiative. Compromise and consensus on the part of stakeholders are also necessary. However, achieving such consensus and compromise may be difficult, given the conflicts between some stakeholders' goals and interests.

Stakeholders Are Concerned About Pending Standards

Transportation stakeholders expressed concerns about TSA's plan to issue mandatory security standards for all modes of transportation. A common concern raised by associations was that the standards will be unfunded mandates unless the federal government pays for the standards it promulgates. According to the industry and state and local government associations we spoke to, unfunded mandates create additional financial burdens for transportation operators, who are already experiencing financial difficulties. TSA officials said they hope to provide grants to implement the standards; however, it is unclear at this time whether grants will be available. Another common concern expressed by transportation security experts and industry associations is that TSA does not have the necessary expertise or knowledge to develop appropriate security standards for the industry. In a 2003 report to Congress, TSA recognized that each transportation mode has unique characteristics that make various security measures more or less feasible or appropriate. However, a number of industry associations, transportation security experts, and DOT modal administrations expressed concern that TSA does not have a good understanding of the unique challenges of the modes, such as the need to maintain accessibility in transit systems, or of the possible negative ramifications—both operational and financial—of standards. Officials from one DOT modal administration noted that industry representatives left a meeting with TSA officials with serious concerns regarding TSA officials' understanding of their industry. Senior TSA officials stated that TSA employees have extensive subject matter expertise in transportation and security issues. Moreover, TSA officials stated that they will draw on the expertise and knowledge of the transportation industry and other DHS agencies, such as the Coast Guard, as well as all stakeholders in developing the standards. A number of representatives from industry associations also expressed concerns that TSA may issue mandatory or regulatory standards, especially since their industries have taken proactive steps to enhance security since September 11. Industry associations also noted that the majority of transportation infrastructure in some modes is privately owned. As such, transportation operators have an economic incentive to ensure the security of their infrastructure; hence, operators are voluntarily implementing increased security measures.
For example, the pipeline industry worked with DOT's Office of Pipeline Safety to develop industry-wide security guidelines. These guidelines are risk-based, identify countermeasures that pipeline operators should implement at different threat levels, and are voluntary. According to pipeline industry associations, the pipeline industry is implementing these security guidelines. Representatives from industry associations stated that TSA should wait to see whether industry-developed, voluntary measures are working before issuing mandatory standards. TSA officials noted that TSA will review the results of criticality and vulnerability assessments, current best practices, and voluntary compliance opportunities in conjunction with the private sector and other government agencies before issuing mandatory standards. Finally, industry representatives expressed concern that TSA has not adequately included the transportation industry in its development of standards. Many industry representatives and some DOT officials we met with were unsure whether TSA was issuing standards, what the standards would entail, or the time frames for issuing the standards. The uncertainty about the pending standards can lead to confusion and/or inaction. For example, Amtrak officials noted that they are reluctant to spend money to implement certain security measures because they are worried that TSA will subsequently issue standards that will require Amtrak to redo its efforts. TSA officials repeatedly told us they understand the importance of gaining stakeholder buy-in and partnering with the industry. They also stated that they have conducted outreach to transportation stakeholders and plan to continue their outreach efforts in the future. TSA is developing a strategy that will serve as its framework for communicating with transportation stakeholders and obtaining stakeholders' input in TSA's decision-making. TSA plans to finalize this strategy in July 2003. TSA is also working on a number of additional security efforts, such as establishing the Transportation Worker Identification Credential (TWIC) program, developing the next generation of the Computer-Assisted Passenger Prescreening System, developing a national transportation system security plan, and exploring methods to integrate operations and security, among other things. The TWIC program is intended to improve access control for the 12 million transportation workers who require unescorted physical or cyber access to secure areas of the nation's transportation modes by establishing a uniform, nationwide standard for secure identification of transportation workers. Specifically, TWIC will combine standard background checks and biometrics so that a worker can be positively matched to his or her credential. Once the program is fully operational, the TWIC would be the standard credential for transportation workers and would be accepted by all modes of transportation. According to TSA, developing a uniform, nationwide standard for identification will minimize redundant credentialing and background checks. As TSA moves forward with new security initiatives, DOT modal administrations are also continuing their security efforts and, in some cases, launching new security initiatives. For example, FHWA is coordinating a series of workshops this year on emergency response and preparedness for state departments of transportation and other agencies.
FTA also has a number of current initiatives under way in the areas of public awareness, research, training, technical assistance, and intelligence sharing. For example, FTA developed a list of the top 20 security actions transit agencies should implement and is currently working with transit agencies to assist them in implementing these measures. FTA's goal is to have the 30 largest agencies implement at least 80 percent of these measures by the end of fiscal year 2003. FAA is also continuing its efforts to enhance cyber security in the aviation system. Although the primary responsibility for securing the aviation system was transferred to TSA, FAA remains responsible for protecting the nation's air traffic control system—both the physical security of its air traffic control facilities and the computer systems. The air traffic control system's computers help the nation's air traffic controllers safely direct and separate traffic—sabotaging this system could have disastrous consequences. FAA is moving forward with efforts to increase the physical security of its air traffic control facilities and ensure that contractors who have access to the air traffic control system undergo background checks. The roles and responsibilities of TSA and DOT in transportation security have yet to be clearly delineated, which creates the potential for duplicative or conflicting efforts as both entities move forward with their security efforts. DOT modal administrations were primarily responsible for the security of the transportation system prior to September 11. In November 2001, Congress passed ATSA, which created TSA and gave it primary responsibility for securing all modes of transportation. However, during TSA's first year of existence, TSA's main focus was on aviation security—more specifically, on meeting ATSA deadlines. While TSA was primarily focusing on aviation security, DOT modal administrations launched various initiatives to enhance the security of the maritime and land transportation modes. With the immediate crisis of meeting many aviation security deadlines behind it, TSA has been able to focus more on the security of all modes of transportation. Legislation has not defined TSA's role and responsibilities in securing all modes of transportation. In particular, ATSA does not specify TSA's role and responsibilities in securing the maritime and land transportation modes in the detail that it does for aviation security. For instance, the act does not set deadlines for TSA to implement certain transit security requirements. Instead, the act simply states that TSA is responsible for ensuring security in all modes of transportation. The act also did not eliminate DOT modal administrations' existing statutory responsibilities for securing the different transportation modes. Moreover, recent legislation indicates that DOT still has security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and of the transport of hazardous materials by all modes. To clarify their roles and responsibilities in transportation security, DOT modal administrations and TSA were planning to develop memorandums of agreement.
The purpose of these documents was to define the roles and responsibilities of the different agencies as they relate to transportation security and to address a variety of issues, including separating safety and security activities, interfacing with the transportation industry, and establishing funding priorities. TSA and the DOT modal administrations worked for months to develop the memorandums of agreement. The draft agreements were presented to senior DOT and TSA management for review in early spring of this year. According to DOT's General Counsel, with the exception of the memorandum of agreement between FAA and TSA, the draft memorandums were very generic and did not provide much clarification. Consequently, DOT and TSA decided not to execute or sign the memorandums of agreement, except for the memorandum of agreement between FAA and TSA, which was signed on February 28, 2003. The General Counsel suggested several reasons why the majority of the draft memorandums of agreement were too general. First, as TSA's departure date approached (that is, the date that TSA transferred from DOT to DHS), TSA and DOT modal administration officials may have grown concerned about formally binding the organizations to specific roles and responsibilities. Second, the working relationships between TSA and most of the DOT modal administrations are still very new; as a result, all of the potential issues, problem areas, or areas of overlap have yet to be identified. Thus, identifying items to include in the memorandums of agreement was more difficult. Rather than execute memorandums of agreement, the Secretary of Transportation and the Administrator of TSA exchanged correspondence that commits each entity to continued coordination and collaboration on security measures. In the correspondence, the Secretary and the Administrator also agreed to use the memorandum of agreement between TSA and FAA as a framework for their interactions on security matters for all other modes. TSA and DOT officials stated that they believe memorandums of agreement are a good strategy for delineating roles and responsibilities and that they would be open to using memorandums of agreement in the future. Transportation security experts and representatives of state and local government and industry associations we contacted generally believe that the transportation system is more secure today than it was prior to September 11. Transportation stakeholders have worked hard to strengthen the security of the system. Nevertheless, transportation experts, industry representatives, and federal officials all recommend that more work be done. Transportation experts and state and local government and industry representatives identified a number of actions that, in their view, should be implemented to enhance security, including clarifying federal roles and coordinating federal efforts, developing a transportation security strategy, funding security enhancements, investing in research and development, and providing better intelligence information and related guidance. The experts and representatives generally believe that these actions are the responsibility of the federal government. Clear delineation of federal roles and responsibilities is a core issue in transportation security, according to transportation experts and associations that we contacted. The lack of clarity about the roles and responsibilities of federal actors in transportation security creates the potential for confusion, duplication, and conflicts.
Understanding roles, responsibilities, and whom to call is crucial in an emergency. However, representatives from several associations stated that their members were unclear about which agency to contact for their various security concerns and which agency has oversight of certain issues. Furthermore, they do not have contacts within these agencies. As mentioned earlier, several industry representatives reported that their members are receiving different messages from various federal agencies involved in transportation security, which creates confusion and frustration among the industry. They said the uncertainty about federal roles and the lack of coordination are straining intergovernmental relationships, draining resources, and raising the potential for problems in responding to terrorism. One industry association told us, for instance, that it has been asked by three different federal agencies to participate in three separate studies of the same issue. According to transportation experts and associations we contacted, a national transportation security strategy is essential to moving forward with transportation security. It is crucial for helping stakeholders identify priorities, leverage resources, establish performance expectations, and create incentives to improve security. Currently, local government associations view the absence of performance expectations—coupled with limited threat information—as a major obstacle in focusing their people and resources on high-priority threats, particularly at elevated threat levels. The experts also noted that modal strategies—no matter how complete—cannot address the full transportation security problem and will leave gaps in preparedness. As mentioned earlier, TSA is in the process of developing a national transportation system security plan, which, according to the Deputy Administrator of TSA, will provide an overarching framework for the security of all modes. Transportation security experts and association representatives we contacted believe that the federal government should provide funding for needed security improvements. While an overall security strategy is a prerequisite to investing wisely, providing adequate funding is also essential. Setting security goals and strategies without adequate funding diminishes stakeholders' commitment and willingness to absorb initial security investments and long-term operating costs, one expert emphasized. Industry and state and local government associations also commented that federal funding should accompany any federal security standards; otherwise, these standards will be considered unfunded mandates that the industry and state and local governments have to absorb. The federal government needs to play a strong role in investing in and setting a research and development agenda for transportation security, according to most transportation security experts and associations we contacted. They view this as an appropriate role for the federal government, since the products of research and development endeavors would likely benefit the entire transportation system, not just individual modes or operators.
TSA is actively engaged in research and development projects at its Transportation Security Laboratory in Atlantic City, New Jersey, such as the development of next-generation explosive detection systems for baggage, hardening of aircraft and cargo/baggage containers, biometrics and other access control methods, and human factors initiatives to identify methods to improve screener performance. However, TSA noted that continued adequate funding for research and development is paramount if TSA is to meet security demands with up-to-date and reliable technology. Transportation security experts and representatives from state and local government and industry associations stated that the federal government needs to play a vital role in sharing information—specifically, intelligence information and related guidance. Representatives from numerous associations commented that the federal government needs to provide timely, localized, actionable intelligence information. General threat warnings are not helpful. Rather, transportation operators want more specific intelligence information so that they can understand the true nature of a potential threat and implement appropriate security measures. Without more localized and actionable intelligence, stakeholders said they run the risk of wasting resources on unneeded security measures or not providing an adequate level of security. Moreover, local government officials often are not allowed to receive specific intelligence information because they do not have appropriate federal security clearances. Also, there is little federal guidance on how local authorities should respond to specific threats or general threat warnings. For example, San Francisco police were stationed at the Golden Gate Bridge to respond to the elevated national threat condition. However, without information about the nature of the threat to San Francisco's large transportation infrastructure or clear federal expectations for a response, it is difficult to judge whether actions like this are the most effective use of police protection, according to representatives from a local government association. During its first year of existence, TSA met a number of challenges, including successfully meeting many congressional deadlines for aviation security. With the immediate crisis of meeting key aviation security deadlines behind it, TSA can now examine the security of the entire transportation system. As TSA becomes more active in securing the maritime and land transportation modes, it will become even more important that the roles of TSA and the DOT modal administrations are clearly defined. A lack of clearly defined roles among the federal entities could lead to duplication and confusion. More importantly, it could hamper the transportation sector’s ability to prepare for and respond to attacks. To clarify and define the roles and responsibilities of TSA and the DOT modal administrations in transportation security matters, we recommend that the Secretary of Transportation and the Secretary of Homeland Security use a mechanism, such as a memorandum of agreement, to clearly delineate their roles and responsibilities. At a minimum, this mechanism should establish the responsibilities of each entity in setting, administering, and implementing security standards and regulations, determining funding priorities, and interfacing with the transportation industry, as well as define each entity’s role in the inevitable overlap of some safety and security activities.
After the roles and responsibilities of each entity are clearly defined, this information should be communicated to all transportation stakeholders. We provided DOT, DHS, and Amtrak with a draft of this report for review and comment. Amtrak generally agreed with our findings and recommendation and provided some technical comments, which we have incorporated into this report where appropriate. DOT and DHS generally agreed with the report’s findings. However, they disagreed with the conclusion and recommendation that their roles and responsibilities need to be clarified and defined. The two departments stated that the roles and responsibilities of each entity are clear—that is, DHS has primary responsibility for transportation security and DOT will play a supporting role in such matters. We agree that the Aviation and Transportation Security Act (ATSA) gave TSA primary responsibility for securing all modes of transportation. However, neither this act nor other legislation defined TSA’s roles and responsibilities in securing all modes of transportation. Specifically, ATSA does not specify TSA’s roles and responsibilities in securing the maritime and land transportation modes in detail as it does for aviation security. The act also did not eliminate the DOT modal administrations’ existing statutory responsibilities for securing the different modes of transportation. Moreover, recent legislation clarifies that DOT still has transportation security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. In addition, although DOT and DHS believe their roles and responsibilities are clearly defined, transportation security stakeholders we contacted are not as certain. For example, representatives from several associations stated that their members were unclear as to which agency to contact for their various security concerns and which agency has oversight for certain issues. Representatives from several associations also told us that their members are receiving different messages from the various federal agencies involved in transportation security. Furthermore, as noted in the report, both TSA and DOT are moving forward with transportation security efforts. As both entities continue with their security efforts, it is important that the roles and responsibilities of each entity are coordinated and clearly defined. The lack of clarity can lead to duplication, confusion, and/or gaps in preparedness. We therefore continue to recommend that DOT and DHS use a mechanism, such as a memorandum of agreement, to clarify and define the DOT modal administrations’ and TSA’s roles and responsibilities in transportation security. After the roles and responsibilities of each entity are clearly defined, this information should be communicated to all transportation stakeholders. DOT and DHS also noted that the title of the draft report, Transportation Security: More Federal Coordination Needed to Help Address Security Challenges, as well as our conclusions and recommendations, place too much emphasis on coordination. To better capture our conclusions and recommendations—that is, that the roles and responsibilities of TSA and DOT in security matters should be clearly delineated and communicated to all transportation security stakeholders—we have changed the report’s title to Transportation Security: Federal Action Needed to Help Address Security Challenges.
However, we disagree that the report places too much emphasis on the lack of coordination between DOT and DHS. As noted above, representatives from several associations told us that their members have received conflicting messages from the federal agencies involved in transportation security. Moreover, there appears to be a breakdown in communication between TSA and DOT about current security initiatives. For example, although TSA officials stated that they have informed DOT about their plans to issue security standards, some DOT officials we met with were unsure as to whether TSA was issuing standards, what the standards would entail, or the time frames for issuing the standards. In addition to their written comments, DHS and DOT provided technical comments to our draft, which we have incorporated into the report where appropriate. See appendixes II and III for DOT’s and DHS’s comments and our responses. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Secretary of Transportation, the Secretary of Homeland Security, the Administrator of the Transportation Security Administration, the President and Chief Executive Officer of Amtrak, the Director of the Office of Management and Budget, and interested congressional committees. We will make copies available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or at guerrerop@gao.gov. Individuals making key contributions to this report are listed in appendix VI. To address our four objectives, we conducted structured interviews with officials from TSA, Amtrak, and DOT; representatives from the major transportation industry associations and state and local government associations; and select transportation security experts. We selected transportation security experts based on their knowledge and expertise and their reputation as experts in the transportation security arena. We also consulted with the National Academy of Sciences in identifying appropriate transportation security experts. Table 4 shows the federal agencies, industry associations, transportation security experts, and state and local government associations that were interviewed. Through these structured interviews, we collected information on the challenges that exist in securing the transportation system; vulnerabilities of different modes; actions that transportation stakeholders—including the federal, state, and local governments and the operators—have taken to enhance security since September 11; TSA’s and DOT’s ongoing and planned security efforts; roles and responsibilities of TSA and DOT in securing the transportation system; and future security actions that industry associations and security experts believe are needed. We synthesized and analyzed the information from the structured interviews. In addition to the structured interviews, we analyzed the administration’s National Strategy for Homeland Security and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets and the Federal Bureau of Investigation’s The Terrorist Threat to the U.S. Homeland: An FBI Assessment.
We also reviewed current transportation security-related research as well as transportation security-related reports and documents from TSA, Amtrak, and DOT, including strategic planning documents, memorandums, program descriptions, and budget and financial documents. We also analyzed security-related documents from industry associations, including action plans, operational information, and reports, as well as the U.S. Code and the Code of Federal Regulations. We also incorporated the findings of previous GAO reports on port, transit, aviation, and homeland security. We conducted our work from February 2003 through May 2003 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Transportation letter dated June 10, 2003. 1. We agree that the title of the report should be changed. Our conclusions and recommendation call for the roles and responsibilities of TSA and DOT in security matters to be clearly delineated and communicated to all transportation security stakeholders. To more fully capture our conclusions and recommendations, we have changed the report’s title to Transportation Security: Federal Action Needed to Help Address Security Challenges. However, we disagree that our recommendation advances an “overly simplistic conclusion that ‘more Federal coordination’ is somehow a meaningful problem or a key to meeting transportation security challenges.” Although coordination does not solve all security challenges, it is a key element in meeting transportation security challenges. As we have noted in previous reports, coordination among all levels of government and private industry is critical to the success of security efforts. The lack of coordination can lead to problems such as duplication and/or conflicting efforts, gaps in preparedness, and confusion. Moreover, the lack of coordination can strain intergovernmental relationships, drain resources, and raise the potential for problems in responding to terrorism. The administration’s National Strategy for Homeland Security and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets also emphasize the importance of and need for coordination in security efforts. In particular, the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets notes that protecting critical infrastructure, such as the transportation system, “requires a unifying organization, a clear purpose, a common understanding of roles and responsibilities, accountability, and a set of well-understood coordinating processes.” (Italics added for emphasis.) 2. We disagree that the commitment of TSA and DOT to broad and routine consultations through numerous formal and informal mechanisms is working. As we noted throughout the report, representatives from several associations told us that they have received conflicting messages from the federal agencies involved in transportation security. Representatives from several associations also stated that their members were unclear as to which agency to contact for their various security concerns and which agency has oversight for certain issues. Moreover, there appears to be a breakdown in communication between TSA and DOT about current security initiatives.
For example, although TSA officials stated that they have informed DOT about their plans to issue security standards, some DOT officials we met with were unsure as to whether TSA was issuing standards, what the standards would entail, or the time frames for issuing the standards. 3. We do not believe the correspondence exchanged by Secretary Mineta and Admiral Loy adequately defines the roles and responsibilities of TSA and DOT in security issues. Rather than delineate the roles and responsibilities of each entity in security matters, such as determining funding priorities and interfacing with stakeholders, the correspondence primarily commits each entity to continued coordination and collaboration on security measures. In the correspondence, the Secretary and Administrator also agreed to use the memorandum of agreement between TSA and the Federal Aviation Administration (FAA) as a framework for their interactions on security matters for all other modes. Given the complexities and unique challenges in securing the different modes of transportation, we do not believe using the memorandum of agreement between TSA and FAA as a framework is sufficient. The lack of clearly defined roles and responsibilities can lead to duplication, confusion, conflicts, and, most importantly, gaps in preparedness. Although designating a DOT liaison to TSA is a step in the right direction, the roles and responsibilities of each entity and the coordinating processes need to be documented. Departures of key individuals within each entity, such as the designated DOT liaison to TSA, have the potential to erode informal networks. Given the importance of security efforts, coordinating processes between TSA and DOT need to be documented so that they span the terms of various administrations and individuals. 4. We agree that the Aviation and Transportation Security Act (ATSA) gave TSA primary responsibility for securing all modes of transportation. However, neither this act nor other legislation has defined TSA’s roles and responsibilities in securing all modes of transportation. Specifically, ATSA does not specify TSA’s roles and responsibilities in securing the maritime and land transportation modes in detail as it does for aviation security. The act also did not eliminate the DOT modal administrations’ existing statutory responsibilities for securing the different modes of transportation. Moreover, recent legislation clarifies that DOT still has transportation security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. To clarify and define DOT’s and TSA’s roles and responsibilities in transportation security, we believe that these entities should establish a mechanism, such as a memorandum of agreement. Using such a mechanism would serve to clarify, delineate, and document the roles and responsibilities of each entity. It would also serve to hold each entity accountable for its transportation security responsibilities. Finally, it could serve as a vehicle to communicate the roles and responsibilities of each entity to transportation security stakeholders. The mechanism—whether it is a memorandum of agreement or other document—used to clarify and define DOT’s and TSA’s roles and responsibilities should not be static.
Rather, it should be a living document that changes as each entity’s roles and responsibilities in transportation security matters evolve and events occur. 5. We disagree that all of DOT’s ongoing security efforts are non-policy-making activities. For example, the Research and Special Programs Administration issued regulations in March 2003 that require shippers and carriers of hazardous materials to develop and implement security plans and to include a security component in their employee training programs. While DOT’s role in security efforts may decrease in the future, it seems unlikely that DOT will be devoid of any security responsibilities. For example, as noted in the report, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. In addition, the Maritime Transportation Security Act of 2002 authorizes the Secretary of Transportation to train and certify maritime security professionals and establish a grant program to fund the implementation of Area Maritime Transportation Security Plans and facility security plans. Further, although the primary responsibility for securing the aviation system was transferred to TSA, FAA remains responsible for protecting the nation’s air traffic control system—both the physical security of its air traffic control facilities and its computer systems. Although DOT recognizes that DHS has the lead in transportation security matters, it could be difficult to distinguish DOT’s role in maintaining transportation operations and improving transportation service and safety from DHS’s role in securing the transportation system. Security is often intertwined with transportation operations and safety. For example, installing a fence around truck yards could be considered both a safety and a security measure. Further, security measures that restrict the flow of passengers or freight through the transportation system could have serious consequences for transportation operations. Because of these interactions and overlap, the roles and responsibilities of DOT and DHS in transportation safety and security can be blurred. Consequently, we continue to believe the entities should establish a mechanism to help clarify and delineate their roles and responsibilities in security matters. The following are GAO’s comments on the Department of Homeland Security letter dated June 11, 2003. 1. We disagree that the report overstates the lack of coordination between DHS and DOT and that mechanisms to ensure coordination of responsibilities are unnecessary. Although DHS and DOT report that they are coordinating on security matters, based on our discussions with representatives from state and local government and industry associations, it appears that there is a need to improve such efforts. As we noted throughout the report, representatives from several associations told us that they have received conflicting messages from the federal agencies involved in transportation security. Representatives from several associations also stated that their members were unclear as to which agency to contact for their various security concerns and which agency has oversight for certain issues. Moreover, there appears to be a breakdown in communication between TSA and DOT about current security initiatives.
For example, although TSA officials stated that they have informed DOT about their plans to issue security standards, some DOT officials we met with were unsure as to whether TSA was issuing standards, what the standards would entail, or the time frames for issuing the standards. We agree that the Aviation and Transportation Security Act (ATSA) gave TSA primary responsibility for securing all modes of transportation. However, neither this act nor other legislation has defined TSA’s roles and responsibilities in securing all modes of transportation. Specifically, ATSA does not specify TSA’s roles and responsibilities in securing the maritime and land transportation modes in detail as it does for aviation security. The act also did not eliminate the DOT modal administrations’ existing statutory responsibilities for securing the different modes of transportation. Moreover, recent legislation clarifies that DOT still has transportation security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. To clarify and define DOT’s and TSA’s roles and responsibilities in transportation security, we believe that these entities should establish a mechanism, such as a memorandum of agreement. Using such a mechanism would serve to clarify, delineate, and document the roles and responsibilities of each entity. It would also serve to hold each entity accountable for its transportation security responsibilities. Finally, it could serve as a vehicle to communicate the roles and responsibilities of each entity to transportation security stakeholders. The mechanism—whether it is a memorandum of agreement or other document—used to clarify and define DOT’s and TSA’s roles and responsibilities should not be static. Rather, it should be a living document that changes as each entity’s roles and responsibilities in transportation security matters evolve and events occur. 2. We disagree that the report suggests that the continuation of security efforts by the DOT modal administrations represents a lack of coordination. The report credits TSA for meeting a number of aviation security deadlines during its first year of existence and highlights the efforts of DOT modal administrations and other federal agencies to improve the security of all modes since September 11. We also note that TSA is beginning to assert a greater role in securing all modes of transportation and DOT modal administrations are continuing or launching new security efforts. We did not suggest that the continuation of such efforts by DOT modal administrations represents a lack of coordination. Rather, we noted that as both entities move forward with security efforts, it is increasingly important that the roles of TSA and the DOT modal administrations are clearly defined. The lack of clearly defined roles and responsibilities can lead to duplication, confusion, conflicts, and, most importantly, gaps in preparedness. 3. Our intention is not to suggest that the federal government’s efforts to secure the non-aviation modes of transportation have been insufficient. To the contrary, we highlight the efforts by DOT modal administrations and other federal agencies to secure the maritime and land modes of transportation. We also recognize that TSA’s aviation security focus during its first year of existence was primarily due to the ATSA deadlines. 4.
We agree that the newly created DHS brings a number of agencies responsible for transportation security under one roof, which could ultimately improve coordination and streamline and strengthen security efforts. However, this does not solve all the potential coordination problems we highlight in the report because important transportation stakeholders—specifically, the DOT modal administrations—are housed in another department. Because both DHS agencies and DOT modal administrations are moving forward with transportation security initiatives, it is critical that the roles and responsibilities of each entity are clearly delineated and communicated to all stakeholders and that they coordinate their security efforts. The lack of such clarification, communication, and coordination could create problems, such as duplication of efforts and gaps in preparedness. In addition to those named above, Steven Calvo, Nikki Clowers, Michelle Dresben, Glenn Dubin, Scott Farrow, Libby Halperin, David Hooper, Hiroshi Ishikawa, Ray Sendejas, and Glen Trochelman made key contributions to this report. Transportation Security Research: Coordination Needed in Selecting and Implementing Infrastructure Vulnerability Assessments, GAO-03-502 (Washington, D.C.: May 1, 2003). Coast Guard: Challenges during the Transition to the Department of Homeland Security, GAO-03-594T (Washington, D.C.: April 1, 2003). Transportation Security: Post-September 11th Initiatives and Long-Term Challenges, GAO-03-616T (Washington, D.C.: April 1, 2003). Aviation Security: Measures Needed to Improve Security of Pilot Certification Process, GAO-03-248NI (Washington, D.C.: February 3, 2003). (Not for Public Dissemination) Major Management Challenges and Program Risks: Department of Transportation, GAO-03-108 (Washington, D.C.: January 1, 2003). High Risk Series: Protecting Information Systems Supporting the Federal Government and the Nation’s Critical Infrastructure, GAO-03-121 (Washington, D.C.: January 1, 2003). Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach, GAO-03-22 (Washington, D.C.: January 10, 2003). Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System, GAO-03-344 (Washington, D.C.: December 20, 2002). Mass Transit: Federal Action Could Help Transit Agencies Address Security Challenges, GAO-03-263 (Washington, D.C.: December 13, 2002). Aviation Security: Registered Traveler Program Policy and Implementation Issues, GAO-03-253 (Washington, D.C.: November 22, 2002). Computer Security: Progress Made, But Critical Federal Operations and Assets Remain at Risk, GAO-03-303T (Washington, D.C.: November 19, 2002). Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges, GAO-03-297T (Washington, D.C.: November 18, 2002). Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions, GAO-03-155 (Washington, D.C.: November 12, 2002). Mass Transit: Challenges in Securing Transit Systems, GAO-02-1075T (Washington, D.C.: September 18, 2002). Pipeline Safety and Security: Improved Workforce Planning and Communication Needed, GAO-02-785 (Washington, D.C.: August 26, 2002). Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful, GAO-02-993T (Washington, D.C.: August 5, 2002). Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges, GAO-02-971T (Washington, D.C.: July 25, 2002).
Critical Infrastructure Protection: Significant Challenges Need to Be Addressed, GAO-02-961T (Washington, D.C.: July 24, 2002). Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments Through Domestic Seaports, GAO-02-955TNI (Washington, D.C.: July 23, 2002). (Not for Public Dissemination) Information Concerning the Arming of Commercial Pilots, GAO-02-822R (Washington, D.C.: June 28, 2002). Aviation Security: Deployment and Capabilities of Explosive Detection Equipment, GAO-02-713C (Washington, D.C.: June 20, 2002). (Classified) Coast Guard: Budget and Management Challenges for 2003 and Beyond, GAO-02-538T (Washington, D.C.: March 19, 2002). Aviation Security: Information on Vulnerabilities in the Nation’s Air Transportation System, GAO-01-1164T (Washington, D.C.: September 26, 2001). (Not for Public Dissemination) Aviation Security: Information on the Nation’s Air Transportation System Vulnerabilities, GAO-01-1174T (Washington, D.C.: September 26, 2001). (Not for Public Dissemination) Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations, GAO-01-1171T (Washington, D.C.: September 25, 2001). Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities, GAO-01-1165T (Washington, D.C.: September 21, 2001). Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security, GAO-01-1166T (Washington, D.C.: September 20, 2001). Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports, GAO-01-1162T (Washington, D.C.: September 20, 2001). Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues, GAO-03-715T (Washington, D.C.: May 8, 2003). Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture, GAO-03-190 (Washington, D.C.: January 17, 2003). Homeland Security: Management Challenges Facing Federal Leadership, GAO-03-260 (Washington, D.C.: December 20, 2002). Homeland Security: Information Technology Funding and Associated Management Issues, GAO-03-250 (Washington, D.C.: December 13, 2002). Homeland Security: Information Sharing Activities Face Continued Management Challenges, GAO-02-1122T (Washington, D.C.: October 1, 2002). National Preparedness: Technology and Information Sharing Challenges, GAO-02-1048R (Washington, D.C.: August 30, 2002). Homeland Security: Effective Intergovernmental Coordination Is Key to Success, GAO-02-1013T (Washington, D.C.: August 23, 2002). Critical Infrastructure Protection: Federal Efforts Require a More Coordinated and Comprehensive Approach for Protecting Information Systems, GAO-02-474 (Washington, D.C.: July 15, 2002). Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed, GAO-02-918T (Washington, D.C.: July 9, 2002). Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success, GAO-02-901T (Washington, D.C.: July 3, 2002). Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting, GAO-02-893T (Washington, D.C.: June 28, 2002). National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy, GAO-02-811T (Washington, D.C.: June 7, 2002). Homeland Security: Responsibility and Accountability for Achieving National Goals, GAO-02-627T (Washington, D.C.: April 11, 2002).
National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts is Critical to an Effective National Strategy for Homeland Security, GAO-02-621T (Washington, D.C.: April 11, 2002). Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness, GAO-02-550T (Washington, D.C.: April 2, 2002). Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy, GAO-02-549T (Washington, D.C.: March 28, 2002). Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness, GAO-02-548T (Washington, D.C.: March 25, 2002). Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness, GAO-02-547T (Washington, D.C.: March 22, 2002). Homeland Security: Progress Made; More Direction and Partnership Sought, GAO-02-490T (Washington, D.C.: March 12, 2002). Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness, GAO-02-473T (Washington, D.C.: March 1, 2002). Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs, GAO-02-160T (Washington, D.C.: November 7, 2001). Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts, GAO-02-208T (Washington, D.C.: October 31, 2001). Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness, GAO-02-162T (Washington, D.C.: October 17, 2001). Information Sharing: Practices That Can Benefit Critical Infrastructure Protection, GAO-02-24 (Washington, D.C.: October 15, 2001). Homeland Security: Key Elements of a Risk Management Approach, GAO-02-150T (Washington, D.C.: October 12, 2001). Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed, GAO-01-667 (Washington, D.C.: September 28, 2001). Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately Controlled Systems from Computer-Based Attacks, GAO-01-1168T (Washington, D.C.: September 26, 2001). Homeland Security: A Framework for Addressing the Nation’s Efforts, GAO-01-1158T (Washington, D.C.: September 21, 2001). Combating Terrorism: Selected Challenges and Related Recommendations, GAO-01-822 (Washington, D.C.: September 20, 2001).
The economic well being of the U.S. is dependent on the expeditious flow of people and goods through the transportation system. The attacks on September 11, 2001, illustrate the threats and vulnerabilities of the transportation system. Prior to September 11, the Department of Transportation (DOT) had primary responsibility for the security of the transportation system. In the wake of September 11, Congress created the Transportation Security Administration (TSA) within DOT and gave it primary responsibility for the security of all modes of transportation. TSA was recently transferred to the new Department of Homeland Security (DHS). GAO was asked to examine the challenges in securing the transportation system and the federal role and actions in transportation security. Securing the nation's transportation system is fraught with challenges. The transportation system crisscrosses the nation and extends beyond our borders to move millions of passengers and tons of freight each day. The extensiveness of the system as well as the sheer volume of passengers and freight moved makes it both an attractive target and difficult to secure. Addressing the security concerns of the transportation system is further complicated by the number of transportation stakeholders that are involved in security decisions, including government agencies at the federal, state, and local levels, and thousands of private sector companies. Further exacerbating these challenges are the financial pressures confronting transportation stakeholders. For example, the sluggish economy has weakened the transportation industry's financial condition by decreasing ridership and revenues. The federal government has provided additional funding for transportation security since September 11, but demand has far outstripped the additional amounts made available. It will take a collective effort of all transportation stakeholders to meet existing and future transportation challenges. Since September 11, transportation stakeholders have acted to enhance security. At the federal level, TSA primarily focused on meeting aviation security deadlines during its first year of existence and DOT launched a variety of security initiatives to enhance the other modes of transportation. For example, the Federal Transit Administration provided grants for emergency drills and conducted security assessments at the largest transit agencies, among other things. TSA has recently focused more on the security of the maritime and land transportation modes and is planning to issue security standards for all modes of transportation starting this summer. DOT is also continuing their security efforts. However, the roles and responsibilities of TSA and DOT in securing the transportation system have not been clearly defined, which creates the potential for overlap, duplication, and confusion as both entities move forward with their security efforts.
The livestock and poultry industry is vital to our nation’s economy, supplying meat, milk, eggs, and other animal products. However, the past several decades have seen substantial changes in America’s animal production industries. As a result of domestic and export market forces, technological changes, and industry adaptations, food animal production that was integrated with crop production has given way to fewer, larger farms that raise animals in confined situations. These large-scale animal production facilities are generally referred to as animal feeding operations. CAFOs are a subset of animal feeding operations and generally operate on a much larger scale. Most agricultural activities are considered to be nonpoint sources of pollution because the pollution occurs in conjunction with soil erosion caused by water and surface runoff of rain or snowmelt from diffuse areas such as farms or rangeland. However, the Clean Water Act specifically designates point sources of pollution to include CAFOs, which means that under the act, CAFOs that discharge into federally regulated waters are required to obtain a National Pollutant Discharge Elimination System (NPDES) permit. These permits generally allow a point source to discharge specified pollutants into federally regulated waters under specific limits and conditions. EPA, or the states that EPA has authorized to administer the Clean Water Act, are responsible for issuing these permits. In accordance with the Clean Water Act’s designation of CAFOs as point sources, EPA defined which poultry and livestock facilities constituted a CAFO and established permitting requirements for CAFOs. According to EPA regulations, first issued in 1976, to be considered a CAFO, a facility must first be considered an animal feeding operation. Animal feeding operations are agricultural operations where the following conditions are met: animals are fed or maintained in a confined situation for a total of 45 days or more in any 12-month period, and crops, vegetation, forage growth, or post-harvest residues are not sustained during normal growing seasons over any portion of the lot. If an animal feeding operation met EPA’s criteria and met or exceeded minimum size thresholds based on the type of animal being raised, EPA considered the operation to be a CAFO. For example, an animal feeding operation would be considered a CAFO if it raised 1,000 or more beef cattle, 2,500 pigs weighing more than 55 pounds, or 125,000 chickens. In addition, EPA can designate an animal feeding operation of any size as a CAFO if it meets certain criteria, such as being a significant contributor of pollutants to federally regulated waters. In January 2003, we reported that although EPA believed that many animal feeding operations degrade water quality, it had placed little emphasis on its permit program and that exemptions in its regulations allowed as many as 60 percent of the largest operations to avoid obtaining permits. In its response to our 2003 report, EPA acknowledged that the CAFO program was hampered by outdated regulations. The agency subsequently revised its permitting regulations for CAFOs to eliminate the exemptions that allowed most animal feeding operations to avoid regulation. The revisions, issued in February 2003, also known as the 2003 CAFO rule, resulted, in part, from the settlement of a 1989 lawsuit by the Natural Resources Defense Council and Public Citizen. These groups alleged that EPA had failed to comply with the Clean Water Act.
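To make the size-based definition described above concrete, the following sketch applies the example thresholds cited in this statement (1,000 or more beef cattle; 2,500 or more pigs weighing more than 55 pounds; 125,000 or more chickens) to a hypothetical operation. The function names and structure are illustrative only; EPA’s regulations contain additional animal categories, medium-size CAFO provisions, and case-by-case designation authority that this sketch does not capture.

```python
# Illustrative sketch only: applies the example size thresholds quoted in this
# statement. It is not a substitute for EPA's regulatory definitions.

LARGE_CAFO_THRESHOLDS = {
    "beef_cattle": 1_000,        # 1,000 or more beef cattle
    "swine_over_55_lbs": 2_500,  # 2,500 or more pigs weighing more than 55 pounds
    "chickens": 125_000,         # 125,000 or more chickens
}

def is_animal_feeding_operation(days_confined: int, vegetation_sustained: bool) -> bool:
    """Animals confined 45 days or more in a 12-month period, with no crops or
    vegetation sustained on the lot during the normal growing season."""
    return days_confined >= 45 and not vegetation_sustained

def is_large_cafo(animal_type: str, head_count: int,
                  days_confined: int, vegetation_sustained: bool) -> bool:
    """True if the operation is an animal feeding operation that meets or
    exceeds the size threshold for its animal type."""
    if not is_animal_feeding_operation(days_confined, vegetation_sustained):
        return False
    threshold = LARGE_CAFO_THRESHOLDS.get(animal_type)
    return threshold is not None and head_count >= threshold

# Example: a confined operation raising 3,000 pigs weighing more than 55 pounds
print(is_large_cafo("swine_over_55_lbs", 3_000,
                    days_confined=365, vegetation_sustained=False))  # True
```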
EPA’s 2003 CAFO Rule included the following key provisions: Duty to apply. All CAFOs were required to apply for a permit under the Clean Water Act unless the permitting authority determined that the CAFO had no potential to discharge to federally regulated waters. Expanded CAFO definitions. All types of poultry operations, as well as all stand-alone operations raising immature animals, were included in the 2003 CAFO Rule. More stringent design standard for new facilities in the swine, poultry, and veal categories. The 2003 rule established a no-discharge standard for new facilities that could be met if they were designed, constructed, and operated to contain the runoff from a 100-year, 24-hour storm event. Best management practices. Operations were required to implement best management practices for applying manure to cropland and for animal production areas. Nutrient management plans. CAFO operations were required to develop a plan for managing the nutrient content of animal manure as well as the wastewater resulting from CAFO operations, such as water used to flush manure from barns. Compliance schedule. The 2003 rule required newly defined CAFOs to apply for permits by April 2006 and existing CAFOs to develop and implement nutrient management plans by December 31, 2006. According to EPA officials, the 2003 rule was expected to ultimately lead to better water quality because the revised regulations would extend coverage to more animal feeding operations that could potentially discharge and contaminate water bodies and subject these operations to periodic inspections. Three laws provide EPA with certain authorities related to air emissions from animal feeding operations, but, unlike the Clean Water Act, they do not specifically cite CAFOs as regulated entities. The Clean Air Act regulates any animal feeding operation, regardless of size, that exceeds established air emission thresholds for certain pollutants. For example, in certain specific situations, hydrogen sulfide, ammonia, or particulate matter may be regulated. In addition, Section 103 of CERCLA and Section 304 of EPCRA require owners or operators of a facility to report to federal, state, or local authorities when a “reportable quantity” of certain hazardous substances, such as hydrogen sulfide or ammonia, is released into the environment. Together, CERCLA’s and EPCRA’s reporting requirements provide government authorities, emergency management agencies, and citizens the ability to know about the source and magnitude of hazardous releases. EPA also works with USDA to address the impacts of animal feeding operations on air and water quality and human health. In 1998, EPA entered into a memorandum of understanding with USDA that calls for the agencies to coordinate on air quality issues related to agriculture and share information. In addition, in 1999, the two agencies issued a unified national strategy aimed at having the owners and operators of animal feeding operations take actions to minimize water pollution from confinement facilities and land application of manure. To help minimize water pollution from animal feeding operations and meet EPA’s regulatory requirements, USDA, through its Natural Resources Conservation Service, provides financial and technical service to CAFO operators in developing and implementing nutrient management plans. Because no federal agency collects accurate and consistent data on the number, size, and location of CAFOs, it is difficult to determine precise trends in CAFOs. 
According to USDA officials, the data USDA collects for large farms raising animals can be used as a proxy for estimating trends in CAFOs nationwide. Using these data, we determined the following: Between 1982 and 2002, the number of large farms raising animals increased from about 3,600 to almost 12,000, or by about 234 percent. Growth rates varied dramatically by animal type. For instance, broiler chicken farms showed the largest increase, almost 1,200 percent, followed by hog farms at more than 500 percent. In comparison, beef cattle farms grew by only 2 percent and layer chicken farms actually declined by 2 percent. The size of these farms also increased between 1982 and 2002. The layer and hog sectors had the largest increases in the median number of animals raised per farm, both growing by 37 percent between 1982 and 2002. In contrast, large farms that raised either broilers or turkeys increased only slightly in size, by 3 and 1 percent, respectively, from 1982 to 2002. The number of animals raised on large farms increased from over 257 million in 1982 to over 890 million in 2002—an increase of 246 percent. Moreover, most of the beef cattle, hogs, and layers raised in the United States in 2002 were raised on large farms. Specifically, 77 percent of beef cattle and 72 percent of both hogs and layers were raised on large farms. We also found that EPA does not systematically collect nationwide data to determine the number, size, and location of CAFOs that have been issued permits. Instead, since 2003, the agency has compiled quarterly estimates obtained from its regional offices or the states on the number and types of CAFOs that have been issued permits. However, these data are inconsistent and inaccurate and therefore do not provide EPA with the reliable data that it needs to identify permitted CAFOs nationwide. Without a systematic and coordinated process for collecting and maintaining accurate and complete information on the number, size, and location of CAFOs nationwide, EPA does not have the information it needs to effectively monitor and regulate these operations. In our report, we recommended that EPA develop a national inventory of permitted CAFOs and incorporate appropriate internal controls to ensure the quality of the data it collects. In response to our recommendation, EPA stated that it is currently working with its regional offices and states to develop and implement a new national data system to collect and record facility-specific information on permitted CAFOs. The amount of manure a large farm that raises animals can generate depends primarily on the types and numbers of animals raised on that farm, but it can range from over 2,800 tons to more than 1.6 million tons a year. To further put this in perspective, the amount of manure produced by large farms that raise animals can exceed the amount of sanitary waste produced by some large U.S. cities. For example: A dairy farm meeting EPA’s large CAFO threshold of 700 dairy cows can create about 17,800 tons of manure annually, which is more than the roughly 16,000 tons of sanitary waste generated per year by the almost 24,000 residents of Lake Tahoe, California. A large farm with 800,000 hogs could produce over 1.6 million tons of manure per year, which is more than one and a half times the annual sanitary waste produced by the city of Philadelphia, Pennsylvania—about 1 million tons—with a population of almost 1.5 million.
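The growth rates and waste comparisons above follow from simple arithmetic on the figures quoted in this statement. The sketch below reproduces those calculations as an illustrative check; it uses only the numbers cited here and is not a substitute for USDA’s or EPA’s underlying estimates.

```python
# Back-of-envelope checks using only figures quoted in this statement.

def percent_increase(old: float, new: float) -> float:
    """Percentage increase from an old value to a new value."""
    return (new - old) / old * 100

# Growth in large farms raising animals and in animals raised on them, 1982-2002
print(f"Farms: {percent_increase(3_600, 12_000):.0f}% increase")               # ~233%, consistent with the roughly 234 percent cited
print(f"Animals: {percent_increase(257_000_000, 890_000_000):.0f}% increase")  # ~246%

# Implied per-animal manure production from the examples above
dairy_tons_per_cow = 17_800 / 700        # ~25 tons per cow per year
hog_tons_per_hog = 1_600_000 / 800_000   # 2 tons per hog per year
print(f"Dairy: {dairy_tons_per_cow:.1f} tons/cow/year; hogs: {hog_tons_per_hog:.1f} tons/hog/year")

# Manure volumes relative to the cited municipal sanitary waste figures
print(f"700-cow dairy vs. Lake Tahoe: {17_800 / 16_000:.1f}x")                 # ~1.1x
print(f"800,000-hog farm vs. Philadelphia: {1_600_000 / 1_000_000:.1f}x")      # 1.6x, i.e., more than one and a half times
```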
Although manure is considered a valuable commodity, especially in states with large amounts of farmland, such as Iowa, where it is used as fertilizer for field crops, in some parts of the country large farms that raise animals are clustered in a few contiguous counties. Because this collocation can result in the separation of animal production from crop production, there is less cropland on which manure can be applied as a fertilizer. A USDA report identified this concern as early as 2000, when it found that between 1982 and 1997, as livestock production became more spatially concentrated, crops were not fully using the nutrients in the manure applied to cropland, and the excess nutrients could result in ground and surface water pollution. According to the report, the number of counties in which farms produced more manure nutrients, primarily nitrogen and phosphorus, than could be applied to the land without accumulating nutrients in the soil had increased. As a result, the potential for runoff and leaching of these nutrients from the soil was high, and water quality could be impaired. Agricultural experts and government officials whom we spoke with during our review echoed the findings of USDA’s report and provided several examples of more recent clustering trends that have resulted in degraded water quality. For example, according to North Carolina agricultural experts, excessive manure production from CAFOs in five contiguous counties has contributed to the contamination of some of the surface and well water in these counties and the surrounding areas. USDA officials acknowledge that regional clustering of large animal feeding operations has occurred, but they told us that they believe producers’ implementation of nutrient management plans and use of new technologies, such as calibrated manure spreaders and improved animal feeds, have resulted in animal feeding operations more effectively using the manure being generated and reducing the likelihood that pollutants from manure are entering ground and surface water. However, USDA could not provide us with information on the extent to which these techniques are being used or their effectiveness in reducing water pollution from animal waste. Since 2002, at least 68 government-sponsored or peer-reviewed studies have been completed on air and water pollutants from animal feeding operations. Of these 68 studies, 15 directly linked pollutants from animal waste generated by animal feeding operations to specific health or environmental impacts. Eight of these 15 studies were water quality studies and 7 were air emissions studies. Academic experts and industry and EPA officials told us that only a few studies directly link CAFOs with health or environmental impacts because the same pollutants that CAFOs discharge also often come from other sources, including smaller livestock operations; row crops using commercial fertilizers; and wastes from humans, municipalities, or wildlife, making it difficult to distinguish the actual source of the pollution. Seven of the 68 studies found no impacts on human health or the environment from pollutants emitted by CAFOs. Four of these 7 studies were water quality studies and 3 were air emissions studies.
According to EPA and academic experts we spoke with, the concentrations of air and water pollutants discharged by animal feeding operations can vary for numerous reasons, including the type of animal being raised, the feed being used, and the manure management system being employed, as well as the climate and the time of day when the emissions occur. Twelve of the studies made indirect linkages between air and water pollutants and health and environmental impacts. While these studies found that animal feeding operations were the likely cause of human health or environmental impacts occurring in areas near the operations, they could not conclusively link waste from animal feeding operations to the impacts, often because other sources of pollutants could also be contributing. Thirty-four of the studies focused on measuring the amounts of water or air pollutants discharged by animal feeding operations that are known to cause human health or environmental impacts at certain concentrations. Of the 34 studies, 19 focused on water pollutants and another 15 focused on measuring air emissions from animal feeding operations. While EPA recognizes the potential impacts that water and air pollutants from animal feeding operations can have on human health and the environment, it lacks the data necessary to assess how widespread the impacts are and has limited plans to collect the data that it needs. For example, with regard to water quality, EPA officials acknowledged that the potential human health and environmental impacts of some CAFO water pollutants, such as nitrogen, phosphorus, and pathogens, are well known. However, they also stated that EPA does not have data on the number and location of CAFOs nationwide and the amount of discharges from these operations. Without this information and data on how pollutant concentrations vary by type of operation, it is difficult to estimate the actual discharges occurring and to assess the extent to which CAFOs may be contributing to water pollution. Although EPA has recently taken some steps that may help provide some of these data, agency officials told us that EPA currently has no plans to conduct a national study to collect information on CAFO water pollutant discharges because of a lack of resources. Similarly, with regard to air quality, EPA has more recently recognized concerns about the possible health and environmental impacts from air emissions produced by animal feeding operations. In this regard, prompted in part by public concern, EPA and USDA commissioned a 2003 study by the National Academy of Sciences (NAS) to evaluate the scientific information needed to support the regulation of air emissions from animal feeding operations. The NAS report identified several air pollutants from animal feeding operations, such as ammonia and hydrogen sulfide, that can impair human health. The NAS report also concluded that in order to determine the human health and environmental effects of air emissions from animal feeding operations, EPA and USDA would first need to obtain accurate estimates of emissions and their concentrations from animal feeding operations with varying characteristics, such as animal type, animal feed, manure management techniques, and climate. In 2007, the 2-year National Air Emissions Monitoring Study was initiated to collect data on air emissions from animal feeding operations as part of a series of consent agreements that EPA entered into with individual CAFOs.
This study, funded by industry and approved by EPA, is intended to help the agency determine how to measure and quantify air emissions from animal feeding operations. The data collected will in turn be used to estimate air emissions from animal feeding operations with varying characteristics. According to agency officials, until EPA can determine the actual level of air pollutants being emitted by CAFOs, it will be unable to assess the extent to which these emissions are affecting human health and the environment. The National Air Emissions Monitoring Study is intended to provide a scientific basis for estimating air emissions from animal feeding operations and to help EPA develop protocols that will allow it to determine which operations do not comply with applicable federal laws. According to EPA, although it has the authority to require animal feeding operations to monitor their emissions and come into compliance with the Clean Air Act on a case-by-case basis, this approach has proven to be time and labor intensive. As an alternative to the case-by-case approach, in January 2005, EPA offered animal feeding operations an opportunity to sign a voluntary consent agreement and final order, known as the Air Compliance Agreement. Almost 13,900 animal feeding operations were approved for participation in the agreement, representing the egg, broiler chicken, dairy, and swine industries. Some turkey operations volunteered but were not approved because there were too few operations to fund a monitoring site, and the beef cattle industry chose not to participate. In return for participating in this agreement and meeting certain requirements, EPA agreed not to sue participating animal feeding operations for certain past violations or violations occurring during the National Air Emissions Monitoring Study. Although EPA told us that the National Air Emissions Monitoring Study is the first step in developing comprehensive protocols for quantifying air emissions from animal feeding operations, we found that the study may not provide EPA with the data that it needs for the following three reasons. The monitoring study may not be representative of the vast majority of participating animal feeding operations and will not account for differences in climatic conditions, manure-handling methods, and density of operations because it does not include the 16 combinations of animal types and geographic regional pairings recommended by EPA’s expert panel. EPA approved only 12 of the 16 recommended combinations, excluding southeastern broiler, eastern layer, midwestern turkey, and southern dairy operations. Selection of monitoring sites has been a concern since the selection plan was announced in 2005. At that time, many agricultural experts, environmental groups, and industry and state officials disagreed with the site selection methodology. They stated that the study did not include a sufficient number of monitoring sites to establish a statistically valid sample. Without such a sample, we believe that EPA will not be able to accurately estimate emissions for all types of operations. More recently, in June 2008, the state of Utah reached an agreement with EPA to separately study animal feeding operations in the state because of the state’s continuing concerns that the National Air Emissions Monitoring Study will not collect information on emissions from operations in Rocky Mountain states and therefore may not be meaningful for those operations that raise animals in arid areas. 
Agricultural experts also have raised concerns that the National Air Emissions Monitoring Study does not include other sources that can contribute significantly to emissions from animal feeding operations. For example, the monitoring study will not capture data on ammonia emissions from feedlots and manure applied to fields. According to these experts, feedlots and manure on fields, as well as other excluded sources account for approximately half of the total ammonia emissions emitted by animal feeding operations. Furthermore, USDA’s Agriculture Air Quality Task Force has recently raised concerns about the quantity and quality of the data being collected during the early phases of the study and how EPA will eventually use the information. In particular, the task force expressed concern that the technologies used to collect emissions data were not functioning reliably. At its May 2008 task force meeting, the members requested that the Secretary of Agriculture ask EPA to review the first 6 months of the study’s data to determine if the study needs to be revised in order to yield more useful information. EPA acknowledged that emissions data should be collected for every type of animal feeding operation and practice, but EPA officials stated that such an extensive study is impractical. Furthermore, they stated that the selected sites provide a reasonable representation of the various animal sectors. EPA has also indicated that it plans to use other relevant information to supplement the study data and has identified some potential additional data sources. However, according to agricultural experts, until EPA identifies all the supplemental data that it plans to use, it is not clear if these data, together with the emissions study data, will enable EPA to develop comprehensive air emissions protocols. EPA has also indicated that completing the National Air Emissions Monitoring Study is only the first part of a multiyear effort to develop a process-based model for predicting overall emissions from animal feeding operations. A process-based model would capture emissions data from all sources and use these data to assess the interaction of all sources and the impact that different manure management techniques have on air emissions for the entire operation. For example, technologies are available to decrease emissions from manure lagoons by, among other things, covering the lagoon to capture the ammonia. However, if an operation spreads the lagoon liquid as fertilizer for crops, ammonia emissions could increase on the field. According to NAS, a process-based model is needed to provide scientifically sound estimates of air emissions from animal feeding operations that can be used to develop management and regulatory programs. Although EPA plans to develop a process-based model after 2011, it has not yet established a timetable for completing this model and, therefore, it is uncertain when EPA will have more sophisticated approaches that will more accurately estimate emissions from animal feeding operations. Moreover, two recent EPA decisions suggest that the agency has not yet determined how it intends to regulate air emissions from animal feeding operations. Specifically: In December 2007, EPA proposed exempting releases to the air of hazardous substances from manure at farms that meet or exceed the reportable quantities from both CERCLA and EPCRA notification requirements. 
According to EPA, this decision was in part a response to language in congressional committee reports related to EPA's appropriations legislation for 2005 and 2006 that directed the agency to promptly and expeditiously provide clarification on the application of these laws to poultry, livestock, and dairy operations. In addition, the agency received a petition from several poultry industry organizations seeking an exemption from the CERCLA and EPCRA reporting requirements for ammonia emissions from poultry operations on the grounds that ammonia emissions from poultry operations pose little or no risk to public health and that emergency response is inappropriate. In proposing the exemption, EPA noted that the agency would not respond to releases from animal wastes under CERCLA or EPCRA, nor would it expect state and local governments to respond to such releases, because the source and nature of these releases are such that emergency response is unnecessary, impractical, and unlikely. It also noted that it had received 26 comment letters from state and local emergency response agencies supporting the exemption for ammonia from poultry operations. However, during the public comment period ending on March 27, 2008, a national association representing state and local emergency responders with EPCRA responsibilities questioned whether EPA had the authority to exempt these operations until it had data from its monitoring study to demonstrate actual levels of emissions from animal feeding operations. This national association further commented that EPA should withdraw the proposal because it denied responders and the public the information necessary to protect themselves from dangerous releases. Furthermore, the proposal appears to depart from EPA's past regulatory enforcement actions, in which the agency has included charges of failing to comply with the release reporting requirements when bringing claims against producers for violating several environmental laws, and it is also contrary to one of the stated goals of the Air Compliance Agreement. We believe that the timing of this proposed exemption, before the National Air Emissions Monitoring Study has been completed, calls into question the basis for EPA's decision. EPA has also recently stated that it will not make key regulatory decisions on how certain federal air regulations apply to animal feeding operations until after 2011, when the National Air Emissions Monitoring Study is completed. For example, according to EPA, the agency will not issue guidance for several more years defining the scope of the term "source" as it relates to animal agriculture and farm activities. According to EPA, it has not yet decided whether it will aggregate the emissions occurring on an animal feeding operation as one source or whether the emissions from the barns, lagoons, feed storage, and fields will each be considered a separate source when determining if an operation has exceeded air emissions reportable quantities. Depending on the approach EPA takes, how emissions are calculated could differ significantly. For example, according to preliminary data EPA has received from an egg-laying operation in Indiana, individual chicken barns may exceed the CERCLA reportable quantities for ammonia. Moreover, if emissions from all of the barns on the operation are aggregated, they might be more than 500 times the CERCLA reportable quantities.
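The source-definition question described above is, at bottom, an aggregation calculation: reportable quantities can be compared against each barn, lagoon, or field separately, or against the operation's combined emissions. The sketch below illustrates how much the two approaches can differ. It is only an illustration, not EPA's methodology; the per-barn emission rates are hypothetical, and the 100-pound daily reportable quantity for ammonia is used here as an assumed threshold.

```python
# Illustrative sketch only -- not EPA's methodology.
# Per-barn emission rates are hypothetical; the 100-lb/day ammonia reportable
# quantity is an assumed threshold for this example.
AMMONIA_RQ_LBS_PER_DAY = 100

barn_emissions_lbs_per_day = [140, 155, 130, 160, 150]  # hypothetical barns

# Approach 1: each barn treated as a separate source.
barns_over_rq = [e for e in barn_emissions_lbs_per_day if e > AMMONIA_RQ_LBS_PER_DAY]

# Approach 2: the whole operation treated as a single aggregated source.
aggregated = sum(barn_emissions_lbs_per_day)

print(f"Barns individually over the RQ: {len(barns_over_rq)} of {len(barn_emissions_lbs_per_day)}")
print(f"Aggregated emissions: {aggregated} lbs/day, or "
      f"{aggregated / AMMONIA_RQ_LBS_PER_DAY:.1f} times the assumed RQ")
```

Under either approach this hypothetical operation would exceed the assumed threshold, but the aggregated figure is several times larger, which is why the definition of "source" matters for determining how far an operation exceeds reportable quantities.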
To address the various concerns that we identified with the ongoing air emission monitoring study, we recommended that EPA (1) reassess the study to ensure that it will provide valid data which the agency can use to develop air emissions protocols and (2) provide stakeholders with information on the additional data that it plans to use to supplement the study. In addition, we recommended that EPA establish a strategy and timetable for developing a process-based model that will provide more sophisticated air emissions estimating methodologies for animal feeding operations. EPA responded that it has developed a quality assurance plan for the study but did not address other issues that we identified in our report, such as the validity of the study’s sample and the omission of other sources that can contribute significantly to the air emission from animal feeding operations. Furthermore, although EPA concurred with the need to identify supplemental data and establish a strategy and timetable for developing a process-based model and described actions that it has underway, the agency provided no indication of when it will complete its plans to either identify the data it will use to augment the monitoring study or develop a process-based model. Two federal court decisions—Waterkeeper Alliance Inc. v. EPA and Rapanos v. United States—have affected EPA and some states’ abilities to regulate CAFOs for water pollutants. In its 2005 Waterkeeper decision, the U.S. Court of Appeals for the Second Circuit set aside a key provision of EPA’s 2003 CAFO rule requiring every CAFO to apply for a permit. Under the 2003 rule, large numbers of previously unregulated CAFOs were required to apply for permits and would have been subject to monitoring and reporting requirements imposed by the permit as well as periodic inspections. According to EPA, the 2003 rule would have expanded the number of regulated CAFOs from an estimated 12,500 to an estimated 15,300, an increase of about 22 percent, and would have provided EPA with more comprehensive information on the number and location of CAFOs, enabling the agency to more effectively locate and inspect these operations nationwide. However, in 2003, both environmental and agricultural groups challenged EPA’s 2003 rule. The court agreed with the environmental groups’ arguments that, among other things, EPA’s 2003 rule did not adequately provide for public review and comment on a CAFO’s nutrient management plan and instructed EPA to revise the rule accordingly. The court also agreed with the agricultural groups’ arguments that EPA had exceeded its authority under the Clean Water Act by requiring CAFOs that were not discharging pollutants into federally regulated water to apply for permits or demonstrate that they had no potential to discharge and therefore set aside the rule’s permitting requirements for those CAFOs that did not discharge. The Waterkeeper decision, in effect, returned EPA’s permitting program to one in which CAFO operators are not required to apply for a NPDES permit unless they discharge, or propose discharging, into federally regulated waters. As a result, EPA must identify and prove that an operation has discharged or is discharging pollutants in order to require the operator to apply for a permit. 
To help identify unpermitted discharges from CAFOs, EPA officials told us that they have to rely on other methods that are not necessarily all-inclusive, such as citizens' complaints, drive-by observations, aerial flyovers, and state water quality assessments that identify water bodies impaired by pollutants associated with CAFOs. According to EPA officials, these methods have helped the agency identify some CAFOs that may be discharging and target inspections to those CAFOs. As a result of the Waterkeeper decision, EPA proposed a new rule in June 2006 requiring that (1) only CAFO operators that discharge, or propose to discharge, apply for a permit, (2) permitting authorities review CAFO nutrient management plans and incorporate the terms of these plans into the permits, and (3) permitting authorities provide the public with an opportunity to review and comment on the nutrient management plans. According to EPA officials, the final rule is currently being reviewed by the Office of Management and Budget, but at the time we issued our report, these officials were uncertain when this review would be completed and the final rule issued. State water pollution control officials have expressed some concerns that EPA's new 2006 rule will place a greater administrative burden on states than the 2003 rule would have. In an August 2006 letter to EPA, the Association of State and Interstate Water Pollution Control Administrators noted that the "reactive" enforcement that EPA will now follow will require permitting authorities to significantly increase their enforcement efforts to achieve the level of environmental benefit that would have been provided by the 2003 rule. These officials believe that requiring EPA and the states to identify CAFOs that actually discharge pollutants into federally regulated water bodies will consume more resources than requiring all CAFOs to apply for a permit. Moreover, although the Waterkeeper decision has affected EPA's ability to regulate CAFOs' water pollutant discharges, state officials we contacted indicated that this decision has not had the same impact on their ability to regulate these operations. As table 1 shows, the effects of the Waterkeeper decision have ranged from little impact on state regulation to impairment of state CAFO programs. Although the Rapanos case arose in the context of a different permit program, the scope of EPA's pollutant discharge program originates in the same Clean Water Act definition that was at issue in the case. As a result, the decision has complicated the agency's enforcement of CAFO regulations. According to EPA enforcement officials, the agency will now be less likely to seek enforcement against a CAFO that it believes is discharging pollutants into a water body because it may be more difficult to prove that the water body is federally regulated. According to EPA officials, as a result of the Rapanos decision, EPA must spend more resources developing an enforcement case because the agency must gather proof not only that the CAFO has illegally discharged pollutants but also that those pollutants have entered federally regulated waters.
The difficulties EPA has experienced were highlighted in a March 4, 2008, memorandum in which EPA's Assistant Administrator for Enforcement and Compliance Assurance stated that the Rapanos decision, together with national guidance issued by EPA to ensure "nationwide consistency, reliability, and predictability in their administration of the statute" in light of the Supreme Court's decision, has resulted in significant adverse impacts to the clean water enforcement program. According to the memorandum, the Rapanos decision and guidance negatively affected approximately 500 enforcement cases, including as many as 187 cases involving NPDES permits. In conclusion, Mr. Chairman, EPA has regulated CAFOs under the Clean Water Act for more than 30 years, and during this time it has amassed a significant body of knowledge about the pollutants discharged by animal feeding operations and the potential impacts of these pollutants on human health and the environment. Nevertheless, EPA still lacks comprehensive and reliable data on the number, location, and size of the operations that have been issued permits and the amounts of discharges they release. As a result, EPA has neither the information it needs to assess the extent to which CAFOs may be contributing to water pollution, nor the information it needs to ensure compliance with the Clean Water Act. More recently, EPA has also begun to address concerns about air pollutants that are emitted by animal feeding operations. The nationwide air emissions monitoring study and EPA's plans to develop air emissions estimating protocols are important steps in providing much-needed information on the amount of air pollutants emitted from animal feeding operations. However, questions about the sufficiency of the sites selected for the air emissions study and the quantity and quality of the data being collected could undermine EPA's efforts to develop air emissions protocols by 2011 as planned. A process-based model that more accurately predicts the total air emissions from an animal feeding operation is still needed. While EPA has indicated it intends to develop such a model, it has not yet established a strategy and timeline for this activity. Mr. Chairman, this concludes my prepared testimony. I would be happy to respond to questions that you or Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Anu Mittal, Director, Natural Resources and Environment, (202) 512-3841 or mittala@gao.gov. Key contributors to this testimony were Sherry McDonald, Assistant Director; Kevin Bray; Paul Hobart; Holly Sasso; Carol Herrnstadt Shulman; James Turkett; and Greg Wilmoth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Concentrated animal feeding operations (CAFO) are large livestock and poultry operations that raise animals in a confined situation. CAFOs may improve the efficiency of animal production, but the large amounts of manure they produce can, if improperly managed, degrade air and water quality.
The Environmental Protection Agency (EPA) regulates CAFOs and requires CAFOs that discharge certain pollutants to obtain a permit. This testimony summarizes the findings of a September 4, 2008 GAO report (GAO-08-944) on (1) trends in CAFOs, (2) amounts of waste they generate, (3) findings of key research on CAFOs' health and environmental impacts, (4) progress made in developing CAFO air emissions protocols, and (5) the effect of recent court decisions on EPA's regulation of CAFO water pollutants. GAO analyzed U.S. Department of Agriculture's (USDA) data from 1982 through 2002 for large farms as a proxy for CAFOs; reviewed studies, EPA documents, laws, and regulations, and obtained the views of federal and state officials. Because no federal agency collects accurate and consistent data on the number, size, and location of CAFOs, GAO could not determine the exact trends for these operations. However, using USDA data for large farms that raise animals as a proxy for CAFOs, it appears that the number of these operations increased by about 230 percent, from about 3,600 in 1982 to almost 12,000 in 2002. The number of animals raised on large farms also increased during this 20-year period, but the rate of increase varied by animal type. Moreover, EPA does not have comprehensive, accurate data on the number of permitted CAFOs nationwide. As a result, the agency does not have the information that it needs to effectively regulate these CAFOs. EPA is currently working with the states to establish a new national data base. The amount of manure generated by large farms that raise animals depends on the type and number of animals raised, but these operations can produce from 2,800 tons to 1.6 million tons of manure a year. Some large farms that raise animals can generate more manure annually than the sanitary waste produced by some U.S. cities. Manure can be used beneficially to fertilize crops; but according to some agricultural experts, when animal feeding operations are clustered in certain geographic areas, the manure they produce may not be effectively used as fertilizer on adjacent cropland and could increase the potential of pollutants reaching nearby waters and degrading water quality. Since 2002, at least 68 government-sponsored or peer-reviewed studies have been completed that examined air and water quality issues associated with animal feeding operations and 15 have directly linked air and water pollutants from animal waste to specific health or environmental impacts. EPA has not yet assessed the extent to which pollutants from animal feeding operations may be impairing human health and the environment because it lacks key data on the amount of pollutants being discharged by these operations. Considered a first step in developing air emission protocols for animal feeding operations, a 2-year nationwide air emission monitoring study, largely funded by the industry, was initiated in 2007. However, the study, as currently structured, may not provide the scientific and statistically valid data it was intended to provide and that EPA needs to develop these protocols. In addition, EPA has not yet established a strategy or timetable for developing a more sophisticated process-based model that considers the interaction and implications of all emission sources at an animal feeding operation. Two recent federal court decisions have affected EPA's ability to regulate water pollutants discharged by CAFOs. 
The 2005 Waterkeeper decision required EPA to abandon the approach that it had proposed for regulating CAFOs in 2003. Similarly, the Rapanos decision has complicated EPA's enforcement of CAFO discharges because EPA believes that it must now gather more evidence to establish which waters are subject to the Clean Water Act's permitting requirements. |
Over the past decade, the number of acres burned annually by wildland fires in the United States has substantially increased. Federal appropriations to prepare for and respond to wildland fires, including appropriations for fuel treatments, have almost tripled. Increases in the size and severity of wildland fires, and in the cost of preparing for and responding to them, have led federal agencies to fundamentally reexamine their approach to wildland fire management. For decades, federal agencies aggressively suppressed wildland fires and were generally successful in decreasing the number of acres burned. In some parts of the country, however, rather than eliminating severe wildland fires, decades of suppression contributed to the disruption of ecological cycles and began to change the structure and composition of forests and rangelands, thereby making lands more susceptible to fire. Increasingly, federal agencies have recognized the role that fire plays in many ecosystems and the role that it could play in the agencies’ management of forests and watersheds. The agencies worked together to develop a federal wildland fire management policy in 1995, which for the first time formally recognized the essential role of fire in sustaining natural systems; this policy was subsequently reaffirmed and updated in 2001. The agencies, in conjunction with Congress, also began developing the National Fire Plan in 2000. To align their policies and to ensure a consistent and coordinated effort to implement the federal wildland fire policy and National Fire Plan, Agriculture and Interior established the Wildland Fire Leadership Council in 2002. In addition to noting the negative effects of past successes in suppressing wildland fires, the policy and plan also recognized that continued development in the wildland- urban interface has placed more structures at risk from wildland fire at the same time that it has increased the complexity and cost of wildland fire suppression. Forest Service and university researchers estimated in 2005 that about 44 million homes in the lower 48 states are located in the wildland-urban interface. To help address these trends, current federal policy directs agencies to consider land management objectives—identified in land and fire management plans developed by each local unit, such as a national forest or a Bureau of Land Management district—and the structures and resources at risk when determining whether or how to suppress a wildland fire. When a fire starts, the land manager at the affected local unit is responsible for determining the strategy that will be used to respond to the fire. A wide spectrum of strategies is available, some of which can be significantly more costly than others. For example, the agencies may fight fires ignited close to communities or other high-value areas more aggressively than fires on remote lands or at sites where fire may provide ecological or fuel-reduction benefits. In some cases, the agencies may simply monitor a fire, or take only limited suppression actions, to ensure that the fire continues to pose little threat to important resources, a practice known as “wildland fire use.” Federal firefighting agencies need a cohesive strategy for reducing fuels and addressing wildland fire issues. 
Such a strategy should identify the available long-term options and associated funding for reducing excess vegetation and responding to wildland fires if the agencies and the Congress are to make informed decisions about an effective and affordable long-term approach for addressing problems that have been decades in the making. We first recommended in 1999 that such a strategy be developed to address the problem of excess fuels and their potential to increase the severity of wildland fires and the cost of suppression efforts. By 2005, the agencies had yet to develop such a strategy, and we reiterated the need for a cohesive strategy and broadened our recommendation's focus to better address the interrelated nature of fuel reduction efforts and wildland fire response. The agencies said they would be unable to develop a cohesive strategy until they had completed certain key tasks. We therefore recommended that the agencies develop a tactical plan outlining these tasks, the time frames needed for completing each task, and the time frame for completing a cohesive strategy. These tasks include (1) finishing data systems that are needed to identify the extent, severity, and location of wildland fire threats in our national forests and rangelands; (2) updating local fire management plans to better specify the actions needed to effectively address these threats; and (3) assessing the cost-effectiveness and affordability of options for reducing fuels and responding to wildland fire problems. First, federal firefighting agencies have made progress in developing a system to help them better identify and set priorities for lands needing treatment to reduce accumulated fuels. Many past studies have identified fuel reduction as important for containing wildland fire costs because accumulated fuels can contribute to more severe and more costly fires. The agencies are developing a geospatial data and modeling system, called LANDFIRE, intended to produce consistent and comprehensive maps and data describing vegetation, wildland fuels, and fire regimes across the United States. The agencies will be able to use this information to help identify fuel accumulations and fire hazards across the nation, help set nationwide priorities for fuel-reduction projects, and assist in determining an appropriate response when wildland fires do occur. LANDFIRE data are nearly complete for most of the western United States, with data for the remainder of the country scheduled to be completed in 2009. The agencies, however, have not yet finalized their plan for ensuring that collected data are routinely updated to reflect changes to fuels, including those from landscape-altering events, such as hurricanes, disease, or wildland fires themselves. The agencies expect to submit a plan to the Wildland Fire Leadership Council for approval later this month. Second, we reported in 2006 that 95 percent of the agencies' individual land management units had completed fire management plans in accordance with agency direction issued in 2001. As of January 2007, however, the agencies did not require regular updates to ensure that new data (from LANDFIRE, for example) were incorporated into the plans. In addition, in the wake of two court decisions—each holding that the Forest Service was required to prepare an environmental assessment or environmental impact statement under the National Environmental Policy Act (NEPA) to accompany the relevant fire management plan—the Forest Service decided to withdraw the two plans instead of completing them.
It is unclear whether the agency would withdraw other fire management plans successfully challenged under NEPA; nor is it clear whether or to what extent such agency decisions could undermine the interagency policy directing that every burnable acre have a fire management plan. Without such plans, however, current agency policy does not allow use of the entire range of wildland fire response strategies, including less aggressive, and potentially less costly, strategies. Moreover, in examining 17 fire management plans, a May 2007 review of large wildland fires managed by the Forest Service in 2006 identified several shortcomings, including that most of the plans examined did not contain current information on fuel conditions, many did not provide sufficient guidance on selecting firefighting strategies, and only one discussed issues related to suppression costs. Third, over the past several years, the agencies have been developing a Fire Program Analysis (FPA) system, which was proposed and funded to help the agencies (1) determine national budget needs by analyzing budget alternatives at the local level—using a common, interagency process for fire management planning and budgeting—and aggregating the results; (2) determine the relative costs and benefits for the full scope of fire management activities, including potential trade-offs among investments in fuel reduction, fire preparedness, and fire suppression activities; and (3) identify, for a given budget level, the most cost-effective mix of personnel and equipment to carry out these activities. We have said for several years—and the agencies have concurred—that FPA is critical to helping the agencies contain wildland fire costs and plan and budget effectively. Recent design modifications to the system, however, raise questions about the agencies' ability to fully achieve key FPA goals. A midcourse review of the developing system resulted in the Wildland Fire Leadership Council's approving modifications to the system's design in December 2006. FPA and senior Forest Service and Interior officials told us they believed the modifications would allow the agencies to meet the key goals. The officials said they expected to have a prototype developed for the council's review in June 2007 and to substantially complete the system by June 2008. We have yet to systematically review the modifications, but after reviewing agency reports on the modifications and interviewing knowledgeable officials, we have concerns that the modifications may not allow the agencies to meet FPA's key goals. For example, under the redesigned system, local land managers will use a different method to analyze and select various budget alternatives, and it is unclear whether this method will identify the most cost-effective allocation of resources. In addition, it is unclear how the budget alternatives for local units will be meaningfully aggregated on a nationwide basis, a key FPA goal. Although the agencies have made progress on these three primary tasks, as of April 2007, they had yet to complete a joint tactical plan outlining the critical steps, together with related time frames, that the agencies would take to complete a cohesive strategy, as we recommended in our 2005 report. We continue to believe that, until a cohesive strategy can be developed, it is essential that the agencies create a tactical plan for developing this strategy, so that Congress understands the steps and time frames involved in completing the strategy.
As we testified before the Senate Committee on Energy and Natural Resources in January 2007, the steps the Forest Service and Interior agencies have taken to date to contain wildland fire costs lack several key elements fundamental to sound program management, such as clearly defining cost-containment goals, developing a strategy for achieving those goals, and measuring progress toward achieving them. First, the agencies have not clearly articulated the goals of their cost-containment efforts. For cost-containment efforts to be effective, the agencies need to integrate cost-containment goals with the other goals of the wildland fire program— such as protecting life, property, and resources. For example, the agencies have established the goal of suppressing wildland fires at minimum cost, considering firefighter and public safety and values being protected, but they have not defined criteria by which these often-competing objectives are to be weighed. Second, although the agencies are undertaking a variety of steps designed to help contain wildland fire costs, the agencies have not developed a clear plan for how these efforts fit together or the extent to which they will assist in containing costs. Finally, the agencies are developing a statistical model of fire suppression costs that they plan to use to identify when the cost for an individual fire may have been excessive. The model compares a fire’s cost to the costs of suppressing previous fires with similar characteristics. However, such comparisons with previous fires’ costs may not fully consider the potential for managers to select less aggressive—and potentially less costly—suppression strategies. In addition, the model is still under development and may take a number of years to fully refine. Without clear program goals and objectives, and corresponding performance measures to evaluate progress, the agencies lack the tools to be able to determine the effectiveness of their cost-containment efforts. Our forthcoming report on federal agencies’ efforts to contain wildland fire costs includes more- detailed findings and recommendations to the agencies to improve the management of their cost-containment efforts; this report is expected to be released at a hearing before the Senate Committee on Energy and Natural Resources scheduled for June 26, 2007. Complex conditions have contributed to increasing wildland fire severity. These conditions have been decades in the making, and will take decades to resolve. The agencies must develop an effective and affordable strategy for addressing these conditions in light of the large federal deficit and the long-term fiscal challenges facing our nation. To make informed decisions about an effective and affordable long-term approach to addressing wildland fire problems, the agencies need to develop a cohesive strategy that identifies the available long-term options and associated funding for reducing excess vegetation and responding to wildland fires. Because the agencies cannot develop such a strategy until they complete certain key tasks, we continue to believe that in the interim the agencies must create a tactical plan for developing this strategy so that Congress can monitor the agencies’ progress. 
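The fire-cost comparison approach described in this statement, which checks an individual fire's suppression cost against the costs of past fires with similar characteristics, can be illustrated with a simple calculation. The sketch below is not the agencies' statistical model; the historical cost figures, the similarity grouping, and the flagging threshold are all assumptions chosen only to show the basic idea.

```python
# Minimal sketch of a cost-comparison check (not the agencies' actual model).
# Historical costs, the similarity grouping, and the flagging rule are hypothetical.
from statistics import mean, stdev

# Suppression costs (in millions of dollars) of past fires judged "similar"
# to the fire being reviewed -- e.g., similar size, fuels, terrain, and values at risk.
similar_fire_costs = [4.2, 5.1, 3.8, 6.0, 4.7, 5.5]

def flag_if_excessive(fire_cost, historical_costs, threshold_std=2.0):
    """Flag a fire whose cost is unusually high relative to similar past fires."""
    avg = mean(historical_costs)
    spread = stdev(historical_costs)
    return fire_cost > avg + threshold_std * spread

print(flag_if_excessive(5.3, similar_fire_costs))   # within the historical range: False
print(flag_if_excessive(12.0, similar_fire_costs))  # well above it: True
```

As the testimony notes, a comparison of this kind only shows that a fire cost more than similar past fires; it does not show whether a less aggressive, and potentially less costly, strategy was available.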
While the agencies continue to work toward developing a cohesive strategy, they have initiated a number of efforts intended to contain wildland fire costs, but the agencies cannot demonstrate the effectiveness of these cost containment efforts, in part because the agencies have no clearly defined cost-containment goals and objectives. Without clear goals, the agencies cannot develop consistent standards by which to measure their performance. Further, without these goals and objectives, federal land and fire managers in the field are more likely to select strategies and tactics that favor suppressing fires quickly over those that seek to balance the benefits of protecting the resources at risk and the costs of protecting them. Perhaps most important, without a clear vision of what they are trying to achieve and a systematic approach for achieving it, the agencies—and Congress and the American people— have little assurance that their cost-containment efforts will lead to substantial improvement. Moreover, because cost-containment goals should be considered in relation to other wildland fire program goals— such as protecting life, resources, and property—the agencies must integrate cost-containment goals within the overall cohesive strategy for responding to wildland fires that we have consistently recommended. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. David P. Bixler, Assistant Director; Ellen W. Chu; Jonathan Dent; Janet Frisch; Chester Joy; and Richard Johnson made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Increasing wildland fire threats to communities and ecosystems, combined with rising costs of addressing those threats--trends that GAO and others have reported on for many years--have not abated. On average, the acreage burned annually by wildland fires from 2000 to 2005 was 70 percent greater than the acreage burned annually during the 1990s. Annual appropriations to prepare for and respond to wildland fires have also increased substantially over the past decade, totaling about $3 billion in recent years. The Forest Service within the Department of Agriculture and four agencies within the Department of the Interior (Interior) are responsible for responding to wildland fires on federal lands. This testimony summarizes several key actions that federal agencies need to complete or take to strengthen their management of the wildland fire program, including the need to (1) develop a long-term, cohesive strategy to reduce fuels and address wildland fire problems and (2) improve the management of their efforts to contain the costs of preparing for and responding to wildland fires. The testimony is based on several previous GAO reports and testimonies addressing wildland fire issues. 
The Forest Service and Interior agencies need to complete several actions to strengthen their overall management of the wildland fire program. First, because a substantial investment and decades of work will be required to address wildland fire problems that have been decades in the making, the agencies need a cohesive strategy that addresses the full range of wildland fire management activities. Such a strategy should identify the available long-term options and associated funding for reducing excess vegetation and responding to wildland fires if the agencies and the Congress are to make informed decisions about an effective and affordable long-term approach for addressing wildland fire problems. GAO first recommended in 1999 that such a strategy be developed to address the problem of excess fuels and their potential to increase the severity of wildland fires and cost of suppression efforts. By 2005, the agencies had yet to develop such a strategy, and GAO reiterated the need for a cohesive strategy and broadened the recommendation's focus to better address the interrelated nature of fuel reduction efforts and wildland fire response. Further, because the agencies said they would be unable to develop a cohesive strategy until they have completed certain key tasks, GAO recommended that the agencies develop a tactical plan outlining these tasks and the time frames needed for completing each task and a cohesive strategy. Although the agencies concurred with GAO's recommendations, as of April 2007, they had yet to develop a tactical plan. Second, as GAO testified before the Senate Committee on Energy and Natural Resources in January 2007, the steps the Forest Service and Interior agencies have taken to date to contain wildland fire costs lack several key elements fundamental to sound program management, such as clearly defining cost-containment goals, developing a strategy for achieving those goals, and measuring progress toward achieving them. For cost-containment efforts to be effective, the agencies need to integrate cost-containment goals with the other goals of the wildland fire program--such as protecting life, resources, and property--and to recognize that trade-offs will be needed to meet desired goals within the context of fiscal constraints. Further, because cost-containment goals need to be considered in relation to other wildland fire program goals, it is important that the agencies integrate cost-containment goals within an overall cohesive strategy. GAO's forthcoming report on federal agencies' efforts to contain wildland fire costs includes more-detailed findings and recommendations to the agencies to improve the management of their cost-containment efforts; this report is expected to be released at a Senate Committee on Energy and Natural Resources hearing scheduled for June 26, 2007. |
This section describes (1) utility-scale electricity generation in the United States, (2) federal and state regulation of electricity markets, and (3) federal actions that have supported utility-scale electricity generation projects. Developers of utility-scale electricity generation projects build new projects to meet the growing electricity demands of U.S. retail customers. Developers include (1) utilities that build projects to serve their own retail customers and (2) nonutilities, which include both developers that build and sell projects and independent power producers that build and own projects and then sell the electricity generated by the project. In the latter case, the independent power producers sell electricity to utilities or other retail service providers—entities that compete with each other to provide electricity to retail customers by offering electricity plans with differing prices, terms, and incentives. Developers are either for-profit or nonprofit entities. For-profit developers include independent power producers and investor-owned utilities, which are owned by private investors, provide the services of a utility, and serve 75 percent of the U.S. population. Nonprofit developers include municipally-owned utilities and electric cooperatives. Across the United States, the development of new renewable and traditional utility-scale electricity generation projects varied by state from 2004 through 2013 (see fig. 1). From 2004 through 2013, around 2,000 new renewable and about 500 new traditional utility-scale electricity generation projects were built in the United States. However, according to our analysis of SNL Financial data, renewable projects were significantly smaller than traditional ones. For example, utility-scale solar projects averaged about 10 MW of generating capacity, whereas gas projects averaged 285 MW of generating capacity. Overall, renewable projects added about 69,000 MW of new generating capacity, and traditional projects added about 157,000 MW of new generating capacity (see fig. 2 and the illustrative calculation below). The electricity industry has historically been characterized by investor-owned utilities that were integrated and provided the four functions of electricity service—generation, transmission, distribution, and system operations—to all retail customers in a specified area. These integrated utilities were allowed to operate in monopoly service territories, but the rates they could charge retail customers were regulated by state regulatory commissions, often called public utility commissions. These commissions were charged with ensuring that, in the absence of competition, the services these integrated utilities provided were adequate and the rates they charged were reasonable and compensated them for approved costs they incurred. In most states, this regulatory approach continues. These states are referred to as traditionally regulated. During the last 2 decades, some states and the federal government have taken steps to restructure traditionally regulated electricity markets with the goal of increasing competition. Broadly speaking, these efforts by the states have resulted in areas where electricity generation and distribution services are no longer integrated. These are referred to as restructured states. Utilities in restructured states still generally provide transmission, distribution, and system operations to retail customers in their service areas, but they do not own all the generation facilities in those areas.
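To make the project-size comparison above concrete, the sketch below shows how project-level records like those in the SNL Financial data could be summarized into counts, total added capacity, and average project size by fuel type. The records and field names here are hypothetical, and the figures are not GAO's data; this is only an illustration of the kind of summary the analysis describes.

```python
# Illustrative summary of hypothetical project-level records (not GAO's actual data).
from collections import defaultdict

projects = [
    {"fuel": "solar", "capacity_mw": 12.0},
    {"fuel": "solar", "capacity_mw": 8.5},
    {"fuel": "wind", "capacity_mw": 150.0},
    {"fuel": "gas", "capacity_mw": 300.0},
    {"fuel": "gas", "capacity_mw": 270.0},
]

summary = defaultdict(lambda: {"count": 0, "total_mw": 0.0})
for p in projects:
    summary[p["fuel"]]["count"] += 1
    summary[p["fuel"]]["total_mw"] += p["capacity_mw"]

for fuel, s in summary.items():
    avg = s["total_mw"] / s["count"]
    print(f"{fuel}: {s['count']} projects, {s['total_mw']:.0f} MW added, {avg:.1f} MW average")
```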
In restructured states, retail customers may purchase electricity from any qualified retail service provider, and the price for electricity is determined largely by supply and demand. The responsibility for regulating electricity in these states is divided between states and the federal government. States continue to regulate the provision of electricity service by retail service providers, and the Federal Energy Regulatory Commission oversees electricity that is traded in wholesale markets prior to being sold to retail customers. The federal government’s support of the development of utility-scale electricity generation projects generally falls into the following three categories: Providing funds: The federal government provides funds through outlays such as grants or incentive payments that directly cover some of the developer’s project costs. These outlays do not need to be repaid and, therefore, represent a direct cost to the government. Assuming risk: The federal government assumes risk and potential costs associated with risk in a number of ways, including by making direct loans and by guaranteeing loans. When making direct loans, the federal government disburses funds to nonfederal borrowers under contracts requiring the repayment of such funds either with or without interest. When making loan guarantees, the federal government provides a guarantee, insurance, or other pledge regarding the payment of all or a part of the principal or interest on any debt obligation of a nonfederal borrower to a lender. For both loans and loan guarantees, the cost to the government is estimated using the credit subsidy cost—the cost to the government, in net present value terms, over the entire period the loans are outstanding to cover interest subsidies, defaults, and delinquencies (not including administrative costs). Forgoing revenues: The federal government may choose to forgo certain revenues through various measures in the tax code, broadly known as tax expenditures. Tax expenditures are tax provisions— including tax deductions and credits—that are exceptions to the normal structure of income tax requirements necessary to collect federal revenue. Tax expenditures can have the same effects on the federal budget as spending programs—namely that the government has less money available to use for other purposes. As we have previously reported, some of these federal supports may be combined, resulting in support from multiple programs going to the same recipient for the development of a single project. For example, in the last decade, project developers may have combined the support of more than one tax expenditure with grants or loan guarantees from DOE or USDA. Key state supports, in the form of state policies, aided the development of utility-scale electricity generation projects—particularly renewable energy projects—for fiscal years 2004 through 2013. For example, most states have a renewable portfolio standard (RPS) that mandates that retail service providers obtain a certain percentage or amount of the electricity they sell from renewable energy sources, which helped create additional demand for renewable energy, according to many stakeholders we interviewed. 
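The credit subsidy cost described above for direct loans and loan guarantees is, in essence, a net present value of the cash flows the government expects over the life of the loan. The sketch below shows that calculation in its simplest form; the cash flows and discount rate are hypothetical, and a negative result corresponds to the situation, discussed later, in which a loan program yields revenue rather than a cost to the government.

```python
# Simplified credit subsidy calculation (illustration only, with hypothetical figures).
# The credit subsidy cost is the net present value, at loan disbursement, of expected
# cash flows over the life of the loan (excluding administrative costs).

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 = loan disbursement)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical figures, in millions of dollars, for a guaranteed loan:
expected_losses = [0, 1.0, 1.5, 2.0, 1.0, 0.5]    # defaults and interest subsidies by year
expected_fees = [0.8, 0.8, 0.8, 0.8, 0.8, 0.8]    # borrower fees collected by year

rate = 0.03  # assumed discount rate
credit_subsidy_cost = npv(expected_losses, rate) - npv(expected_fees, rate)
print(f"Estimated credit subsidy cost: ${credit_subsidy_cost:.2f} million")
# A negative value would mean the program is expected to yield net revenue
# rather than a cost to the government.
```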
In addition, most states remain traditionally regulated, and regulatory policies in these states provided important state-level support for renewable or traditional projects by allowing regulated utilities to recover costs incurred while purchasing power from existing electricity generation facilities or building new generating capacity themselves. Respondents to our survey of state regulatory commissions and some stakeholders cited other state supports for new renewable energy projects, such as state implementation of the Public Utility Regulatory Policies Act of 1978, and state tax incentives including property tax exemptions and tax credits. According to many stakeholders we interviewed and most respondents to our survey of state regulatory commissions, state RPSs provided important state-level support for new renewable projects built by utilities and independent power producers from 2004 through 2013 (see app. II for a list of stakeholders we interviewed and app. III for a copy of our survey). Of the regulatory commissions that answered our survey questions about the importance of state-level supports, 17 of 19 (89 percent) responded that RPSs were either very or extremely important for renewable projects built by utilities, and 21 of 24 (88 percent) responded that RPSs were either very or extremely important for renewable projects built by independent power producers. According to many stakeholders we interviewed, RPSs provided important support because they mandated the purchase or generation of electricity from renewable energy sources, which helped create additional demand for renewable energy. As of September 2014, 30 states and the District of Columbia had established RPSs, and an additional 8 states had established a voluntary or nonbinding renewable portfolio goal (RPG). The characteristics of state RPSs and RPGs varied by state. Timelines for meeting RPSs or RPGs and the amounts of electricity required to be obtained from renewable energy sources varied, according to survey respondents. For example, Michigan's RPS required retail service providers to generate 10 percent of the electricity they sold from renewable energy sources by 2015. In contrast, Hawaii's RPS required utilities to obtain 40 percent of the electricity they sell from renewable energy sources by 2030. Types of entities subject to RPSs and RPGs also varied. For example, 14 state regulatory commissions confirmed that their RPSs or RPGs applied specifically to investor-owned utilities. Another commission in a restructured state noted that the state's RPS did not apply to utilities; instead, it applied to the retail suppliers that provided electricity in utilities' service areas. Types of energy sources that could satisfy RPSs or RPGs also varied. For example, the California Energy Commission's guidebook on RPS eligibility identifies a variety of renewable energy sources—such as solar photovoltaic, wind, and biomass—that can satisfy California's RPS. In contrast, under Pennsylvania law, the state's RPS allows "alternative energy sources" to satisfy the RPS and defines alternative to include waste coal and coal mine methane. Additionally, some state RPSs include provisions that require a certain percentage of the electricity generated or produced to be derived from specific types of renewable energy. For example, according to Lawrence Berkeley National Laboratory, 17 states plus the District of Columbia have special provisions encouraging solar or other energy sources.
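An RPS obligation of the kind described above reduces to a simple ratio test: the share of a supplier's retail sales met with qualifying renewable generation must reach the required percentage by the target year. The sketch below illustrates that check; the sales and generation figures are hypothetical, and the 10 percent requirement is modeled loosely on the Michigan example rather than on any state's actual compliance rules, which typically involve renewable energy credits and other details omitted here.

```python
# Illustrative RPS compliance check (hypothetical figures; real state rules are
# more detailed, e.g., renewable energy credits, banking, and carve-outs).
rps_requirement = 0.10                 # e.g., 10 percent of retail sales, as in the Michigan example
retail_sales_mwh = 9_500_000           # hypothetical annual retail sales
qualifying_renewable_mwh = 1_020_000   # hypothetical qualifying renewable generation

renewable_share = qualifying_renewable_mwh / retail_sales_mwh
print(f"Renewable share: {renewable_share:.1%}")
print("Meets RPS requirement" if renewable_share >= rps_requirement
      else f"Shortfall: {(rps_requirement - renewable_share) * retail_sales_mwh:,.0f} MWh")
```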
See appendix IV for additional information about individual state RPSs and RPGs. In addition to creating additional demand for renewable energy, state RPSs and RPGs were an important factor in determining where projects were built. More specifically, our analysis of utility-scale electricity generation project data and responses to our survey found that 91 percent of new renewable projects from 2004 through 2013 were built in states with RPSs or RPGs, and that these projects accounted for 94 percent of new renewable generating capacity. Several stakeholders explained that state RPSs made it possible for developers to secure power purchase agreements (PPA)—contracts in which a utility agrees to purchase power, generally over a term of 20 to 25 years. In addition, most stakeholders said that PPAs were essential to moving a project forward because they provided the developer and potential investors an expectation of stable revenue for projects. See figure 3 for additional information about where new renewable generating capacity was added from 2004 through 2013. Most states remain traditionally regulated, and regulatory policies in these states provided important state-level support by allowing regulated utilities to recover costs incurred while purchasing power from existing renewable or traditional electricity generation facilities or building new generating capacity themselves. Specifically, 29 of 46 (63 percent) respondents to our survey reported that regulatory commissions regulated the electricity generation services provided by investor-owned utilities (see app. V for more detail about what states reported about their regulatory status). As previously discussed, in these states, state regulatory commissions set retail customers’ electricity rates to compensate regulated utilities for the costs they incur serving these customers—including expenses incurred while purchasing power or building new electricity generation capacity. Of the 19 traditionally regulated states that answered a survey question about the importance of commission-approved rates of return for building new traditional projects, 17 (90 percent) reported that they were either very or extremely important. In addition, of the 22 traditionally regulated states that answered a survey question about the importance of commission-approved rates of return for building new renewable projects, 19 (86 percent), reported that they were very or extremely important. In addition, according to at least 36 regulatory commissions we surveyed, utilities subject to this type of regulation did not build projects from fiscal year 2004 through 2013 without seeking approval to recover their costs and earn a return on their investments. All 29 regulatory commissions in traditionally regulated states also reported that, when utilities purchased power for fiscal years 2004 through 2013, the utility was allowed to recover associated costs by passing these costs on to retail customers. State regulators and some stakeholders cited other state supports for the development of new renewable projects for fiscal years 2004 through 2013. For example, in some cases, state regulatory commissions allowed regulated utilities to offer their retail customers “green power”— the option to purchase renewably produced electricity to meet their electricity needs. According to DOE, in 2012, the most recent year for which data were available, more than 860 traditionally regulated utilities, which served more than half of all U.S. 
retail customers, offered such an option. Another state policy that supported the development of new renewable projects was state implementation of the Public Utility Regulatory Policies Act of 1978. Specifically, 17 of 21 of the regulatory commissions that answered one of our survey questions about how developers earned revenues reported that developers earned revenues through PPAs obtained as a result of state implementation of the Public Utility Regulatory Policies Act. Finally, several stakeholders also told us that state tax incentives, such as property tax exemptions and tax credits, were helpful for developing new renewable projects. For example, some solar developers have used the New Mexico Renewable Energy Production Tax Credit, which allows companies that generate electricity from solar energy to receive a tax credit ranging from $0.015 to $0.04 per kilowatt-hour over a 10-year period. The program also provides a tax credit against corporate income taxes of $0.01 per kilowatt-hour for companies that generate electricity from wind or biomass. For fiscal years 2004 through 2013, programs at DOE, Treasury, and USDA aided the development of new electricity generating capacity through outlays, loan programs, and tax expenditures. Most of this support was directed at renewable projects, with federal support for traditional projects largely directed toward reducing the cost of fuel rather than the development of new projects. As shown in table 1, one program—Treasury’s temporary Payments for Specified Energy Property in Lieu of Tax Credits (payments-in-lieu-of-tax-credits program)— accounted for most of the $16.8 billion in total outlays, which supported over 29,000 MW of new generating capacity. Federal loan programs accounted for an estimated $1.2 billion in credit subsidy costs that supported nearly 10,000 MW of new generating capacity. Federal tax expenditures—which reduce a taxpayer’s tax liability by providing, for example, credits toward or deferrals of tax liability—accounted for an estimated $15.1 billion in forgone revenue to the government, but limited data hinder an understanding of their contributions to new generating capacity and ultimately, their effectiveness. In total, $16.8 billion in outlays supported over 29,000 MW of new generating capacity through eight federal programs, and Treasury’s temporary payments-in-lieu-of-tax-credits program accounted for 99 percent of the total outlays. Treasury’s program was enacted in the American Recovery and Reinvestment Act of 2009 (Recovery Act), and provided cash payments of up to 30 percent of the total eligible costs of qualifying renewable energy facilities. These cash payments were available in lieu of the Energy Investment Credit, also known as the Investment Tax Credit (ITC) or the Energy Production Credit, also known as the Production Tax Credit (PTC). During the first 5 years of Treasury’s program, developers of 1,073 utility-scale electricity generation projects developed 28,309 MW of generating capacity across the United States, according to the data submitted by developers in their applications for payments in lieu of tax credits. In addition to Treasury’s program, seven other federal programs at DOE, Treasury, and USDA, supported 128 projects for fiscal years 2004 through 2013 through grants, incentive payments, or other mechanisms, for a total of $241 million in additional federal outlays. 
For example, USDA’s Rural Energy for America Program provides outlays in the form of grants to farmers, ranchers, and small businesses in rural areas to assist with purchasing and installing renewable energy systems. These grants supported 50 projects that added 139 MW of electric generating capacity for total outlays of nearly $16 million. DOE’s outlay program that supported the greatest number of projects was its now-discontinued Renewable Energy Production Incentive program, which provided production-based cash payments to nonprofit owners of qualified renewable energy projects for 10 years after the project was placed in service. According to DOE officials, this program was designed to provide incentives for entities that do not pay income taxes—such as electric cooperatives and municipally-owned utilities—similar to those provided to for-profit developers through the PTC. Unlike the PTC, this program was subject to annual appropriations by Congress. The Renewable Energy Production Incentive program provided $26 million in incentive payments to 59 projects with 704 MW of generating capacity but, according to agency officials, was discontinued in 2010. Projects that received support through federal outlays may have also received support through loan programs or tax expenditures. See appendixes VI and VII for program descriptions and outlays for all federal programs. Six federal loan programs—providing both direct loans and loan guarantees—at DOE and USDA accounted for an estimated $1.2 billion in credit subsidy costs that supported 70 projects for a total of 9,748 MW of new generating capacity. DOE administered two of the six loan programs that were authorized to support the development of projects for fiscal years 2004 through 2013, but only one DOE loan program actually awarded loan guarantees for utility-scale electricity generation projects during this timeframe. Specifically, DOE’s loan guarantee program for innovative technologies was authorized to support the development of these projects, but did not award loan guarantees to any utility-scale electricity generation projects during these years. DOE’s now-expired Recovery Act loan guarantee program—which supported both innovative and commercial technologies—authorized loans for 21 utility-scale electricity generation projects with 3,976 MW of generating capacity and is estimated to have provided over $1.2 billion in federal support through payments of credit subsidy costs as of the close of fiscal year 2013. (For an explanation of how the credit subsidy costs for DOE’s loan guarantees were calculated, see app. I.) Under the Recovery Act loan guarantee program, the credit subsidy cost was paid with appropriated funds, whereas under the loan guarantee program for innovative technologies, borrowers generally had to pay for their own credit subsidy costs. USDA administered four programs that provided either loans or loan guarantees for both traditional and renewable projects for fiscal years 2004 through 2013, and earned revenues for the government. In aggregate, USDA’s loan programs resulted in a negative credit subsidy cost—that is, they yielded revenue rather than incurring a cost to the government—of $14 million. According to a USDA official, this is because USDA’s Direct and Guaranteed Electric Loans program had a low rate of default and earned revenues from borrowers’ annual fees and interest. 
This program, which provided both loans and loan guarantees to establish and improve electric service in rural areas, supported 32 projects that added 5,714 MW in new generating capacity. The other three loan programs at USDA supported 17 projects and added 58 MW of additional generating capacity. Projects that received support through federal loan programs might also have received support through outlays, including Treasury's payments in lieu of tax credits, or tax expenditures. See appendixes VI and VII for program descriptions and credit subsidy costs for all federal loan programs. Seven tax expenditures administered by the Internal Revenue Service (IRS) at Treasury accounted for an estimated $15.1 billion in forgone revenue for fiscal years 2004 through 2013, but IRS does not collect or report key data on the two largest tax expenditures supporting new utility-scale electricity generation projects. Tax expenditures supported the development of both renewable and traditional projects, and the majority of the forgone revenue (91 percent) supported renewable projects. Of the seven tax expenditures, the following four accounted for nearly 97 percent of the forgone revenue ($14.6 billion):

PTC. The PTC accounted for an estimated $8.1 billion in forgone revenue and, as of the end of 2013, provided an income tax credit of 2.3 cents per kilowatt-hour for energy produced from wind and certain other renewable energy sources. Since it was first made available in 1992, the PTC has expired and been extended by Congress six times—in 1999, 2001, 2003, 2012, 2013, and 2014. Most recently, the PTC was extended for certain qualified facilities for projects that began construction before January 1, 2015. Because the credit is taken over a 10-year period once a project is placed in service, the PTC will continue to result in forgone revenue for years to come.

ITC. The ITC accounted for an estimated $3.4 billion in forgone revenue, and it provided an income tax credit of up to 30 percent for the development of certain renewable projects. Developers of certain qualifying facilities could choose to take the ITC in lieu of the PTC if the project met certain criteria; however, developers could not claim both tax credits for the same project. The ITC was first established in 1978 at 10 percent of eligible investment costs and was temporarily increased in 2005 to 30 percent for solar and certain other technologies. Subsequent legislation extended the ITC at 30 percent for these technologies through December 31, 2016. After December 31, 2016, the ITC is scheduled to return to 10 percent of eligible investment costs for solar projects.

Accelerated Depreciation for Renewable Energy Property. Accelerated Depreciation Recovery Periods for Specific Energy Property: Renewable Energy (accelerated depreciation for renewable energy property) accounted for an estimated $1.7 billion in forgone revenue. This provision is similar to accelerated depreciation provisions available for a wide range of investments in other sectors. Accelerated depreciation for renewable energy property allows developers of certain renewable energy properties to deduct larger amounts from their taxable income sooner than they would normally be able to do under the straight-line depreciation method. Specifically, it allows them to recover investments by deducting the cost of the investment from their taxable income over a 5-year period.
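To make the timing difference concrete, the sketch below compares the deductions available under a 5-year accelerated schedule with those under 20-year straight-line depreciation for a hypothetical $100 million investment. It is illustrative only: the percentages shown for the 5-year schedule are the standard half-year-convention rates and are an assumption for this sketch, not figures drawn from this report, and the investment amount does not correspond to any project discussed here.

# Illustrative only: compares deduction timing for a hypothetical $100 million
# investment under 5-year accelerated depreciation (assumed half-year-convention
# percentages) versus 20-year straight-line depreciation.
cost = 100_000_000  # hypothetical depreciable basis, in dollars

# Assumed half-year-convention percentages for 5-year property.
accelerated_rates = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]
accelerated = [cost * rate for rate in accelerated_rates]

straight_line = [cost / 20] * 20  # $5 million deducted in each of 20 years

print("Accelerated deductions, years 1-6:", [round(d) for d in accelerated])
print("Straight-line deduction, each year:", round(straight_line[0]))
print(f"Share of cost deducted after 5 years: accelerated "
      f"{sum(accelerated[:5]) / cost:.0%}, straight line "
      f"{sum(straight_line[:5]) / cost:.0%}")

Accelerating the deductions does not change the total amount deducted; it defers tax, which raises the present value of the benefit to the developer.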
Unlike the ITC and PTC, which have expiration dates and have been subject to congressional review as part of efforts to expand, extend, or reauthorize them, accelerated depreciation for renewable energy property—like other accelerated depreciation provisions—does not have a specific expiration date and, as such, is not subject to periodic review by Congress.

Credit for Investment in Clean Coal Facilities. This credit for traditional fuel sources is estimated to have accounted for $1.4 billion in forgone revenue to support the development of clean coal projects. The credit provides up to 30 percent of qualified investments in clean coal facilities greater than 400 MW in size. Unlike the ITC, PTC, and accelerated depreciation for renewable energy property—for which all eligible taxpayers may claim the tax expenditure—the Credit for Investment in Clean Coal Facilities is subject to a specified amount authorized by Congress. As such, developers must submit an application that includes a description of the project, the project's financing structure, and the proposed technology to apply for the credit. According to IRS officials, DOE's NREL reviewed the applications to determine whether the projects met the required technical criteria, and then made recommendations to IRS about whether the projects should receive an allocation of the tax credit. According to IRS officials, 12 awards have been made for projects included in our scope, and 4 of those projects had been placed in service as of September 2014. Additionally, according to the officials, further allocations of the credit are available. IRS announced a 2015 reallocation round on March 9, 2015, and the agency plans to send acceptance letters by April 30, 2015. Projects that received support through federal tax expenditures may also have received support through outlay or loan programs. See appendixes VI and VII for program descriptions and details on estimates of forgone revenue for all federal tax expenditures that supported the development of utility-scale electricity generation projects. While some project-level data are collected for the projects supported through outlays, loan programs, and the Credit for Investment in Clean Coal Facilities, basic information such as projects supported or MW of generating capacity added is not collected or available for the ITC and PTC. The ITC and PTC—which accounted for an estimated $11.5 billion in forgone revenue from 2004 to 2013—are the two largest tax expenditures supporting these projects, and in the past 5 years the estimated forgone revenues from them have more than tripled from $870 million to $2.8 billion. Key information is not available because the IRS does not collect certain project-level data, such as the total generating capacity added. Specifically:

For the ITC, the IRS requires all developers to report the total amount of the credit they are claiming for all eligible projects aggregated as a single line item; therefore, the IRS does not know the total number of projects for which an individual developer is claiming the credit. Developers who were eligible for the PTC but instead elected to claim the ITC must also submit supporting documentation that includes project-level data, such as the generating capacity and technology of each specific facility for which they are claiming the credit; however, the IRS does not require such project-level information from developers eligible only for the ITC.
Consistent with the Internal Revenue Code, in general, the IRS is not allowed to make individual taxpayer information available for analysis, but IRS can and does make available certain aggregated data. However, IRS does not make available the project-level data it collects for the ITC.

For the PTC, as with the ITC, the IRS requires all developers to report the total amount of the credit they are claiming for all eligible projects aggregated by technology as a single line item; therefore, the IRS does not know the total number of projects for which an individual developer is claiming the PTC. Unlike with the ITC, the IRS requires developers to report the technology type (e.g., wind, geothermal, solar) for which they are claiming the PTC. IRS does not require developers to submit any project-level data, such as generating capacity, when they claim the PTC. The IRS is not required to collect or evaluate data other than those required for administration of the tax code unless it is legislatively mandated to collect additional information. IRS officials stated that, given a number of factors, IRS is unlikely to collect additional information on these tax expenditures without being directed to do so by Congress. IRS has not evaluated the costs of collecting these data. As we have previously found, collecting additional data to identify users and specific properties would require changes in IRS forms and information processing procedures. To some extent, the increasing number of taxpayers filing electronically could make it easier for IRS to collect additional data without expensive transcription costs. In considering additional data requirements, it is important that Congress weigh the need for more information against IRS's other priorities because such requirements likely would increase, to some degree, the administrative costs for IRS and the compliance burden on taxpayers. If policymakers conclude that additional data would facilitate examining a particular tax expenditure, it would then be important to consider what data are needed, who should provide and collect the data, how to collect the data, what collection would cost, and whether the benefits of collecting the additional data warrant that cost. Nonetheless, since 1994, we have encouraged greater scrutiny of tax expenditures to help policymakers make more informed decisions about using such mechanisms as a means of supporting policies. For example, we have found that substantial revenues are forgone through tax expenditures, yet policymakers have had few opportunities to make explicit comparisons or evaluate trade-offs between tax expenditures and federal spending programs. Based on these and other findings, we recommended that Congress explore opportunities to exercise more scrutiny over indirect spending through tax expenditures, and Congress took action by subjecting certain tax expenditures to closer examination. In addition, in 2005, we found that tax expenditures may not always be efficient, effective, or equitable and, consequently, we concluded that information on tax expenditures could help policymakers make more informed decisions as they adapt current policies in light of fiscal challenges and other overarching trends.
We also concluded that reviews of tax expenditures could help establish whether these programs are relevant to today's needs and, if so, how well tax expenditures have worked to achieve specific objectives and whether the benefits from particular tax expenditures are greater than their costs. We have also previously concluded that limited data about specific tax expenditures can hinder analysis of their effectiveness. For example, in 2008, we determined that the data the IRS collected were insufficient for examining efforts to use a tax expenditure to encourage economic development on Indian reservations. As a result, we suggested that Congress consider requiring IRS to collect additional information about the tax expenditure. Similarly, in examining a broad range of tax expenditures in 2013, we concluded that it was becoming more pressing to determine whether tax expenditures were achieving specific objectives. Additionally, the Government Performance and Results Act Modernization Act of 2010 established a framework for providing a more crosscutting and integrated approach to focusing on results and improving government performance. This act makes clear that tax expenditures are to be included in identifying the range of federal agencies and activities that contribute to crosscutting goals, and guidance from the Office of Management and Budget directs agencies to do so for their agency priority goals. Such information can be used to inform congressional decisions about authorizing or reauthorizing provisions in the tax code. In requesting this report, Congress asked us to evaluate federal supports for the development of utility-scale electricity generation projects, for example, by providing information about how many projects were built, the technologies supported, and the amount of generating capacity added. The absence of project-level data for the ITC and PTC—data such as those available for projects that took Treasury's payments in lieu of these tax credits—precluded us from examining and providing this information. Without these data, Congress and others do not have basic information about what has been supported, including how many projects used these tax expenditures or how much generating capacity was added. According to the Congressional Research Service, the ITC and PTC were designed to encourage the commercialization of renewable energy technologies. Basic information is required for any evaluation of these tax credits, such as determining whether they were effective at encouraging development of new renewable projects. Developers combined state and federal supports to secure financing for renewable projects, and these supports reduced the price paid for renewable electricity by retail customers. Reducing state or federal supports would likely reduce the development of renewable projects unless PPA prices increased to compensate for the reduction in federal support.

Debt and Equity

Project financing through private markets generally takes two forms—debt and equity. Similar to a home mortgage, debt is incurred when a developer borrows funds with prescribed repayment terms—such as an interest rate and a specified number of payments. The lender has no ownership in the property but may be able to take over the property if the borrower does not make payments as agreed. In addition, in the event of a bankruptcy or other loan default, the lender typically has the first right to any assets.
Equity is invested funds that give the investor an ownership interest in the operations and assets of a business and a right to a portion of any income remaining after payment of operating costs and payments on debt. The investor is not entitled to repayment if the project fails. Because investors consider debt to be less risky than equity, debt is typically the cheaper form of private financing. However, lenders typically will not lend the total costs of a project, and they often place limits on the amount of money they will lend by limiting the amount of the payment on the loan to a specified percentage of the expected income of the project. Developers combined state and federal supports to finance renewable projects. As previously noted, state supports in the form of RPSs and RPGs mandated that retail service providers obtain a certain percentage or amount of the electricity they sell from renewable sources. These supports created additional demand for electricity from renewable sources. Retail service providers comply with this requirement by either generating their own electricity from renewable sources or by purchasing this electricity from a third party, such as an independent power producer. To purchase renewable electricity, retail service providers often issue solicitations seeking bids for PPAs—long-term contracts in which the retail service provider agrees to purchase power and which provide the developer with an expectation of stable revenue. In response to these solicitations, developers bid for these PPAs. Once bids are selected and developers are awarded PPAs, developers generally then attempt to secure debt and equity to finance their projects through private markets. In seeking project financing, developers combine the value of the revenues guaranteed in their PPAs and the value of the federal supports to secure favorable financing terms.

Tax Equity Partnerships

In several cases, developers of renewable projects had to enter into complex financial partnerships—tax equity partnerships—to use certain tax expenditures. For example, the use of tax expenditures like the Investment Tax Credit and Production Tax Credit required developers' tax liability to equal or exceed the value of the tax expenditure. Developers with substantial corporate profits generally had enough tax liability to be able to directly use these tax expenditures. However, developers with lower tax liability had to enter into arrangements known as tax equity partnerships with third parties—usually large financial institutions, such as investment banks—that had sufficient tax liability in order to use tax expenditures. Under these partnerships, the third party typically provided equity for the project in exchange for the right to use nearly all of the tax benefits and receive a share of the project revenues. According to stakeholders, the partnerships typically incurred legal, administrative, and other transaction costs that reduced the value of tax expenditures to the developers' projects by 10 to 30 percent. Nonetheless, some stakeholders reported that tax equity partnerships were critical for projects to move forward. Federal supports reduced the price of renewable electricity for retail customers by reducing the cost to the developers to build projects in two key ways. First, some federal loan programs reduced the cost of capital—i.e., the funds necessary to build the projects.
For example, some stakeholders said USDA loan programs offered lower interest rates than were available through the capital markets, which lowered the overall cost of borrowing. Second, federal tax expenditures and payments allowed developers to recover some of their costs. For example, the ITC allowed developers to recover up to 30 percent of eligible project costs for solar and other qualifying renewable energy facilities by reducing the amount of taxes they owed. However, many stakeholders noted that, in some cases, developers needed to enter into complex financial partnerships—tax equity partnerships—to utilize federal tax expenditures, which reduced the value of the federal support to the developer. According to several stakeholders, the amount that developers can bid for a PPA depends on how much federal support the project expects to receive; therefore, these supports allowed developers to offer lower prices in their PPAs than they otherwise could have. These lower prices were then passed on to retail customers. In this way, these supports can be thought of as reducing the price of electricity that retail customers pay. Reducing state or federal supports would likely reduce the development of renewable projects. To understand the effects of changes to federal tax expenditures, we modeled hypothetical utility-scale solar photovoltaic and wind projects and found that reducing or eliminating the ITC or PTC would likely reduce the number of renewable projects built because either developers’ returns would decline or PPA prices would increase. For our analysis, we held investor rates of return—which stakeholders typically refer to as the internal rate of return—constant. We modeled the two projects with variations in the levels of the ITC—at 10 and 30 percent— and PTC—with no PTC and with the PTC at $0.023 per kilowatt-hour. Our modeling suggests that reducing or eliminating federal financial supports could result in substantially reduced returns for developers, which could reduce the number of new renewable utility-scale electricity generation projects built. For example, in the case of the solar project, we found that with a reduced ITC and constant PPA prices, the developer’s returns could decrease by as much as 76 percent (see table 2). Likewise, for the wind project, we found that without the PTC, the developer’s returns could decrease by 68 to 109 percent—in other words, in the extreme case, the developer would lose money by developing the project. Our modeling results are consistent with the effects of past expirations of the PTC. As we have previously found, in the years following the PTC’s expiration, new additions of wind capacity fell dramatically. Alternatively, we found that if we held the developer’s returns constant, a reduction or elimination of federal supports could mean that, for future projects to remain viable, electricity prices in PPAs would have to increase. Specifically, for the solar project with the lower ITC, we found that the electricity prices in PPAs would need to increase by 20 to 27 percent if developers were to maintain their returns. For wind projects without the PTC, we found that electricity prices would need to increase by 32 to 62 percent if developers were to maintain their returns (see table 3). 
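The direction of these modeling results can be illustrated with a cash-flow sketch that is far simpler than SAM. The sketch below is not the SAM model and omits debt, taxes, depreciation, price escalation, and partnership structures; the installed cost, annual generation, PPA price, and credit amount are assumptions chosen only to show why removing a production credit either lowers the developer's return or forces the PPA price up.

# Highly simplified, hypothetical illustration of the tradeoff described above.
# Not the SAM model: ignores debt, taxes, depreciation, and partnership structures.

def irr(cash_flows, lo=-0.9, hi=1.0):
    # Internal rate of return found by bisection on net present value.
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

investment = 170_000_000   # assumed installed cost, in dollars
annual_mwh = 530_000       # assumed annual generation, in MWh
years = 20                 # project life and PPA term
credit = 23.0              # assumed production credit, $/MWh, paid in years 1-10

def project_irr(ppa_price, credit_per_mwh):
    # Year-0 outflow followed by 20 years of electricity revenue,
    # plus the production credit in the first 10 years.
    flows = [-investment]
    for year in range(1, years + 1):
        revenue = annual_mwh * ppa_price
        if year <= 10:
            revenue += annual_mwh * credit_per_mwh
        flows.append(revenue)
    return irr(flows)

ppa = 55.0  # assumed PPA price, $/MWh
return_with_credit = project_irr(ppa, credit)
return_without_credit = project_irr(ppa, 0.0)
print(f"IRR with credit: {return_with_credit:.1%}; without credit: {return_without_credit:.1%}")

# Solve, by bisection on price, for the PPA price that restores the original
# return once the credit is removed.
low_price, high_price = ppa, 2 * ppa
for _ in range(100):
    mid_price = (low_price + high_price) / 2
    if project_irr(mid_price, 0.0) < return_with_credit:
        low_price = mid_price
    else:
        high_price = mid_price
print(f"PPA price needed without the credit: ${high_price:.2f}/MWh versus ${ppa:.2f}/MWh with it")

Under these assumed inputs, removing the credit cuts the internal rate of return by several percentage points at the original PPA price, and restoring the original return requires a PPA price roughly a third higher. The sketch is directionally consistent with the results reported above, but the magnitudes depend entirely on the assumptions.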
Placed in a broader context, because PPA prices are determined through negotiations between developers and retail service providers, the willingness of these providers and state regulators to agree to higher prices will likely constrain the ability of developers to maintain their returns. If expected returns from renewable energy projects are reduced past a certain point, developers may seek alternative investments, either in the energy sector or elsewhere. Collectively, the constraints faced by developers with reduced or eliminated federal supports would likely lead to a reduction in the level of investment in new renewable utility-scale electricity generation projects. The extent to which development of renewable projects would decrease depends on, among other factors, how states respond to the effects of reduced federal supports. Specifically, reducing federal supports would reduce developers’ returns unless PPA prices increased to compensate for the reduction in federal support. The amount PPA prices could increase may be constrained by how close states are to completing their RPSs. Four of the 24 state regulatory commissions that responded to our survey question about progress made by investor-owned utilities toward completing their RPSs reported that they have either met or exceeded their RPSs (see fig. 4). In these states, if PPA prices were to increase beyond the prices available for other sources of electricity, renewable development would likely decline because investor-owned utilities would not be required to purchase the more expensive renewable electricity. However, assuming that RPSs remain the same in the 20 states that reported not having met their RPSs, investor-owned utilities will need to obtain additional renewable capacity even if the price to do so increases. The amount PPA prices could increase may also be constrained by state cost-containment mechanisms. Cost-containment mechanisms are sometimes included in state RPS legislation to limit costs associated with RPS compliance. For example, some RPSs allow state regulatory commissions to freeze or delay RPS requirements if purchasing additional renewable energy forces retail prices to exceed a threshold deemed excessive. Of the 27 states that reported having an RPS in our survey, 18 reported having cost-containment mechanisms in place, and 8 reported having no such mechanism. Looking forward, however, some states may revise or implement cost-containment mechanisms if prices of renewable electricity increase. Some stakeholders noted that, in the absence of federal supports, developers would continue to build renewable projects to meet existing RPSs even if doing so increased electricity prices for retail customers, unless states had existing cost- containment mechanisms or implemented new ones. The federal government has demonstrated a commitment to supporting the development of utility-scale electricity generation projects through a variety of federal programs. While agencies collect data on projects supported through outlays, loan programs, and some tax expenditures, including the Credit for Investment in Clean Coal Facilities, the IRS does not collect such data for the ITC or PTC—the two largest tax expenditures supporting new utility-scale electricity generation projects. The ITC and PTC have increased sharply in recent years—resulting in billions of dollars in forgone revenue to the government—and will continue to represent significant forgone revenue for years to come. 
Since 1994, our body of work has encouraged greater scrutiny of tax expenditures to help policymakers make more informed decisions. Specifically, we have concluded that more data on tax expenditures would allow policymakers to compare and evaluate trade-offs between tax expenditures and outlays and loan programs. Data currently available on outlays and loan programs allow policymakers to see how many projects and megawatts of new generating capacity were added with federal support and thus to assess how effective the programs were at encouraging the development of renewable projects. However, because basic information on the ITC and PTC is not available, it will be difficult for Congress to evaluate the effectiveness of these tax credits or compare them with outlay or loan programs as it considers reauthorizing or extending them. If Congress wishes to evaluate the effectiveness of the ITC and the PTC as incentives for the development of renewable utility-scale electricity generation projects as it considers proposals to extend the ITC or reauthorize the PTC, it should consider directing the Commissioner of Internal Revenue to take the following two actions:

Provide Congress with project-level data currently collected from taxpayers who claim the ITC in lieu of the PTC—such as the number of projects for which they are claiming the credit, the technology of the projects taking the credit, and the total generating capacity added—and make such data available for analysis. Additionally, take steps to collect and report the same data from all taxpayers claiming the ITC.

Take steps to collect project-level data from taxpayers claiming the PTC—such as the number of projects for which they are claiming the credit, the technology of the projects taking the credit, and the total generating capacity—and make these data available for analysis.

We provided a draft of this report to DOE, Treasury, and USDA for review and comment. None of the agencies provided formal comments. Treasury provided technical comments, which we integrated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the appropriate congressional committees, the Secretaries of Agriculture, Energy, and the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made significant contributions to this report are listed in appendix IX. This report examines supports for utility-scale electricity generation projects for fiscal years 2004 through 2013. Our objectives were to (1) identify key state supports for these projects; (2) examine key federal financial supports provided through outlays, loan programs, and tax expenditures for these projects; and (3) examine how state and federal supports affect the development of new renewable projects and how reducing federal supports may affect such development.
To identify key state supports, examine federal supports, and examine how these supports affect the development of new renewable projects, we interviewed officials at the U.S. Department of Energy (DOE), U.S. Department of the Treasury (Treasury) and U.S. Department of Agriculture (USDA); representatives from industry trade associations; and project developers known to have received federal support to build projects. We then used the “snowball sampling” technique and selected stakeholders to interview who had experience or knowledge related to our objectives. We conducted semistructured interviews with nearly 50 stakeholders including project developers and owners; attorneys and experts who specialize in project finance; industry trade associations; nongovernmental organizations; banks that provide and arrange equity and debt financing; investor-owned utilities, municipally-owned utilities, and electric cooperatives; state energy agencies; and an independent system operator. Because this was a nonprobability sample, the information these stakeholders provided cannot be generalized to other stakeholders but provided valuable insights. See appendix II for a list of stakeholders we interviewed. To identify the number of utility-scale electricity generating projects constructed and the generating capacity added from 2004 through 2013, we analyzed data from the SNL Financial database. To assess the reliability of these data, we interviewed a knowledgeable individual at SNL Financial and reviewed existing information about the system. From this review, we determined that the data were sufficiently reliable for the purposes of this report. To further examine state and federal supports that aided the development of these projects, we sent a Web-based survey to officials at state regulatory agencies in all 50 states, the District of Columbia, and five U.S. territories. Of those we contacted, 46 states and three U.S. territories responded, for a response rate of 88 percent. We asked survey respondents about: (1) regulatory commission responsibilities; (2) the role of the regulatory process in supporting construction of new utility-scale electricity generation projects; (3) the importance of federal and state supports relative to broader market conditions; (4) federal supports for new utility-scale electricity generation projects; (5) state supports for new utility-scale electricity generation projects; and (6) renewable portfolio standards and goals. We solicited comments on an initial draft of our survey from knowledgeable officials at five state regulatory agencies and at the National Association of Regulatory Utility Commissioners—the national association representing state public service commissioners. We conducted pretests with them to ensure that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on survey respondents, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We chose to pretest with five states that had renewable portfolio standards, as well as some that were traditionally regulated and some with restructured electricity markets. We conducted two pretests in person and four over the telephone. We revised the content and format of the survey as appropriate after each pretest based on the feedback we received. We developed and administered the Web-based survey through a secure server. 
When we completed the final survey questions and format, we sent an e-mail on July 31, 2014, announcing the survey to the regulatory commissions in all 50 states, the District of Columbia, and five U.S. territories. On August 6, 2014, we notified them via e-mail that the survey was available online and provided unique passwords and usernames. We sent follow-up e-mail messages on August 14, 2014, and again on August 20, 2014, to those who had not yet responded. We then contacted all remaining nonrespondents by telephone. We sent a final e-mail that was copied to the regulatory commission's chairperson on September 8, 2014, stating that we were extending the deadline for submission to September 12, 2014. The questionnaire was available online until September 22, 2014. We sent follow-up e-mails to officials at 14 state regulatory commissions to clarify data about states' renewable portfolio standards and regulatory responsibilities. We made some changes to the renewable portfolio standards data collected as a result of these conversations. As noted, surveys were completed by 46 states and three U.S. territories, for a response rate of 88 percent. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into the survey or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the survey, collecting the data, and analyzing them to minimize such nonsampling error—including using a social science survey specialist to help design and pretest the survey in collaboration with GAO staff who had subject matter expertise. When we analyzed the data, an independent analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database, thereby minimizing error. For a copy of our survey, see appendix III. To examine key federal supports for these projects, we reviewed relevant legislation, previous GAO reports, and agency documents, and we interviewed agency officials. Using our previous reports, we compiled a list of federal supports for these projects, and during our interviews with stakeholders we asked which of the supports were key to the development of projects. The federal programs described in this report reflect those supports that stakeholders considered key for the development of new utility-scale electricity generation projects. We also collected and analyzed agency data on outlays, loan programs, and tax expenditures that supported these projects from fiscal year 2004 through 2013 as follows: Outlays: We collected and analyzed data on outlays, projects, and generating capacity added from USDA, DOE, and Treasury. To assess the reliability of these data, we interviewed individuals with knowledge of them. From this review, we determined that the data were sufficiently reliable for the purposes of this report. Loan programs: We also collected and reviewed data from DOE and USDA on loan programs, projects, and generating capacity added.
We used two methodologies to calculate the cost to the government of loan programs supporting these projects: For DOE’s two loan guarantee programs, we collected net lifetime credit subsidy reestimates, including interest, for all the loan guarantees within our scope—those that supported projects of 1 megawatt (MW) or greater that were connected to the grid with the intent to sell electricity—as of the close of fiscal year 2013. Because only a subset of the loans in DOE’s portfolio is within our scope, our estimates will not match the estimates found in the fiscal year 2014 Federal Credit Supplement to the Budget of the U.S. Government. We added that net lifetime credit subsidy reestimate to the original credit subsidy estimate to calculate the estimated cost to the government of the loan guarantee as of the close of fiscal year 2013. We then summed those estimates to calculate the total cost of DOE’s loan guarantee programs. To assess the reliability of these data, we interviewed agency officials, verified our calculations with agency officials, and made changes as appropriate. From this review, we determined the data were sufficiently reliable for the purposes of this report and agency officials concurred with our results. For USDA’s loan programs, we used USDA’s net lifetime credit subsidy factor reestimates, including interest, for each loan cohort (all loans guaranteed within a fiscal year) from the fiscal year 2014 Federal Credit Supplement to the Budget of the U.S. Government, and applied the reestimated credit subsidy factor to each individual loan. Only a subset of the loans in USDA’s portfolio is within our scope, therefore, our estimates will not match the estimates found in the fiscal year 2014 Federal Credit Supplement to the Budget of the U.S. Government. Because USDA does not calculate estimates on a loan-by-loan basis but does so on a cohort basis, applying a cohort’s subsidy factor to only those loans included in our scope represents an estimate of the expected cost to the government. To assess the reliability of these data, we interviewed agency officials, verified our calculations with agency officials, and made changes as appropriate. From this review, we determined the data were sufficiently reliable for the purposes of our report and agency officials concurred with our results. Tax expenditures: We compiled estimates of forgone revenue to the government from energy-related tax expenditures calculated by Treasury and the congressional Joint Committee on Taxation (JCT) to estimate the cost to the government of supporting these projects. Both Treasury and JCT estimate the revenue loss associated with each tax provision they have identified as a tax expenditure. Treasury’s list is included in the President’s annual budget submission; JCT issues annual tax expenditure estimates as a stand-alone product. Both organizations calculate a tax expenditure as the difference between tax liability under current law and what the tax liability would be if the provision were eliminated and the item were treated as it would be under a “normal” income tax. Revenue loss estimates do not incorporate any behavioral responses and thus do not reflect the exact amount of revenue that would be gained if a specific tax expenditure were repealed. In general, the tax expenditure lists that Treasury and JCT publish are similar, although these lists differ somewhat in the number of tax expenditures reported and the estimated revenue losses for particular expenditures. 
Specifically, we used the most recent tax expenditure estimates for fiscal years 2004 to 2013 developed by Treasury and reported by the Office of Management and Budget in the Budget of the U.S. Government for fiscal years 2006 to 2015. Similarly, we used the most recent tax expenditure estimates developed by JCT and reported in its Estimates of Federal Tax Expenditures reports for fiscal years 2004 to 2012. For fiscal year 2013 data, we used estimates from the 2012 JCT report, which reflect the provisions in federal tax law enacted through January 2, 2013. Although we present the tax expenditure estimates in aggregate, and the sums are reliable as a gauge of general magnitude, they do not take into account interactions between individual provisions. To assess the reliability of these data sets, we reviewed available documentation on the collection of and methods that were used in calculating the estimates. From this review, we found some limitations but determined that they were sufficiently reliable for the purposes of this report. We did not analyze federal supports related to electricity end use or consumption, such as those designed to promote energy efficiency and conservation or to provide low-income energy assistance. In addition, because our scope was limited to supports for the construction of new utility-scale electricity generation projects, we did not collect data on possible electricity-related research and development funding by federal agencies, nor did we examine other financial structures, such as master limited partnerships, real estate investment trusts, or yield cos, which could have been used for the development of these projects. To examine how state and federal supports affect the development of projects, we conducted semistructured interviews with nearly 50 stakeholders, as noted above. We also modeled typical project finance structures—as identified by stakeholders—for hypothetical solar photovoltaic and wind projects using DOE's National Renewable Energy Laboratory's (NREL) System Advisor Model. For information on our analysis, see appendix VIII. We conducted this performance audit from August 2013 to April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Stakeholders Interviewed by GAO

For-profit developers (utilities and independent power producers): Caithness Energy; Evergreen Clean Energy, LLC; Exelon Generation Co.; First Solar, Inc.; NextEra Energy Resources, LLC; Pacific Gas and Electric Company.

Nonprofit developers (electric cooperatives and municipally-owned utilities): American Municipal Power, Inc.; Associated Electric Cooperative Inc.; Brazos Electric Cooperative, Inc.

The contents of this appendix represent an approximation of how survey respondents viewed GAO's survey online. In addition to the answer options provided, all survey respondents had the option to report "No answer" to each of the questions contained in this survey. Please see appendix I for additional information about how GAO administered this survey.
The tables in this appendix reflect answers provided by officials from state regulatory commissions to survey questions about their state's renewable portfolio standard (RPS) or renewable portfolio goal (RPG). Table 4 reflects answers for states and table 5 reflects answers for U.S. territories. States and territories that did not participate in our survey are not included in these tables. Tables 6 and 7 in this appendix reflect survey respondents' answers to questions about whether regulatory commissions regulate retail rates for electricity generation services provided by investor-owned utilities, municipally-owned utilities, and electric cooperatives in their states or territories. States and territories that did not participate in our survey are not included in these tables. For more information about how we administered our survey, see appendix I. Table 8 in this appendix reflects outlays that supported utility-scale electricity generation projects for fiscal years 2004 to 2013. Table 9 reflects the estimated cost to the government of loan programs that supported these projects during this same time period. Table 10 reflects the estimated cost of tax expenditures for these projects during this same time period. Tables 11 through 13 below provide descriptions, by agency, of the federal programs we identified that supported utility-scale electricity generation projects. The tables also provide information on supports that will expire or have expired, in full or in part, due to an expiration of legislative authority or some other expiration under the law as of the spring of 2015, as well as those supports that currently have no expiration. We used the System Advisor Model (SAM) developed by the National Renewable Energy Laboratory (NREL), and, as noted below, a modification of SAM that we developed, to analyze the possible effects of actual and planned reductions in the value of the Energy Investment Credit, also known as the Investment Tax Credit (ITC), or the Energy Production Credit, also known as the Production Tax Credit (PTC), on renewable utility-scale electricity generation projects. These tax credits, among others, represent a key form of federal support for the construction of new renewable utility-scale electricity generation projects, and they can represent a significant portion of the total after-tax returns from investments in renewable energy projects. We used SAM to estimate the magnitude of these effects. This appendix describes our analysis of the role of the ITC and PTC in investments in renewable energy projects, and the effects of changes in the value of these tax credits on those investments by (1) providing an overview of SAM, (2) describing our use of SAM, and (3) providing the key results from SAM. SAM provides energy performance and financing tools that are designed to facilitate investment and analytical decisions in renewable energy projects. These tools can provide information to participants in the renewable energy sector, including policy analysts and developers of renewable energy projects. SAM provides the flexibility to allow the user to input either highly detailed configurations of equipment and financing or generalized assumptions. For example, a solar flat-plate photovoltaic project could be specified in terms of individual solar panel and inverter modules installed with very specific details as to tilt and ability to track the sun, and with component-by-component acquisition and installation costs.
Alternatively, the project can be described in a less specific manner, with an aggregate installation cost per watt. SAM is composed of the following two modules: Performance module: SAM can be used to analyze many aspects of the expected energy performance of large, utility-scale solar and wind projects. SAM also allows users to compare differences in how specific equipment may perform and can estimate equipment conversion efficiencies for specific modules of solar panels, wind turbines, and other equipment that can be used to develop estimates of electricity production. SAM also includes data on typical, as well as historical weather patterns for a wide range of locations. Financial module: In SAM’s financial module, the user specifies values for cost and other financial characteristics, including the value of federal tax credits and accelerated depreciation for renewable energy property. The financial module begins with energy inputs automatically transferred from the performance module—specifically, the estimated amount of annual electricity generated. The financial module assumes that the project earns its revenues from sales of this electricity to an electric utility through a contract referred to as a power purchase agreement (PPA). The financial module generates a cash flow analysis over the life of the project given the specification of revenues, costs, and information about the nature of the investment in the project. The financial module is flexible in that different investment structures can be examined. Specifically, the project finance structures that can be analyzed in SAM include two partnership flip structures—a structure in which the vast majority of project cash and tax benefits and liabilities go to one partner until certain financial conditions are met, at which point they flip so that the other partner receives the vast majority of cash and tax responsibilities— and a structure in which the developer owns the project—referred to as the single owner structure. The user must specify other financial parameters, including desired rates of return for the investors— specifically after-tax internal rates of return (IRR)—project borrowing costs, and how project revenues and tax expenditures will be allocated among partners in the two partnership structures. SAM’s ability to analyze different investment structures is important. As noted elsewhere in this report, some developers have to enter into complex financial partnerships—tax equity partnerships—with third party entities in order for the project to make use of these tax benefits. For this analysis, we examined two such partnerships as follows: All equity partnership flip: In the all equity partnership flip structure, the developer and tax equity partner create a special-purpose entity, formed exclusively to build and operate the project, which is funded entirely by the equity contributions from both partners. The tax equity partner provides the majority of funding for the project in return for nearly all project revenues and tax expenditures (as well as any tax liabilities) generated by the project for a specified amount of time from the beginning of the project. This period of time, which can vary by project, depends on the tax equity partner’s desired rate of return and rules governing the tax expenditures used by the project. 
Once the tax equity partner realizes its required rate of return, the allocation of project proceeds “flips” so that the developer begins receiving the vast majority of project revenues and tax liabilities. Leveraged partnership flip: The leveraged partnership flip is similar to the all equity flip, but substitutes some or all of the project developer’s equity investment with borrowed funds, referred to as debt. Project revenues and tax expenditures are still shared between the partners in the same manner as in the all equity flip; however, the existence of debt means that the project must make principal and interest payments before any revenues can be shared between the partners. Thus, if the project were to run into financial difficulties, the debt- holder would have a senior claim on project proceeds, so the tax equity investor would receive lower-than-anticipated returns. Several stakeholders told us that tax equity partners prefer arrangements in which their returns are not subordinate to debt, so leveraged structures are not commonly used. They also noted that when leveraged structures are used, tax equity investors require higher rates of return in order to compensate them for the higher risk. In contrast to the partnership structures, the single owner structure, which we also modeled, is simpler in that there are no arrangements between the partners that must be negotiated and monitored. The owner makes the equity investment, typically accompanied by debt financing, and receives all available cash proceeds and tax benefits (or liabilities), but, as mentioned, this structure is not attractive to those developers without income tax liabilities sufficient to make use of tax credits. The single owner and leveraged partnership flip structures involve debt- financing. SAM assumes that the project takes on as much debt as possible because debt is typically the least-costly funding source for a project. The maximum level of debt depends on two factors: the amount of cash available for debt service, and the debt service coverage ratio. The amount of cash available for debt service is a pretax measure of earnings defined as total revenue minus total expenses minus the amount set aside for equipment replacement reserves. The debt service coverage ratio is the ratio of cash available for debt service and the amount used for debt service, defined as the sum of principal and interest payments. If the ratio has a value of one, that means all available cash is used for debt service. For a given debt service coverage ratio, the maximum level of debt increases with the amount of cash available for debt service. For a given amount of cash available for debt service, the maximum level of debt decreases with the debt service coverage ratio. The debt service coverage ratio is selected by the user of SAM to represent constraints imposed by the lender. One implication of this aspect of SAM is that, as PPA prices increase, so will the amount of cash available for debt service and thus the share of debt in the project financing. Thus, higher PPA prices are associated with smaller equity investments. The SAM financial module has two possible solution methods. Both solutions link project investments, returns on those investments, and the PPA price. In one solution mode, the module solves for the lowest PPA price that will provide the investor’s return on investment goal. 
For example, if an investment goal—i.e., an after-tax internal rate of return goal—is 12 percent, the module will determine the lowest PPA price meeting that goal. The second solution mode calculates the cash flows that would result from the selection of a particular PPA price. For example, if a PPA price of $0.07 per kilowatt-hour is desired, the module will determine the financial flows that result from that PPA price, including the rate of return an investor would earn with that price. In this section, we discuss our use of SAM. Specifically, we discuss: (1) the types of projects we modeled; (2) our investor rates of return targets; (3) installation and finance costs; and (4) aspects of the partnership structures related to equity shares and capital recovery by the project developer. We believe that our use of SAM to examine the possible effects of changes in the value of tax credits is an appropriate use of the model, and that SAM is sufficiently reliable for the purposes of this report. To make that determination, we met with NREL officials to learn about the development and uses of the module, and we identified peer-reviewed and other publications that used SAM to analyze various energy and financial performance issues related to investments in renewable energy. We also interviewed industry experts who had used the modules and shared our preliminary results with officials from NREL and other industry participants and analysts. Where applicable, we incorporated their comments into our analysis. We modeled a hypothetical solar photovoltaic project and a hypothetical wind project, and we used SAM to examine the role of tax credits in investments in these projects. We located our hypothetical solar photovoltaic project in Phoenix with a generating capacity of 100 MW, and we located our hypothetical wind project in the state of Washington with a generating capacity of approximately 150 MW. The SAM performance module calculated that the solar photovoltaic project would generate 172,975,664 kilowatt-hours of first-year energy for a capacity factor of 19.7 percent. Likewise, SAM calculated that the wind project would generate 530,041,600 kilowatt-hours of first-year energy for an implied capacity factor of 40.4 percent. We believe projects of these sizes and locations represented reasonable examples of utility-scale renewable energy projects. For each of our projects, we assumed that the costs to install and operate the project would not change across the project finance structures we examined; however, total project costs varied because financing costs vary across the investment structures. For both types of projects, we assumed a project life of 20 years, PPA terms of 20 years, and PPA price escalation at the rate of 2.5 percent annually. After specifying project costs, return on investment targets, and other parameters as inputs to the module, which we describe in more detail below, we analyzed module solutions for each project in two tax credit environments and compared the results between the two environments. First, we defined a more-generous tax credit environment—which, in the case of the ITC, was at the current level of 30 percent of the value of a qualified investment and, in the case of the PTC, was at the level of the PTC before it is scheduled to expire on December 31, 2014. In 2013, this value was $0.023 per kilowatt-hour, which was then set to escalate over a 10-year period.
We then defined a less generous tax credit environment in which the ITC is 10 percent of the value of the investment (the level to which the ITC is scheduled to change in 2017) and there is no PTC, since the PTC is scheduled to expire at the close of 2014. We modeled the solar project using the ITC and the wind project using the PTC because, according to stakeholders, utility-scale solar projects generally use the ITC and utility-scale wind projects generally use the PTC. We modeled both projects with accelerated depreciation for renewable energy property. In the more generous tax credit environment, we used SAM to calculate the PPA price that yields the investor's rate of return target. We labeled this case as the base case. We analyzed the less generous tax credit environment in two ways. First, we used SAM to calculate the PPA price that provided the investor's target rate of return in the new environment. This solution PPA price will be higher than the solution in the more generous case because the contribution of the tax credit to total after-tax returns is lower (in the solar photovoltaic project) or nonexistent (in the wind project). If the investor is to receive the same return on investment, the returns from energy revenues must increase to replace those lost with the reduction or elimination of the tax credit. Since the amount of electricity generated by the project does not change, the price at which this electricity is sold is the only mechanism by which higher revenues can be obtained. We labeled this solution as the higher PPA case. In the second solution concept, we maintained the PPA price at the level found in the more generous tax credit environment, and we calculated the lower returns that resulted from holding energy revenues at this level while the returns from the tax credit were reduced or eliminated. We labeled this solution as the lower returns case. For the leveraged partnership and single owner investment structures, the higher PPA case leads to a smaller equity investment by the tax equity investor and the single owner, respectively, because the share of debt financing increases along with increases in the PPA price. In the all equity partnership flip structure, the equity investment share of the tax equity partner does not similarly adjust within the module; rather, it is a parameter chosen by the user of SAM. Because the reduction or elimination of the tax credits would likely reduce the value of these projects to tax investors, and hence their willingness to invest in these projects, we chose to reduce the tax investor's share in the investment from 60 percent in the base case to 30 percent in the less generous environment. We specified rate of return targets for the investors based on information we collected in interviews with stakeholders, which included project developers and owners; attorneys and experts who specialize in project finance; industry trade associations; nongovernmental organizations; banks that provide equity and debt financing; and investor-owned utilities, municipally owned utilities, and electric cooperatives. These stakeholders provided their opinions about investment return targets, including differences that might exist between photovoltaic solar projects and wind projects, and considerations that relate to the different investment structures we analyzed.
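A minimal sketch of the first solution mode, solving for the lowest PPA price that meets a return target, is shown below. It is not SAM: it uses a deliberately simplified single-owner cash flow with no debt or depreciation, and every input (installed cost, annual generation, operating cost, tax rate, credit value and duration, and the target return) is an assumption chosen only for illustration. It does, however, show why removing a per-kilowatt-hour credit pushes the solution price up.

```python
# Toy single-owner cash-flow model: solve for the PPA price that yields a
# target after-tax return, with and without a 10-year per-kWh production credit.
# All numbers below are illustrative assumptions, not SAM inputs or results.

def after_tax_cash_flows(ppa_price, credit_per_kwh):
    capex = 150e6            # equity-funded installed cost, $ (assumed)
    energy = 500e6           # kWh generated per year, held constant (assumed)
    o_and_m = 10e6           # operating cost, $/year (assumed)
    tax_rate = 0.35          # combined tax rate (assumed)
    escalation = 0.025       # annual PPA price escalation (assumed)
    flows = [-capex]
    for year in range(1, 21):
        revenue = ppa_price * (1 + escalation) ** (year - 1) * energy
        credit = credit_per_kwh * energy if year <= 10 else 0.0
        flows.append((revenue - o_and_m) * (1 - tax_rate) + credit)
    return flows

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def solve_ppa(target_return, credit_per_kwh, lo=0.0, hi=1.0):
    # Bisection: find the price at which NPV, discounted at the target rate, is zero.
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(target_return, after_tax_cash_flows(mid, credit_per_kwh)) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

with_credit = solve_ppa(0.10, credit_per_kwh=0.023)
without_credit = solve_ppa(0.10, credit_per_kwh=0.0)
print(f"solution PPA with credit:    ${with_credit:.3f}/kWh")
print(f"solution PPA without credit: ${without_credit:.3f}/kWh")
```

SAM's actual solution also accounts for debt sizing, depreciation schedules, and the partnership allocations described above, so its solution prices differ from this toy calculation.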
We synthesized this information in providing specifications for our hypothetical projects, and while we believe that they represent contemporary features and conditions, target rates of return for actual projects vary. Table 14, below, shows the specific rate of return targets we selected for our analysis. Because the choice of target year is influenced by the duration of available tax benefits, we selected different target year values for the partnership flip structures for the wind project; specifically, we selected a 10-year target when the PTC was available to match the 10-year duration of the PTC. We intended for these projects, although hypothetical, to represent utility-scale projects developed by experienced developers with market-tested features. In particular, we did not intend for the rate of return targets we chose to include a premium that investors might look for as compensation for any extra risk that might result from projects that contained particularly risky components—such as untested technology. Installation costs include the costs of acquiring and installing capital equipment, such as panels and inverters in the solar photovoltaic project and wind turbines in the wind project. Installation cost represents the most important determinant of total project costs, and hence affects the scale of the investment on which investor returns are calculated. We relied on studies of recent trends in solar and wind installations conducted by analysts at the Lawrence Berkeley National Laboratory. We discussed the issue of installed costs with several analysts and those familiar with recent trends in renewable energy project costs. We selected an installed cost per watt value of $2.00 for the solar photovoltaic project and an installed cost per watt value of $1.70 for the wind project. We believe that these values represent reasonable values for installed cost in the environment in which projects are currently being developed. Total project costs vary across investment structures because of differences in the sources of investment funds. In the single owner and leveraged partnership flip cases, the projects are financed with significant amounts of debt, and there are fees and other costs associated with obtaining loans. Likewise, the partnership flip structures include costs to arrange the partnership and to negotiate the rules under which the partners share the proceeds from the project. Additionally, in the partnership flip structures, we assumed that the project pays the developer a development fee. Table 15 provides information on the values we specified for key cost variables. As we mentioned above, in addition to tax credits, federal support is available to renewable energy projects through the use of accelerated depreciation for renewable energy property on certain equipment. We allocated 95 percent of the project costs into this depreciation category, and placed the remaining 5 percent into the 20-year straight line category. In the case of the solar projects, we reduced the tax basis by an amount equal to 50 percent of the dollar value of the ITC to reflect the basis disallowance treatment associated with the tax provisions governing the use of the ITC. We assumed that the project’s taxable income is subject to a state tax rate of 7 percent and a federal (corporate) tax rate of 35 percent. 
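The bookkeeping implied by these parameters can be sketched as follows for the solar project. The order in which the basis reduction and the 95/5 depreciation split are applied, and the exclusion of financing costs from the basis, are simplifying assumptions made only for illustration.

```python
# Depreciable-basis arithmetic for the hypothetical solar project, using the
# installed cost, ITC, and allocation percentages stated in the text.
# This is a sketch of the bookkeeping, not SAM output.

capacity_w = 100e6            # 100 MW expressed in watts
installed_cost_per_w = 2.00   # $/W, from the text
installed_cost = capacity_w * installed_cost_per_w        # $200 million

itc_rate = 0.30
itc = itc_rate * installed_cost                            # $60 million credit

# Tax basis is reduced by half the dollar value of the ITC (assumed to apply
# before the basis is split between depreciation categories).
basis_after_reduction = installed_cost - 0.5 * itc         # $170 million

accelerated_share = 0.95      # share depreciated under the accelerated schedule
straight_line_share = 0.05    # share depreciated straight line over 20 years

print(f"installed cost:             ${installed_cost / 1e6:,.0f} million")
print(f"ITC at 30 percent:          ${itc / 1e6:,.0f} million")
print(f"basis after 50% reduction:  ${basis_after_reduction / 1e6:,.0f} million")
print(f"  accelerated portion:      ${accelerated_share * basis_after_reduction / 1e6:,.1f} million")
print(f"  20-year straight line:    ${straight_line_share * basis_after_reduction / 1e6:,.1f} million")
```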
In describing partnership flip structures, we mentioned the general rule that the vast majority of cash proceeds and tax benefits early in the project life are allocated to the tax investor, and that once the tax investor meets its rate of return target, the allocation of the proceeds flips and the vast majority of them flow to the developer. One exception to this rule concerns the possibility of capital recovery by the developer in the all equity partnership flip. The SAM all equity partnership flip permits the developer to recover some or all of its equity investment in the early years of a project by receiving all of the cash proceeds for some period of time or until it recovers its equity investment, at which point the bulk of cash proceeds reverts to the tax investor until the time at which the investor's rate of return target is met. The greater the share of cash that goes to the developer through capital recovery, the less cash goes to the tax investor. Thus, a more generous capital recovery selection in SAM increases the developer's after-tax returns and reduces the tax investor's after-tax returns. This in turn means that a higher solution level of the PPA price is required to meet the investor's rate of return target if the developer's capital recovery increases. Looked at another way, there can be different combinations of developer capital recovery and PPA prices that will meet the tax investor's rate of return target, but they will result in different rates of return to the developer. Because of our analytical focus on the effects of changes in tax credits, we wanted to hold both investor and developer returns constant when looking at the change in the solution value of the PPA price. That is, we wanted the solution PPA price in the higher PPA case to increase by no more than was necessary to meet both partners' investment targets. To do this in the case of the all equity partnership flip, we modified the SAM financial module so that we could define an explicit after-tax rate of return target for the developer and meet this target by modifying the capital recovery feature in SAM. We specified the developer's target year to be the end of the project life, and we specified a rate of return target of 10 percent. Analytically, things are somewhat different in the case of the leveraged partnership flip, even though we wanted to hold the developer's returns constant in the higher PPA case. Given the small equity investment by the developer and the presence of a relatively large development fee, both occurring at the beginning of the project, we chose to express the developer's returns in terms of the net present value of the total after-tax returns, a dollar-denominated value, rather than in rate of return terms. To do this, we adjusted the size of the development fee paid to the developer so that the developer's returns, defined in net present value terms, did not increase with the higher solution PPA price. We used a discount rate of 10 percent to make this calculation; this is the value we selected as the developer's rate of return target in both the all equity partnership flip and single owner structures. Another aspect of the all equity partnership flip concerns the specification of the ownership shares of the partners. In the current environment, tax investors are generally the majority partners and, based on our interviews with stakeholders, we specified that the tax investor would have a 60 percent ownership share in the more generous tax credit environment.
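A minimal helper for the developer-return calculation described here, discounting hypothetical developer cash flows (an upfront development fee and equity outlay followed by a small annual cash share) at the 10 percent rate, might look as follows; all of the cash-flow amounts are placeholders.

```python
# Net present value of a developer's after-tax cash flows at a 10 percent
# discount rate, as used for the leveraged partnership flip comparison.
# The cash-flow amounts are hypothetical placeholders.

def npv(rate, flows):
    """flows[0] occurs today; flows[t] occurs t years from now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

development_fee = 6.0e6            # received up front (assumed)
equity_outlay = -2.0e6             # small developer equity stake (assumed)
annual_share = [0.2e6] * 20        # developer's annual cash share after the fee (assumed)

developer_flows = [development_fee + equity_outlay] + annual_share
print(f"developer NPV at 10 percent: ${npv(0.10, developer_flows) / 1e6:,.2f} million")
```

Because the fee is received up front, raising or lowering it shifts this net present value dollar for dollar, which is consistent with the approach of adjusting the fee so that the developer's net present value does not rise with the higher solution PPA price.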
However, information from our interviews and studies by industry analysts also suggests that tax investors would be less willing to make investments at the same scale if the value of the tax benefits is reduced. We specified that the tax investor would have a 30 percent ownership share in the less generous cases we modeled (see table 16). As can be seen from the material presented in this section, SAM requires the user to specify values for many factors that can influence the cost of investments and the returns to those investments. We interviewed many knowledgeable analysts and market participants to develop the values we used for our analysis, and we believe them to be analytically conservative. We recognize that the selection of different values for key factors will lead to different analytical results. Using SAM, we estimated that reducing or eliminating the value of tax credits increases the required revenues provided through PPAs by approximately 20 to 25 percent in the case of the ITC and approximately 30 to 60 percent in the case of the PTC. The module results suggest that the contribution of the tax credits to total after-tax returns is substantial and that, if developers and investors are to continue to meet their investment targets, projects that appear to have been financially viable in an environment with more generous tax credits would not be viable with less tax credit support without an increased contribution from energy revenue through a higher PPA price or reduced return on investment targets. For example, in the single owner case, the ITC provides approximately half of total after-tax returns when the value of the ITC is 30 percent of a qualified investment and about 23 percent of total after-tax returns when the ITC is reduced to 10 percent. Likewise, for the wind project, the net present value of the PTC is almost half of total after-tax returns when the PTC is in place and, of course, makes no contribution to after-tax returns when the PTC is eliminated. While the increases in calculated solution PPA prices were somewhat smaller in dollars per kilowatt-hour for the wind project than for the solar project, when they were expressed in percentage terms, the wind project solution PPA price increases were much larger. The value of the PTC started at $0.023 per kilowatt-hour and increased over time. For the wind project, the base case solution values were below $0.05 per kilowatt-hour in each of the investment structures, so the magnitude of the tax credit was about half or more of the solution PPA price. In the lower returns cases, the projects with debt financing took on the same level of debt as in the base case. This meant that the equity investments remained large, but the reduction or elimination of the tax credits reduced or eliminated their contributions to total after-tax returns. In the all equity partnership flip structures, the tax equity investor met its rate of return target, but the developer's returns were substantially lower—35 percent lower in the solar project and over 70 percent lower in the wind project. In the leveraged partnership flip structure, the project cash flows never flipped, which meant that the tax investor was not able to meet its target rate of return even by the end of the project. We do not characterize the PPA price or investment return changes shown in these tables as predictions of what will happen to electricity prices or to potential investments in and returns from utility-scale renewable energy projects.
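The relationship between the PTC and the wind project's base-case prices can be restated with simple arithmetic; the base prices below are illustrative values under $0.05 per kilowatt-hour, not the report's solution prices.

```python
# For base PPA prices below $0.05/kWh, a $0.023/kWh credit amounts to roughly
# half or more of the price, which is why percentage increases are large for wind.

ptc = 0.023  # $/kWh, initial PTC value
for base_ppa in (0.040, 0.045, 0.050):   # illustrative sub-$0.05 base prices
    print(f"base PPA ${base_ppa:.3f}/kWh -> PTC is {ptc / base_ppa:.0%} of the price")
```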
Some investment structures may become less favored, and other structures may become more favored, in response to a change in the level or form of federal support. Nonetheless, we think that the results indicate that a reduction in federal support of renewable energy projects would put upward pressure on the level of PPA prices and downward pressure on the returns that could be reasonably expected by developers. As such, reducing the ITC or eliminating the PTC could result in a combination of the effects suggested in our modeling. Specifically, to compensate for the decline in federal support, developers might be willing to accept lower rates of return, and states might be willing to require utilities and other retail service providers to pay higher electricity prices. However, there may be limits to the extent to which these effects could offset a reduction in federal supports. Placed in a broader context, the limited willingness of utilities and their regulators to agree to significantly higher prices will likely constrain the ability of developers to maintain their returns on investment by negotiating PPAs with significantly higher prices. Similarly, to the extent that project lenders and investors have alternative investment opportunities, it seems unlikely that they would make financing cost concessions on a scale that would offset the reduction in federal support. Developers themselves are likely to have alternative outlets, either in the energy sector or elsewhere, in which to direct investments if expected returns from renewable energy projects are reduced to unacceptably low levels. Collectively, the constraints faced by project developers may lead to a reduction in the level of investment in renewable energy projects if reductions in the level of federal support of the magnitude examined here are observed. Tables 17, 18, and 19 present the module's results for the hypothetical solar project under the three ownership structures. In table 17, we express owner returns, and in table 18, we express developer returns in terms of after-tax internal rates of return. In table 19, we express developer returns in terms of the present value of after-tax returns flowing to the developer. Tables 20, 21, and 22 present the module results for the hypothetical wind project under the three ownership structures. In table 20, we express owner returns, and in table 21, we express developer returns in terms of after-tax internal rates of return. In table 22, we express developer returns in terms of the present value of after-tax returns flowing to the developer. In addition to the individual named above, Jon Ludwigson (Assistant Director), Stephen Brown, Marcia Carlsen, Marissa Dondoe, Tanya Doriss, Cindy Gilbert, Carol Henn, Mitchell Karpman, Mary Koenen, Alison O'Neill, Dan Royer, Kelly Rubin, MaryLynn Sergent, Anne Stevens, and Barbara Timmerman made key contributions to this report. | The states and the federal government have supported the development of electricity generation projects in a variety of ways. In recent years, state and federal supports have been targeted toward renewable energy sources, such as solar and wind, although there have been some supports for projects using traditional sources—natural gas, coal, and nuclear. GAO was asked to examine state and federal supports for the development of utility-scale electricity generation projects—power plants with generating capacities of at least 1 MW that are connected to the grid and intend to sell electricity—for fiscal years 2004 through 2013.
This report (1) identifies key state supports for these projects; (2) examines key federal support provided through outlays, loan programs, and tax expenditures for these projects; and (3) examines how state and federal supports affect the development of new renewable projects. GAO analyzed relevant legislation, agency outlay and loan program data, and interviewed stakeholders, including project developers and experts. GAO also surveyed state regulatory commissions about state policies. In addition, GAO modeled the impact of reducing federal tax expenditures on project finances. Key state supports, in the form of state policies, aided the development of utility-scale electricity generation projects—particularly renewable ones—in most states, for fiscal years 2004 through 2013. For example, most states have a renewable portfolio standard (RPS) mandating that retail service providers obtain a specific amount of the electricity they sell from renewable energy sources, which creates additional demand for renewable energy. In addition, most states supported new renewable and traditional projects through regulatory policies that set electricity prices, which allowed utilities to recover the costs of building new projects or purchasing electricity from them. Federal financial supports aided the development of new projects, but limited data hinder an understanding of the effectiveness of tax expenditures. From fiscal year 2004 through 2013, programs at the Departments of Agriculture (USDA), Energy (DOE), and the Treasury (Treasury) provided supports including outlays, loan programs, and tax expenditures. For example, one Treasury program provided payments in lieu of tax credits and accounted for almost all of the $16.8 billion in outlays that supported 29,000 megawatts (MW) of new renewable generating capacity. Tax expenditures accounted for an estimated $13.7 billion in forgone revenue to the federal government for renewable projects and $1.4 billion for traditional projects. The two largest tax expenditures GAO examined—the Investment Tax Credit (ITC) and the Production Tax Credit (PTC)—supported renewable projects and accounted for $11.5 billion in forgone revenue. However, the total generating capacity they supported is unknown because the Internal Revenue Service (IRS) is not required to collect project-level data from all taxpayers claiming the ITC or report the data it does collect, nor is it required to collect project-level data for the PTC. IRS officials stated that IRS is unlikely to collect additional data on these tax credits unless it is directed to do so. Since 1994, GAO has encouraged greater scrutiny of tax expenditures, including data collection. Without project-level data on the ITC and PTC, Congress cannot evaluate their effectiveness as it considers whether to reauthorize or extend them. Developers combined state and federal supports to finance renewable projects, and reducing these supports would likely reduce development of such projects. Demand created by state RPSs allowed developers of renewable projects to obtain power purchase agreements (PPA)—long-term contracts to sell power at specific prices. Federal supports, in turn, lowered developers' costs to build renewable projects, which allowed them to offer lower PPA prices than they otherwise could have. According to most stakeholders, these lower prices were then passed on to retail customers. Overall, if the level of support is reduced, fewer projects would likely be built. 
For example, GAO's modeling suggests that reducing the ITC or eliminating the PTC would likely reduce the number of renewable projects built because developers' returns would decline unless PPA prices increased to compensate for the reduction in federal support. The extent to which development would decrease depends on how states respond to reduced federal support and the associated increase in prices. For example, many states limit the amount by which retail prices can increase, which limits PPA price increases and could reduce development. Congress should consider directing IRS to (1) collect and report project-level data from all taxpayers who claim the ITC and (2) collect and report similar data for taxpayers who claim the PTC. DOE, Treasury, and USDA did not provide formal comments in response to a draft of this report.
In October 1990, the Federal Accounting Standards Advisory Board (FASAB) was established by the Secretary of the Treasury, the Director of the Office of Management and Budget (OMB), and the Comptroller General of the United States to consider and recommend accounting standards to address the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information. Using a due process and consensus building approach, the nine-member Board, which has since its formation included a member from DOD, recommends accounting standards for the federal government. Once FASAB recommends accounting standards, the Secretary of the Treasury, the Director of OMB, and the Comptroller General decide whether to adopt the recommended standards. If they are adopted, the standards are published as Statements of Federal Financial Accounting Standards (SFFAS) by OMB and by GAO. In addition, the Federal Financial Management Improvement Act of 1996, as well as the Federal Managers’ Financial Integrity Act, require federal agencies to implement and maintain financial management systems that will permit the preparation of financial statements that substantially comply with applicable federal accounting standards. Issued in December 1995 and effective beginning with fiscal year 1997, SFFAS No. 5, Accounting for Liabilities of the Federal Government, requires the recognition of a liability for any probable and measurable future outflow of resources arising from past transactions. The statement defines probable as that which is likely to occur based on current facts and circumstances. It also states that a future outflow is measurable if it can be reasonably estimated. The statement recognizes that this estimate may not be precise and, in such cases, it provides for recording the lowest estimate and disclosing in the financial statements the full range of estimated outflows that are likely to occur. The liability disclosure requirements stated in SFFAS No. 5 apply to several types of assets, including property, plant, and equipment (PP&E) and operating materials and supplies. SFFAS No. 3, Accounting for Inventory and Related Property, defines operating materials and supplies as consisting of tangible personal property to be consumed in normal operations. SFFAS No. 6, Accounting for Property, Plant, and Equipment, which is effective beginning in fiscal year 1998, deals with various accounting issues pertaining to PP&E. This statement establishes several new accounting categories of PP&E, collectively called stewardship PP&E. Other PP&E is referred to as general PP&E. One of the new stewardship categories—federal mission PP&E—is defined as tangible items owned by a federal government entity, principally DOD, that have no expected nongovernmental use, are held for use in the event of emergency, war, or natural disaster, and have an unpredictable useful life. Federal mission PP&E, which includes ships, submarines, aircraft, and combat vehicles, is a major part of DOD’s total PP&E. Recently, FASAB reviewed the asset category for ammunition and reiterated its position that ammunition should be classified as operating materials and supplies rather than federal mission PP&E. Although the asset categories for financial reporting may be different, the SFFAS No. 5 requirements for recording the disposal liability are the same for operating materials and supplies and mission assets. 
We undertook this review to assist DOD in its efforts to meet the new federal accounting standard, SFFAS No. 5, and because of our responsibility to audit the federal government's consolidated financial statements beginning with fiscal year 1997. Our objectives were to determine (1) the status of DOD's efforts to implement the new federal accounting standard for disclosure of liabilities, such as ammunition disposal costs, and (2) whether the ammunition disposal liability was probable and whether a reasonable estimate of the minimum disposal liability for ammunition could be made. To determine if the liability is probable, we reviewed financial accounting standards, environmental laws and regulations, and DOD manuals that address the handling and disposal of hazardous material. We also reviewed congressional committee reports that requested ammunition disposal cost information. In addition, we interviewed DOD officials responsible for financial reporting and those at the service level responsible for program management, ammunition demilitarization, and disposal. To determine if a disposal liability was reasonably estimable, we obtained information on ammunition inventories as of September 30, 1996, the most recent data available at the time of our review, and on what it costs to demilitarize and dispose of ammunition. To determine the availability of ammunition inventory information, we interviewed service officials responsible for inventory management and obtained information on the services' ammunition inventory systems as well as their ammunition inventories as of September 30, 1996. To determine the availability of ammunition demilitarization and disposal cost information, we obtained information on the volume, nature, and cost of the disposal activities of the Army Industrial Operations Command (DOD's single manager for ammunition), as well as information on the Joint Ordnance Commanders Group's munitions demilitarization studies. We also obtained information on the Navy's ammunition disposal activities because the Navy disposes of certain Navy-specific items, such as underwater torpedoes and depth charges. In addition, we interviewed service officials responsible for the accounting and reporting of demilitarization and disposal costs. We analyzed the DOD Joint Ordnance Commanders Group, Munitions Demil/Disposal Subgroup's 1995 and 1996 reports on demilitarization and disposal cost information. The Subgroup's 1996 Munitions Demilitarization Study identified disposal costs for 23 munition categories, referred to as Munition Items Disposition Action System (MIDAS) families. We did not independently verify the inventory and cost data furnished to us. We conducted our review between November 1996 and November 1997 in accordance with generally accepted government auditing standards. Appendix I lists the primary locations where we performed our review. We provided a draft of this report to the Secretary of Defense for review and comment. We received oral comments, which are discussed in the "Agency Comments and Our Evaluation" section. As we recently stated in our report on DOD's aircraft disposal liability, as of the end of the fiscal year on September 30, 1997, DOD had not established a policy to implement SFFAS No. 5. On September 30, 1997, the DOD Comptroller's office posted revisions to the electronic version of DOD's Financial Management Regulation (FMR) to include SFFAS No. 1 through 4, but SFFAS No. 5 was not included.
In commenting on a draft of our aircraft disposal liability report, DOD agreed with our recommendation that SFFAS No. 5 be incorporated in the FMR. In addition, the DOD Comptroller, who is responsible for developing and issuing guidance on accounting standards, and the Under Secretary of Defense (Acquisition and Technology), who is responsible for the operational activities associated with ammunition disposal, have not provided implementation guidance to the services to assist them in estimating the disposal costs for ammunition. Service officials stated that they are reluctant to estimate a liability for their ammunition disposal until they receive DOD-wide guidance. Unless prompt action to implement this standard is taken, it is unlikely that DOD’s or the military services’ fiscal year 1997 financial statements will include an estimate of ammunition disposal costs as required. One of the key criteria cited in SFFAS No. 5 for a liability to be reported is that a future payment is probable—that is, the future outflow of resources is likely to occur. Although, in some cases, the likelihood of a future outflow may be difficult to determine and an entity may have difficulty deciding whether to record a liability for certain events, this is not the case for DOD. DOD continually disposes of ammunition and has an amount for disposal costs in its annual budget. According to the Industrial Operations Command’s Associate Director for Demilitarization, during the last 5 years, DOD has spent over $370 million to dispose of ammunition. Thus, because it is known at the time of acquisition that costs will be incurred for ammunition disposal, the probability criterion for recording a liability is met. The Congress has also recognized that disposal will occur and has emphasized the importance of accumulating these costs and considering this information. In the past 3 years, congressional committees have specifically asked for information related to ammunition disposal costs. The Senate Committee on Appropriations, in its report on the fiscal year 1995 Defense Appropriations bill, directed DOD to develop a plan for the disposal of rocket motors, ammunition, and other explosives, including information on alternative ammunition disposal methods and related costs. The next year, the House Committee on Appropriations, in its report on the fiscal year 1996 Defense Appropriations bill, expressed concern about the Army’s continuing practice of demilitarizing ammunition by open-air burning and detonation. The committee requested an analysis that included the costs and savings of recycle and reuse technologies, the revenue that could be derived from the sale of recycled and reusable products, and the ultimate clean-up costs for open-air burning and detonation sites. Most recently, the Fiscal Year 1997 National Defense Authorization Act required the establishment of a 5-year program for the development and demonstration of environmentally compliant technologies for the disposal of ammunition, explosives, and rockets. The Army Industrial Operations Command (IOC), DOD’s single manager for ammunition, has an ongoing program to dispose of ammunition from all services. While each service maintains its own ammunition inventory management systems and technically owns the ammunition, IOC manages procurement and disposal of ammunition items for all services. 
IOC supplies or ships ammunition from production and storage sites to field installations and units; manages ammunition disposal programs; oversees efforts to upgrade ammunition already in service; and manages the Army’s worldwide ammunition stockpile. IOC is responsible for the disposal of about 96 percent of all ammunition. The exceptions are generally service-unique ammunition items, like Navy torpedoes, or items that are more practical to dispose of at their current installation or depot or by a contractor. The reasons for disposal of ammunition are obsolescence, deterioration, and excess supply. Obsolescence occurs when the weapons systems that use the ammunition are phased out, thus eliminating the need for the ammunition. Ammunition becomes excess when the quantities on hand exceed what is needed due to factors such as downsizing of the military forces. Deterioration can result from age and long-term storage conditions that render the ammunition unusable. The two overall types of ammunition disposal methods are (1) destructive processes that either explode or incinerate the ammunition and (2) resource recovery and recycling processes that remove the explosive components through a variety of methods and allow their reuse. Ammunition under IOC’s control is tracked by the Commodity Command Standard System. This system accounts for ammunition in depots, but does not include ammunition issued to military field units. Army field ammunition stocks are accounted for by the World-wide Ammunition Reporting System. Systems used by the other services are the Air Force’s Combat Ammunition System, the Navy’s Conventional Ammunition Information Management System, and the Marine Corps Ammunition and Accounting Reporting System. We requested a detailed listing of the year-end ammunition inventory from each of the services and obtained the inventory balance from the information they provided to us. The September 30, 1996, ammunition inventory balance is shown in table 1, which contains the most recent data available. The ammunition inventory serves as the basis for estimating the disposal liability because ammunition used in training and operations is generally replaced to maintain the inventory at certain levels. As a result, training and operational usage may not reduce the total liability for ammunition disposal. The second key criterion in SFFAS No. 5 for reporting of a liability is that an amount be reasonably estimable. In the past, DOD has reported its estimated ammunition disposal costs using several methodologies that range from a “rule of thumb” figure to more detailed cost analyses based on specific types of ammunition. Achieving a reasonable estimate is possible using the existing detailed analyses as a starting point. A number of key factors would have to be considered to ensure that the development of the ammunition disposal cost estimate is as accurate as possible. DOD has used $1,000 per ton as a “rule of thumb” for ammunition disposal costs. For example, in May 1995 congressional testimony, the Deputy for Ammunition, U.S. Army, stated that “rule of thumb is that it costs about $1,000 a ton, hopefully a little less, to get rid of unserviceable ammunition.” According to officials in the Army’s Office of the Deputy for Ammunition, the $1,000 estimate was calculated for the Army’s Conventional Ammunition Demilitarization Master Plan issued in May 1993. 
They stated that the $1,000 estimate was an average cost based on actual disposals during the 2 years preceding the master plan’s issuance. As part of its response to Senate and House requests for more detailed disposal cost information in 1995 and 1996, DOD compiled historical cost information that could be used as a starting point for developing a reasonable estimate of the ammunition disposal cost liability. In response to the fiscal year 1995 Senate Committee on Appropriations report requesting information on alternative disposal procedures and costs for ammunition and other explosives, the Joint Ordnance Commanders Group (JOCG), Munitions Demil/Disposal Subgroup, analyzed the ammunition stockpile in 1995 using “families” of items for disposal. The MIDAS families are based on the materials contained in the ammunition, methods of assembly/disassembly, preferred demilitarization methods, and any unique features. In its September 1995 report, JOCG provided an estimate of the tonnage requiring disposal for each MIDAS family of ammunition. In response to the 1996 House Committee on Appropriations report that requested additional information, the JOCG subgroup formed an ad hoc working group to study the comparative benefits and costs of different disposal methods using the MIDAS families as the basis for collecting and summarizing the cost per ton for the various alternative disposal procedures. The cost information was collected for both government installations and contractor facilities that had performed ammunition disposal during fiscal years 1994 to 1996. For the 1996 study, the JOCG working group collected historical cost information on the ammunition disposed of during fiscal years 1994 through 1996. It grouped the disposal actions by MIDAS family and calculated an average disposal cost per ton based on the facility type—government-owned and operated facilities, government-owned facilities operated by a contractor, and contractor-owned and contractor-operated facilities. For 14 of the MIDAS families for which costs were available, the demilitarization was performed by a single category of facility and thus yielded a single average cost. The other eight MIDAS families involved demilitarization at more than one type of facility, thus yielding a range of average costs. See appendix II for a listing of the MIDAS families and the average disposal costs that were developed based on the working group’s report. Although a number of critical factors would have to be considered, including the reliability of the historical data as discussed in the following section, the cost estimates developed for the MIDAS families can be used as a starting point to estimate the ammunition disposal liability. The national stock numbers (NSNs) of ammunition items throughout DOD have been associated with the MIDAS families. For example, for the Army, the U.S. Army Defense Ammunition Center and School staff provided us with a data file that translated the specific Army ammunition NSN line items into their respective MIDAS families. Using Army’s fiscal year 1996 inventory amount of 2,144,995 tons (which excludes 14,672 tons of ammunition for which MIDAS cost information was not available) and the average disposal cost data developed by the JOCG working group for the MIDAS families, we estimated that the Army’s ammunition disposal liability could range from about $1.3 billion to $2.1 billion. 
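The arithmetic behind such a family-based estimate can be sketched as follows. The per-family tonnages and cost ranges below are hypothetical placeholders chosen only so that the totals land near the figures cited in the text; the actual values come from the services' inventory systems and the JOCG working group's 1996 study summarized in appendix II.

```python
# Sketch of the MIDAS-family liability arithmetic: multiply the tons in each
# family by that family's low and high average disposal cost per ton, then sum.
# Family tonnages and costs are hypothetical placeholders, not appendix II data.

inventory = {
    # family: (tons, low $/ton, high $/ton)
    "small caliber":                        (250_000,   350,   600),
    "high explosive projectiles/warheads":  (900_000,   550,   950),
    "bulk propellants and black powder":    (500_000,   400,   800),
    "ICM/CBUs and submunitions":            (495_000, 1_000, 1_500),
}

low_total = sum(tons * low for tons, low, high in inventory.values())
high_total = sum(tons * high for tons, low, high in inventory.values())
total_tons = sum(tons for tons, low, high in inventory.values())

print(f"estimated liability range: ${low_total / 1e9:.1f} billion to ${high_total / 1e9:.1f} billion")
print(f"rule-of-thumb comparison:  ${total_tons * 1_000 / 1e9:.1f} billion at $1,000 per ton")
```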
Marine Corps, Navy, and Air Force officials verified that information was available to perform a similar analysis of their respective ammunition inventories using NSNs and the MIDAS cost families. In using the MIDAS analysis to estimate the disposal liability, several additional factors would have to be considered to refine the estimate. Although SFFAS No. 5 is clear that the disposal liability is to be a reasonable estimate and that the disclosure may be presented as a range, the accuracy and precision of the range will affect its usefulness to decisionmakers. Key estimation factors to be considered include the following.

Data reliability - Although we did not verify the underlying data supporting the historical cost data developed for the MIDAS families, a limited review yielded a number of discrepancies. For example, one schedule indicated that a contractor had destroyed 1,512 tons of small caliber ammunition in fiscal year 1996 for $1,512, or $1 per ton. The same schedule showed that the same contractor had destroyed the same type of ammunition in fiscal year 1995 for $728 per ton. In addition, the schedule showed that two government-owned, government-operated facilities had destroyed small caliber ammunition with costs of $0.10 per ton at one facility and $3,327 per ton at the other facility. Such anomalies would have to be addressed before relying on these data as a basis for estimating the total disposal liability. In addition, the appropriateness of averaging the actual disposal costs by type of facility would have to be considered. For example, the study concluded that the disposal cost for white phosphorus was $1,231 per ton. The underlying data show that white phosphorus was destroyed by one government-owned, government-operated facility in six separate batches in fiscal years 1994 through 1996. The reported costs per ton for each batch ranged from $646 to $14,208 and were averaged to arrive at the $1,231 figure. If it is determined that averaging does not appropriately reflect the true range of these costs, alternatives may include calculating a disposal liability range for each MIDAS family based on factors such as type of facility or disposal method.

Data completeness - As stated in appendix II, data were not available for six of the MIDAS families because (1) the types of ammunition were not disposed of during the period studied or (2) the ammunition was disposed of by a specific service and the cost data were not available to the working group. DOD would have to consider the significance of these costs and determine whether cost data were collected by individual services. In addition, the MIDAS costs did not include an amount for packaging, crating, handling, and transportation, nor do they reflect the value of any scrap recovered from the demilitarization process. JOCG determined that packaging, crating, handling, and transportation costs ranged from $66 per ton to $228 per ton. Also, although the working group's 1996 report indicated that storage costs were part of the disposal costs, the group did not develop any storage cost estimates.

Updated information - Finally, estimates using the MIDAS costs would have to be updated periodically to take into account the use of different disposal methods and locations, current costs, and other factors that would affect costs. Such factors would have to be considered by DOD as it develops its policy for determining its ammunition disposal liability.
DOD has pointed out that the total disposal liability estimate for ammunition will result in a significant liability—much of which would not require budget authority in the current year. Thus, one way to provide a proper context for this reported liability and make it more meaningful to decisionmakers would be to provide a breakdown of the liability in a footnote to the financial statements showing the liability based on the services' estimates of the ammunition scheduled to be taken out of service. Table 2 is a simplified illustration of how the ammunition disposal liability for ammunition managed by the single manager could be related to the time period in which it is taken out of service. For the purposes of this illustration, we used $1,000 per ton as the basis for estimating the disposal liability. As discussed previously, in actual practice, the services should refine this figure to reflect the expected cost experience within the time period. We applied the $1,000 per ton disposal liability to (1) tonnage estimates reported in the JOCG September 1995 study that projected the quantity of ammunition that would be turned over to the single manager for disposal in fiscal years 1995 through 2001 and (2) the remaining inventory after subtracting these quantities. Such information could provide important context for congressional and other budget decisionmakers on the total liability by showing the annual impact of potentially needed budget authority for ammunition expected to be transferred for disposal. Furthermore, using time periods to present data consistent with budget justification documents, such as DOD's Future Years Defense Program, provides a link between budgetary and accounting information, one of the key objectives of the CFO Act. Ammunition disposal costs are both probable and estimable and, therefore, meet the criteria stated in SFFAS No. 5 for reportable liabilities. In commenting on our recent report on the aircraft disposal liability, DOD agreed to implement SFFAS No. 5 and to record the disposal liability related to aircraft, which are categorized as federal mission assets. DOD also agreed that the DOD Comptroller and Under Secretary of Defense (Acquisition and Technology) should promptly issue implementing guidance to assist the services in estimating the aircraft disposal liability. Because the same requirements apply to ammunition, which is considered part of the operating material and supply asset category, similar action is necessary to ensure that the ammunition disposal liability is properly recorded. Development of a reasonable estimate of the ammunition disposal liability, which addresses the key factors identified in this report, will not only help ensure that the financial statement disclosure requirements are met but also provide important information to the Congress and other decisionmakers as they continue to assess ammunition disposal methods and related costs. We recommend that you ensure that (1) the DOD Comptroller and the Under Secretary of Defense (Acquisition and Technology) promptly issue joint implementing guidance for the services on the SFFAS No. 5 requirements for recognition of a liability for ammunition disposal costs, with this guidance addressing the key liability estimation factors identified in this report, including data reliability, data completeness, and the need for updated information, and (2) the DOD and military service comptrollers include the estimated ammunition disposal liability in DOD's fiscal year 1997 financial statements.
In commenting on a draft of this report, Department of Defense officials concurred with our recommendations that joint implementing guidance be issued promptly on the SFFAS No. 5 requirements for recognition of a liability for ammunition disposal costs. In addition, DOD officials stated that current disposal cost estimates can be reasonably determined for ammunition types that have been in the active inventory for some period of time. However, DOD officials stated that the development of disposal cost estimates for all types in the inventory and the development and coordination of standard application procedures and reporting guidance would take time to complete. For this reason, Defense officials stated that it will not be feasible to report the estimated ammunition disposal liability in the DOD’s financial statements prior to fiscal year 1998. SFFAS No. 5 was issued almost 2 years ago to allow agencies ample time to develop implementing policies and procedures prior to its fiscal year 1997 effective date. As stated in this report, information is available on all types of ammunition disposal processes to develop a reasonable estimate of these costs. Such cost information can be applied to all types of ammunition, regardless of the length of time the ammunition has been in the active inventory. Such an estimate need not be precise—SFFAS No. 5 permits the reporting of a range. Accordingly, DOD, with a concentrated effort, can develop an estimate of ammunition disposal costs for its fiscal year 1997 financial statements. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the Subcommittee on Management, Information and Technology, and to the Director of the Office of Management and Budget. We are also sending copies to the Under Secretary of Defense (Comptroller), the Air Force Assistant Secretary for Financial Management and Comptroller, the Army Assistant Secretary for Financial Management and Comptroller, the Navy Assistant Secretary for Financial Management and Comptroller, the Under Secretary of Defense (Acquisition and Technology), the Deputy Under Secretary of Defense for Environmental Security, and the Acting Director, Defense Finance and Accounting Service. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you have any questions concerning this letter. Major contributors to this letter are listed in appendix III. We contacted personnel and conducted work at the following locations. DOD Headquarters, Pentagon, Washington, D.C. Air Force Combat Support Division, Air Force Headquarters, Pentagon, Washington, D.C. 
Type of ammunition (MIDAS family): High explosive "D" (ammunition that contains ammonium picrate); High explosives for improved ammunition/cluster bomb units (ICM/CBUs) and submunitions; High explosive projectiles and warheads; Bulk propellants and black powder; Inert (training material).

John R. Richter, Auditor-in-charge; Stewart O. Seman, Evaluator; Lynn M. Filla-Clark, Auditor; Frederick P. Schmidt, Evaluator.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

| Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) implementation of the requirement to disclose the liability associated with the disposal of various types of assets, specifically conventional ammunition. GAO noted that: (1) DOD has not yet implemented the federal accounting standard that requires recognizing and reporting liabilities such as those associated with ammunition disposal, nor has it provided guidance to the military services; (2) in commenting on GAO's recent report on the aircraft disposal liability, DOD agreed with GAO's recommendation to incorporate statements of federal financial accounting standard (SFFAS) No. 5 in its Financial Management Regulation; (3) ammunition disposal is an ongoing process that results from materials with a limited shelf-life or that otherwise will not be used in operations, and the cost can be reasonably estimated; (4) accordingly, these activities meet the criteria for a reportable liability; (5) the cost information that DOD developed in response to requests from congressional committees can be used as a starting point to estimate the ammunition disposal liability; and (6) a number of additional factors will have to be addressed, including data reliability, data completeness, and the need for periodic updates.
Interoperable communications is not an end in itself. Rather, it is a necessary means for achieving an important goal—the ability to respond effectively to and mitigate incidents that require the coordinated actions of first responders, such as multi-vehicle accidents, natural disasters, or terrorist attacks. Public safety officials have pointed out that needed interoperable communications capabilities are based on whether communications are needed for (1) "mutual-aid responses" or routine day-to-day coordination between two local agencies; (2) extended task force operations involving members of different agencies coming together to work on a common problem, such as the 2002 sniper attacks in the Washington, D.C. metropolitan area; or (3) a major event that requires response from a variety of local, state, and federal agencies, such as major wildfires, hurricanes, or the terrorist attacks of September 11, 2001. A California State official with long experience in public safety communications breaks the major event category into three separate types of events: (1) planned events, such as the Olympics, for which plans can be made in advance; (2) recurring events, such as major wildfires and other weather events, that can be expected every year and for which contingency plans can be prepared based on past experience; and (3) unplanned events, such as the September 11th attacks, that can rapidly overwhelm the ability of local forces to handle the problem. Interoperable communications are but one component, although a key one, of an effective incident command planning and operations structure. As shown in figure 1, determining the most appropriate means of achieving interoperable communications must flow from a comprehensive incident command and operations plan that includes developing an operational definition of who is in charge for different types of events and what types of information would need to be communicated (voice, data, or both) to whom under what circumstances. Other steps include defining the range of interoperable communications capabilities needed for specific types of events; assessing the current capabilities to meet these communications needs; identifying the gap between current capabilities and defined requirements; assessing alternative means of achieving the defined interoperable communications capabilities; and developing a comprehensive plan—including, for example, mutual aid agreements, technology and equipment specifications, and training—for closing the gap between current capabilities and identified requirements. Interoperable communications requirements are not static, but change over time with changing circumstances (e.g., new threats), changing technology (e.g., new equipment), and additional spectrum as it becomes available. Consequently, both a short- and long-term "feedback loop" that incorporates regular assessments of current capabilities and needed changes is important. The first responder community is extensive and extremely diverse in size and in the types of equipment in its communications systems. According to SAFECOM officials, there are over 2.5 million public safety first responders within more than 50,000 public safety organizations in the United States. Local and state agencies own over 90 percent of the existing public safety communications infrastructure. This intricate public safety communications infrastructure incorporates a wide variety of technologies, equipment types, and spectrum bands.
In addition to the difficulty that this complex environment poses for federal, state, and local coordination, 85 percent of fire personnel, and nearly as many emergency medical technicians, are volunteers with elected leadership. Many of these agencies are small and do not have technical expertise; only the largest of the agencies have engineers and technicians. In the past, a stovepiped, single-jurisdiction, agency-specific approach to developing communications systems prevailed—resulting in communications systems with little or none of the desired interoperability. Public safety agencies have historically planned and acquired communications systems for their own jurisdictions without concern for interoperability. This meant that each state and local agency developed communications systems to meet its own requirements, without regard to requirements for interoperability with adjacent jurisdictions. For example, a Public Safety Wireless Network (PSWN) analysis of Fire and Emergency Medical Services (EMS) communications interoperability found a significant need for coordinated approaches, relationship building, and information sharing. However, the PSWN program office found that public safety agencies have traditionally developed or updated their radio systems independently to meet specific mission needs. According to a study conducted by the National Task Force on Interoperability, public safety officials have unique and demanding communications requirements. According to the study, however, when the issue of interoperability is raised, officials respond that they are unable to even talk to their own personnel, much less expand their communications to include reliable and interoperable local and regional communications, and, ultimately, reliable and interoperable local, state, and federal communications. The events of September 11, 2001, which called for an integrated response of federal, state, and local first responders, highlighted the need for interoperable first responder communication across disciplines and throughout levels of government. The attacks on New York City and the Pentagon have resulted in greater public and governmental focus on the role of first responders and their capabilities to respond to emergencies, including those resulting from terrorist incidents. One result has been significantly increased federal funding for state and local first responders, including funding to improve interoperable communications among federal, state, and local first responders. In fiscal year 2003, Congress appropriated at least $154 million targeted specifically for interoperability through a variety of grants administered by the Department of Homeland Security, the Department of Justice, and other agencies. Other available grants, such as the Homeland Security Grant, could be used for a variety of purposes, including interoperable communications. For over 15 years, the federal government has been concerned with public safety spectrum issues, including communications interoperability issues. A variety of federal departments and agencies have been involved in efforts to define the problem and to identify potential solutions, such as the Department of Homeland Security (DHS), the Department of Justice (DOJ), the Federal Communications Commission (FCC), and the National Telecommunications and Information Administration (NTIA) within the Department of Commerce (DOC), among others. Today, a combination of federal agencies, programs, and associations is involved in coordinating emergency communications.
DHS has several agencies and programs involved with addressing first responder interoperable communication barriers, including the SAFECOM program, the Federal Emergency Management Agency (FEMA), and the Office for Domestic Preparedness (ODP). As one of its 24 E-Gov initiatives, the Office of Management and Budget (OMB) in 2001 created SAFECOM to unify the federal government's efforts to help coordinate the work at the federal, state, local, and tribal levels to establish reliable public safety communications and achieve national wireless communications interoperability. The SAFECOM program was brought into DHS in early 2003. In June 2003, SAFECOM partnered with the National Institute of Standards and Technology (NIST) and the National Institute of Justice (NIJ) to hold a summit that brought together over 60 entities involved with communications interoperability policy setting or programs. According to NIST, the summit familiarized key interoperability players with work being done by others and provided insight into where additional federal resources may be needed.

In addition to the many federal agencies and programs involved with shaping first responder interoperable communication policies, a range of public safety associations play a significant role in defining the problems and solutions to emergency communications interoperability. For example, the National Public Safety Telecommunications Council (NPSTC) is a federation representing public safety telecommunications. The purpose of NPSTC is to follow up on the recommendations made by the Public Safety Wireless Advisory Committee (PSWAC) to FCC and the National Telecommunications and Information Administration on public safety communication needs. In addition, NPSTC acts as a resource and advocate for public safety telecommunications issues and is working with SAFECOM to develop requirements for first responder communications. FCC established the Public Safety National Coordination Committee (NCC) to advise it on spectrum policy decisions for public safety interoperable communications. In July 2003, NCC made several recommendations to FCC for improving communications interoperability. The NCC's charter expired on July 25, 2003, and the committee has since been dissolved. In 2002, the National Governors Association released a report that recommended that governors and their state homeland security directors (1) develop a statewide vision for interoperable communications, (2) ensure adequate wireless spectrum to accommodate all users, (3) invest in new communications infrastructure, (4) develop standards for technology and equipment, and (5) partner with government and private industry. These associations and task forces are just a small representation of the many organizations identified by DHS and NIST as contributors to public safety interoperable communications efforts.

Several technical factors specifically limit interoperability of public safety wireless communications systems. First, public safety agencies have been assigned frequencies in new bands over time as available frequencies became congested and as new technology made other frequencies available for use. As a result, public safety agencies now operate over multiple frequency bands; operating on these different bands has required different radios because technology was not available to include all bands in one radio. Thus, the new bands provided additional capabilities but fragmented the public safety radio frequency spectrum, making communications among different jurisdictions difficult.
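The effect of this band fragmentation can be pictured with a small sketch. The agencies and band assignments below are hypothetical, and the check is deliberately simplified: it treats direct radio contact as possible only when two agencies' radios share at least one band, which is the essence of the fragmentation problem described above.

```python
# Minimal sketch, with hypothetical agencies and band labels, of why operating
# in different frequency bands prevents direct radio contact: two agencies can
# talk unit-to-unit only if their radios share at least one band.

AGENCY_BANDS = {
    "City Police":  {"VHF high"},
    "County Fire":  {"UHF"},
    "State Patrol": {"800 MHz", "VHF high"},
}

def can_talk_directly(agency_a, agency_b):
    """Return True if the two agencies operate in at least one common band."""
    return bool(AGENCY_BANDS[agency_a] & AGENCY_BANDS[agency_b])

if __name__ == "__main__":
    pairs = [("City Police", "County Fire"), ("City Police", "State Patrol")]
    for a, b in pairs:
        verdict = "can" if can_talk_directly(a, b) else "cannot"
        print(f"{a} and {b} {verdict} communicate radio-to-radio")
```

Agencies without a common band must rely on workarounds such as relayed dispatch traffic or swapped radios, which is the burden that interoperability efforts seek to remove.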
Another technical factor inhibiting interoperability is the different technologies or different applications of the same technology by manufacturers of public safety radio equipment. One manufacturer may design equipment with proprietary technology that will not work with equipment produced by another manufacturer.

The current status of wireless interoperable communications across the nation—including current interoperable communications capability and the scope and severity of any problems—has not been determined. Although various reports have documented the lack of interoperability of first responders' wireless communications in specific locations, complete and current data do not exist documenting current interoperable communications capabilities and the scope and severity of any problems at the local, state, interstate, or federal level across the nation. SAFECOM plans to conduct a nationwide survey to assess current capabilities of public safety agency wireless communications. Accumulating these data may be difficult, however, because several problems inhibit efforts to identify and define current interoperable communications capabilities and future requirements. Improving the interoperability of first responder wireless communications requires a clear assessment of the current state of public safety wireless communications interoperability, using a set of defined requirements; an operational definition of any problems; and a planning framework to guide the resolution of those problems. However, defining interoperability problems is difficult because interoperability requirements and problems are situation specific and evolve over time. By 2008, SAFECOM expects all public safety agencies in the United States to have a minimum level of interoperability, as defined by a national interoperability baseline. However, SAFECOM officials said they lack current nationwide information on the interoperable communications problems of first responders.

Two key studies in the late 1990s sponsored by DOJ and the PSWN program provide a nationwide picture of wireless interoperability issues among federal, state, and local police, fire, and emergency medical service agencies at that time. Both studies describe most local public safety agencies as interacting with other local agencies on a daily or weekly basis. As a result, most local agencies had more confidence in establishing radio links with one another than with state agencies, with whom they less frequently interact. Local public safety agencies interact with federal agencies least of all, with a smaller percentage of local agencies expressing confidence in their ability to establish radio links with federal agencies. However, the events of September 11, 2001, have resulted in a reexamination of the circumstances in which interoperable communications should extend across political jurisdictions and levels of government. To obtain a current national picture, SAFECOM established as a key objective to assess by July 2005 the current state of interoperability across the nation and create a nationwide baseline describing public safety communications and interoperability. The baseline will be the basis for measuring future improvements made through local, state, and federal public safety communications initiatives. SAFECOM officials said their study will be designed to measure actual interoperability capabilities in a sample of locations selected to represent the national condition.
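As one way to picture what such a baseline measurement, and the gap analysis described next, could involve, the following sketch scores a hypothetical sample of jurisdictions against scenario-based requirements. The jurisdictions, scenarios, and 0-to-5 capability scale are assumptions made for illustration only; they are not SAFECOM's actual methodology or data.

```python
# Hypothetical sketch of a baseline and gap analysis: score each sampled
# jurisdiction's measured interoperability (0 = none, 5 = full) against the
# minimum level assumed to be needed for several planning scenarios.

REQUIRED_LEVELS = {                      # assumed scenario requirements
    "day-to-day mutual aid": 2,
    "multijurisdiction task force": 3,
    "major regional incident": 4,
}

MEASURED = {                             # hypothetical survey results
    "Jurisdiction A": 4,
    "Jurisdiction B": 2,
    "Jurisdiction C": 1,
}

def gap_analysis(measured, required):
    """Return, for each jurisdiction, the scenarios it cannot yet support."""
    gaps = {}
    for place, level in measured.items():
        shortfalls = {scenario: need - level
                      for scenario, need in required.items() if level < need}
        if shortfalls:
            gaps[place] = shortfalls
    return gaps

if __name__ == "__main__":
    baseline = sum(MEASURED.values()) / len(MEASURED)   # simple sample average
    print(f"Baseline across the sample: {baseline:.1f} of 5")
    for place, shortfalls in gap_analysis(MEASURED, REQUIRED_LEVELS).items():
        for scenario, deficit in shortfalls.items():
            print(f"{place} is {deficit} level(s) short of the '{scenario}' requirement")
```

A real study would rest on the nationwide statement of requirements and a statistically designed sample rather than the illustrative values above.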
According to these officials, SAFECOM will conduct a gap analysis, which will compare the actual levels of interoperability within a state to the various scenarios used in a nationwide statement of requirements and determine the minimum level of interoperability that needs to be obtained. Establishing a national baseline for public safety wireless communications interoperability will be difficult because the definition of who to include as a first responder is evolving, and interoperability problems and solutions are situation specific and change over time to reflect new technologies and operational requirements. In a joint SAFECOM/AGILE program planning meeting in December 2003, participants agreed that a national baseline is necessary to know what the nation's interoperability status really is, to set goals, and to measure progress. However, at the meeting, participants said they did not know how they were going to define interoperability, how they could measure interoperability, or how to select their sample of representative jurisdictions; this was all to be determined at a later date. At the time of our review, SAFECOM officials acknowledged that establishing a baseline will be difficult and said they are working out the details of their baseline study but still expect to complete it by July 2005.

DHS also has other work under way that may provide a tool for such self-assessments by public safety officials. An ODP official in the Border and Transportation Security Directorate of DHS said ODP is supporting the development of a communications and interoperability needs assessment for the 118 jurisdictions that make up the Kansas City region. The official said the assessment will provide an inventory of communications equipment and identify how the equipment is used. He also said the results of this prototype effort will be placed on a CD-ROM and distributed to states and localities to provide a tool to conduct their own self-assessments. SAFECOM officials said they will review ODP's assessment tool as part of a coordinated effort and use this tool if it meets the interoperability requirements of first responders.

Public safety officials generally recognize that interoperable communications is the ability to talk with whom they want, when they want, when authorized, but not the ability to talk with everyone all of the time. However, there is no standard definition of communications interoperability. Nor is there a "one size fits all" requirement for who needs to talk to whom. Traditionally, first responders have been considered to be fire, police, and emergency medical service personnel. However, in a description of public safety challenges, a federal official noted that the attacks of September 11, 2001, have blurred the lines between public safety and national security. According to the Gilmore Commission, effective preparedness for combating terrorism at the local level requires a network that includes public health departments, hospitals and other medical providers, and offices of emergency management, in addition to the traditional police, fire, and emergency medical services first responders.
Furthermore, Congress provided an expanded definition of first responders in the Homeland Security Act of 2002, which defined "emergency response providers" as including "Federal, State, and local emergency public safety, law enforcement, emergency response, emergency medical (including hospital emergency facilities), and related personnel, agencies, and authorities."

Technological changes also present new problems and opportunities for achieving and maintaining effective interoperable communications. According to one official, in the 1980s a method of voice transmission called "trunking" became available that allowed more efficient use of spectrum. However, three different and incompatible trunking technologies developed, and these systems were not interoperable. This official noted that as mobile data communications becomes more prevalent and new digital technologies are introduced, standards become more important. In addition, technical standards for interoperable communications are still under development. Beginning in 1989, a partnership between industry and the public safety user community developed what are known as the Project 25 (P-25) standards. According to the PSWN program office, Project 25 standards remain the only user-defined set of standards in the United States for public safety communications. DHS purchased radios that incorporate the P-25 standards for each of the nation's 28 urban search and rescue teams. PSWN believes P-25 is an important step toward achieving interoperability, but the standards do not mandate interoperability among all manufacturers' systems. Standards development continues today as new technologies emerge that meet changing user needs and new policy requirements.

Finally, new public safety mission requirements for video, imaging, and high-speed data transfers, new and highly complex digital communications systems, and the use of commercial wireless systems are potential sources of new interoperability problems. Availability of new spectrum can also encourage the development of new technologies and require further development of technical standards. For example, the FCC recently designated a new band of spectrum, the 4.9 Gigahertz (GHz) band, for use and support of public safety. The FCC provided this additional spectrum to public safety users to support new broadband applications such as high-speed digital technologies and wireless local area networks for incident scene management. The FCC requested comments, in particular, on the implementation of technical standards for fixed and mobile operations on the band. NPSTC has established a task force that includes work on interoperability standards for the 4.9 GHz band.

The federal government has a long history in addressing federal, state, and local government public safety issues—in particular, interoperability issues. Congress has also recently contributed to the development of policies. In October 2002, the House Committee on Government Reform issued a report entitled How Can the Federal Government Better Assist State and Local Governments in Preparing for a Biological, Chemical, or Nuclear Attack? The Committee's first finding was that incompatible communication systems impede intergovernmental coordination efforts. The Committee recommended that the federal government take a leadership role in resolving the communications interoperability problem.
In December 2003, the SAFECOM program and the AGILE program within DOJ issued a joint report in which they established a series of initiatives and goals extending over the next 20 years. The report concludes that a continuous and participatory effort is required to improve public safety communications and interoperability. OMB created the SAFECOM program as a short-term (18- to 24-month) E-Gov initiative. It had no designated long-term mission. However, OMB has identified SAFECOM as the primary program responsible for coordinating federal efforts to improve interoperability. How to institutionalize that role is still an evolving process. In addition, the roles and responsibilities of the various federal agencies—the FCC, DOJ, and others—involved in communications interoperability have not been fully defined, and SAFECOM's authority to oversee and coordinate federal and state efforts is limited. DHS, where SAFECOM now resides, has recently announced it is establishing an Office for Interoperability and Compatibility to coordinate the federal response to the problems of interoperability and compatibility. The exact structure and funding for the office, which will include SAFECOM, are still being developed.

There are areas in which the federal government can provide leadership, such as developing national requirements and a national architecture for public safety interoperable communications, national databases, and common, nationwide terminology for communications. Moreover, the federal government alone can allocate communications spectrum for public safety use. One key barrier to the development of a national interoperability strategy has been the lack of a statement of national mission requirements for public safety—what set of communications capabilities should be built or acquired—and a strategy to get there. A key initiative in the SAFECOM program plan for the year 2005 is to complete a comprehensive Public Safety Statement of Requirements. The statement is to provide functional requirements that define how, when, and where public safety practitioners communicate. On April 26, 2004, DHS announced the release of the first comprehensive Statement of Requirements defining future communication requirements and outlining future technology needed to meet these requirements. According to DHS, the statement provides a shared vision and an architectural framework for future interoperable public safety communications. DHS describes the Statement of Requirements as a living document that will define future communications services as they change or become new requirements for public safety agencies in carrying out their missions. SAFECOM officials said additional versions of the statement will incorporate whatever is needed to meet future needs but did not provide specific details. One example of potential future development is expanded coverage to include public safety support functions. The current statement is incomplete because it addresses only the functional requirements for traditional public safety first responders—Emergency Medical Services personnel, firefighters, and law enforcement officers. The statement recognizes the existence of, but does not include in this version, those elements of the public safety community—such as transportation or public utility workers—whose primary mission provides vital support to public safety officials.

A national architecture has not yet been prepared to guide the creation of interoperable communications.
An explicit, commonly understood, and agreed-to blueprint, or architecture, is required to effectively and efficiently guide modernization efforts. For a decade, GAO has promoted the use of architectures, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both business and technological environments. OMB officials told us that OMB charged SAFECOM with developing a national architecture, which will include local, state, and federal government architectures. According to these officials, SAFECOM is to work closely with state and local governments to establish a basic understanding of what infrastructure currently exists and to identify public safety communication requirements. SAFECOM officials said development of a national architecture will take time because SAFECOM must first assist state and local governments to establish their communications architectures. They said SAFECOM will then collect the state and local architectures and fit them into a national architecture that links federal communications into the state and local infrastructure.

State and local officials consider a standard database to be essential to frequency planning and coordination for interoperability frequencies and for general public safety purposes. The Public Safety National Coordination Committee (NCC), appointed by the FCC to make recommendations for public safety use of the 700 MHz communications spectrum, recommended that the FCC mandate that Regional Planning Committees (RPC) use a standard database to coordinate frequencies during license applications. In January 2001, the FCC rejected this recommendation, noting that while the NCC believed that use of this database would ensure avoidance of channel interference between spectrum users, mandating use of the database was premature because it had not been fully developed and tested. The FCC directed the NCC to revisit the issue of mandating the database once the database is developed and has begun operation. In its final report of July 25, 2003, the NCC noted that on July 18, 2003, the National Public Safety Telecommunications Council demonstrated to FCC staff what it represented to be an operational version of the database, now named the Computer Assisted Pre-Coordination Resource and Database System (CAPRAD). The NCC urged the FCC to reevaluate its position in light of the demonstration of CAPRAD and, if appropriate, to adopt a rule requiring its use by Regional Planning Committees in their planning process.

Officials at the National Law Enforcement and Corrections Technology Center (NLECTC)—Rocky Mountain Center said they are developing and administering the CAPRAD database. Center officials told us CAPRAD is a frequency pre-coordination database that is evolving as the user community defines its requirements. For example, they said CAPRAD was used to develop a draft nationwide 700 MHz frequency allocation plan that included interoperability frequencies, frequencies allocated to states for general state purposes, and frequencies allocated to the general public safety community. FCC-designated Regional Planning Committees and frequency coordinators can then use this plan as a starting point to develop detailed plans for their regions. Center officials said that several RPCs have also loaded their 700 and 800 MHz regional plans into CAPRAD for review by adjacent RPCs or officials needing information on a regional plan.
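The kind of pre-coordination review that a database such as CAPRAD supports can be illustrated with a simplified sketch. The region names, adjacencies, and frequencies below are hypothetical, and the check shown is only the basic idea of flagging a frequency already planned in an adjacent region; it is not CAPRAD's actual design.

```python
# Simplified, hypothetical sketch of frequency pre-coordination: before a
# Regional Planning Committee assigns a channel, check whether the same
# frequency is already planned in an adjacent region.

ADJACENT = {
    "Region 1": {"Region 2", "Region 3"},
    "Region 2": {"Region 1"},
    "Region 3": {"Region 1"},
}

PLANNED = {                       # frequency assignments (MHz) already on file
    "Region 2": {769.10625, 769.85625},
    "Region 3": {770.35625},
}

def conflicts(region, frequency_mhz):
    """Return the adjacent regions that have already planned this frequency."""
    return [other for other in ADJACENT.get(region, set())
            if frequency_mhz in PLANNED.get(other, set())]

if __name__ == "__main__":
    clash = conflicts("Region 1", 769.10625)
    if clash:
        print("Coordinate before licensing; frequency already planned in:", clash)
    else:
        print("No adjacent-region conflict on file.")
```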
Center officials also told us that they are working on a comparable State Interoperability Executive Committee (SIEC) model to include interoperability channels across all bands. State and local officials we visited were familiar with the database and generally favored its use. For example, a California state official wrote us that some California state and local officials participated in the drafting of this NCC recommendation and believe its use will assist in preventing interstate interference. State and local officials in the State of Washington said that the use of the CAPRAD database should be mandatory. The officials said CAPRAD would facilitate new spectrum allocation and pre-coordination of spectrum. In addition, they said CAPRAD holds the potential of eliminating interference between users and is the first universally accepted frequency coordination database. It holds the promise of a one-stop frequency coordination database, according to a Washington State Department of Information Services official.

Technology solutions by themselves are not sufficient to fully address communication interoperability problems in a given local government, state, or multi-state region. For example, the regional communications chairs of the Florida Regional Domestic Security Task Forces have noted that non-technical barriers are the most important and difficult to solve. Police and fire departments often have different concepts and doctrines on how to operate an incident command post and use interoperable communications. Similarly, first responders, such as police and fire departments, may use different terminology to describe the same thing. Differences in terminology and operating procedures can lead to communications problems even where the participating public safety agencies share common communications equipment and spectrum.

State and local officials have drawn specific attention to problems caused by the lack of common terminology in naming the same interoperability frequency. In January 2001, the FCC rejected an NCC recommendation that the FCC mandate through its rules that specific names be designated for each interoperability channel on all public safety bands. The Commission said it would have to change its rules each time the public safety community wished to revise a channel label and that this procedure would be too cumbersome. In its final report on July 25, 2003, the NCC renewed its earlier recommendation and added a recommendation that all radios that include a channel-selection display be required to use the standard names. The NCC said standard names are essential to achieve interoperability because all responders to an incident must know the channel to which they must tune their radios. The NCC said adoption of such standard names will avoid confusion resulting from use of different names for the same frequency by different jurisdictions. In an earlier report, dated May 29, 2003, the NCC noted multiple examples where lack of common channel names had disrupted coordination of effective response to incidents. The NCC noted that the problem could endanger life and property in a very large-scale incident. In addition, the NCC noted that its recommendation could be implemented in a short time at virtually no cost and that the recommendation was consistent with previous FCC actions. For example, the NCC noted that the FCC had designated channels for medical communications use for the specific purpose of uniform usage.
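The NCC's point about standard channel names lends itself to a small illustration. The names and frequencies below are hypothetical stand-ins rather than any actual channel plan; the sketch simply shows how a shared naming table lets responders from different agencies converge on the same frequency by name.

```python
# Hypothetical illustration of standard channel naming: if every radio's
# channel-selection display uses the same label for the same frequency,
# a dispatcher can direct all responders to one channel by name alone.

STANDARD_NAMES = {              # standard label -> frequency in MHz (made up)
    "CALL-1": 866.0125,
    "TAC-1": 866.5125,
    "TAC-2": 867.0125,
}

LOCAL_ALIASES = {               # legacy local labels for the same frequencies
    "Mutual Aid Gold": 866.5125,
    "Countywide 2": 867.0125,
}

def standard_name_for(local_label):
    """Map a legacy local channel label to its standard name, if one exists."""
    frequency = LOCAL_ALIASES.get(local_label)
    for name, mhz in STANDARD_NAMES.items():
        if mhz == frequency:
            return name
    return None

if __name__ == "__main__":
    # A responder told to switch to "Mutual Aid Gold" is actually being sent
    # to the same frequency another agency calls "TAC-1".
    print(standard_name_for("Mutual Aid Gold"))   # TAC-1
```

Because the mapping is just a lookup, it can be programmed into radios and dispatch procedures at little cost, consistent with the NCC's observation that its recommendation could be implemented quickly and inexpensively.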
The Office of Management and Budget (OMB) created SAFECOM in 2001 to unify the federal government's efforts to coordinate work at the federal, state, local, and tribal levels on improving interoperable communications. According to OMB, SAFECOM is the umbrella program for all federal interoperability efforts and will work with state and local interoperability initiatives. DHS is the managing partner of the SAFECOM project, with six additional agencies as partners. The partner agencies include the Departments of Defense, Energy, the Interior, Justice, Health and Human Services, and Agriculture. According to OMB, all of these agencies have significant roles to play in public safety communications, emergency/incident response and management, and law enforcement.

Our April 2004 report on Project SAFECOM compared SAFECOM's progress against its overall objective of achieving national wireless communications interoperability among first responders and public safety systems at all levels of government. This broad objective could not be fully realized within the target of 18 to 24 months. However, we also noted that two major factors have contributed to the project's limited progress toward this objective: (1) a lack of consistent executive commitment and support and (2) an inadequate level of interagency collaboration. We concluded that until these shortcomings are addressed, the ability of SAFECOM to deliver on its promise of improved interoperability and better response to emergencies will remain in doubt. We recommended that the Secretary of Homeland Security direct the Under Secretary for Science and Technology to complete written agreements with other federal agencies and organizations representing state, local, and tribal governments that define the responsibilities and resource commitments that each of those organizations will assume. These agreements should include specific provisions for funding the project and measuring its performance.

In addition, key program structure and funding issues seriously limit the ability of SAFECOM to affect the future long-term development of the interoperability function and mission. SAFECOM's program and funding structure were established to address the public safety wireless communications problems as a short-term, 18- to 24-month project. However, DHS recognizes that a long-term, intergovernmental effort will be needed to achieve the program's overall goal of improving emergency response through broadly interoperable first responder communications systems. As a result, DHS set a SAFECOM goal to establish a "system of systems" by 2023 that will provide the necessary interoperability for public safety users. The program funding structure as established does not support a long-term program. Because SAFECOM is an E-Gov project, each year OMB instructs federal agencies designated as SAFECOM partners to provide specified amounts of funding to SAFECOM. SAFECOM negotiates an annual Memorandum of Agreement on funding or program participation with each of these agencies; however, in our Project SAFECOM report, we said that by the end of our fieldwork in 2004, SAFECOM had signed an agreement with only one agency in fiscal year 2004.
Representatives of federal, state, and local public safety users identified as a high priority the development of a business case with long-term, sustainable funding for a national office for public safety communications and interoperability and recommended that this office become a part of the annual President's budget request process. SAFECOM officials said establishment of a budget funding line for SAFECOM was discussed for the fiscal year 2005 budget, but the budget does not contain a funding line for SAFECOM in fiscal year 2005 or beyond. DHS has not defined how it will convert the current short-term program and funding structures to a permanent program office structure. When it does, DHS must carefully define the SAFECOM mission and roles in relation to other agencies within DHS and in other federal agencies that have missions that may be related to the OMB-assigned mission for SAFECOM. SAFECOM must coordinate with multiple federal agencies, including ODP within DHS; AGILE within DOJ; DOD; the FCC; NTIA within DOC; and other agencies. For example, the Homeland Security Act assigns ODP primary responsibility within the executive branch for preparing the United States for acts of terrorism, including coordinating or, as appropriate, consolidating communications and systems of communications relating to homeland security at all levels of government. An ODP official said the Homeland Security Act granted authority to ODP to serve as the primary agency for preparedness against acts of terrorism, to specifically include communications issues. He said ODP is working with states and local jurisdictions to institutionalize a strategic planning process that assesses and funds their requirements. As indicated earlier, ODP also plans to develop tools to link these assessments to detailed interoperable communications plans. According to this official, SAFECOM, as part of the Science and Technology Directorate, is responsible for (1) developing standards; (2) research, development, testing, and evaluation of public safety communications; and (3) advising ODP about available technologies and standards.

In addition, although OMB states that SAFECOM is the umbrella program to coordinate actions of the federal government, it does not include all major federal efforts aimed at promoting wireless interoperability for first responders. Specifically, the Justice Department continues to play a strong role in interoperability after the establishment of DHS. Key Justice programs—the Advanced Generation of Interoperability for Law Enforcement (AGILE) and the Interoperable Communications Technology Program administered by the Office of Community Oriented Policing Services (COPS)—did not transition to the SAFECOM program in the new Department of Homeland Security. AGILE is the Department of Justice program to assist state and local law enforcement agencies to effectively and efficiently communicate with one another across agency and jurisdictional boundaries. It is dedicated to studying interoperability options and advising state and local law enforcement agencies. The SAFECOM program director also said most of the federal research and development on prototypes is being conducted within the AGILE program. SAFECOM and AGILE officials told us they have a close working relationship. The SAFECOM and AGILE programs also held a joint planning meeting in early December 2003 and developed an action plan that SAFECOM and AGILE said they were committed to implement, given available resources.
DHS must also coordinate with the Department of Defense (DOD) to address chemical, biological, radiological, nuclear, and high explosive events. A November 2003 Defense Science Board (DSB) report said DOD's role includes, when directed, military support to civil authorities, and that DOD assistance could be required to assist in incident response. But the Board concluded that DOD must improve communication interoperability between first responders and federal, state, and local agencies involved in emergency preparedness and incident response.

SAFECOM officials also will face a complex issue when they address public safety spectrum management and coordination. The National Governors' Guide to Emergency Management noted that extensive coordination will be required between the FCC and the NTIA to provide adequate spectrum and to enhance shared local, state, and federal communications. However, the current legal framework for domestic spectrum management is divided between NTIA within the Department of Commerce, which is responsible for federal government spectrum use, and the FCC, which is responsible for state, local, and other nonfederal spectrum use. In a September 2002 report on spectrum management and coordination, we found that FCC's and NTIA's efforts to manage their respective areas of responsibility are not guided by a national spectrum strategy. The FCC and the NTIA have conducted independent spectrum planning efforts and have recently taken steps to improve coordination, but have not yet implemented long-standing congressional directives to conduct joint, national spectrum planning. We recommended that the FCC and the NTIA develop a strategy for establishing a clearly defined national spectrum plan and submit a report to the appropriate congressional committees. The FCC and the NTIA generally agreed with this recommendation. In a separate report, we also discussed several barriers to reforming spectrum management in the United States. In written comments on a draft of this report, the Department of Commerce said it had issued two spectrum policy reports on June 24, 2004, in response to the President's initiative entitled Spectrum Policy for the 21st Century. The Department said the second report recommends an interagency effort to study the spectrum use and needs of the public safety community, a public safety demonstration program, and a comprehensive plan to address the spectrum shortage, interference, technology, and security issues of the public safety community. The Department also said that DHS would be an integral partner in fulfilling its recommendations.

SAFECOM is involved in several federal coordination initiatives, including efforts to coordinate federal funding, but according to its officials, it does not have the oversight authority or pertinent information to fully accomplish this objective. The SAFECOM program is attempting to coordinate federal grant funding to maximize the prospects for communication interoperability grants across federal agencies by means of interagency guidance. We selected several grant programs to determine how this guidance was used. We found that COPS (within DOJ) and FEMA (within DHS) used this guidance, at least in part, in their coordinated 2003 Interoperable Communications Equipment grants, and ODP used the guidance in its 2004 Homeland Security and Urban Areas Security Initiative grant programs.
However, COPS and FEMA officials said that it was difficult to incorporate SAFECOM's recommended criteria for planning public safety communications systems into their joint guidance because statutory language for their grant programs focuses on the purchase of equipment without specifically addressing planning. SAFECOM also does not have authority to require federal agencies to coordinate their grant award information. SAFECOM is currently engaged in an effort with DOJ to create a "collaborative clearinghouse" that could facilitate federal oversight of interoperable communications funding to jurisdictions and allow states access to this information for planning purposes. The database is intended to decrease duplication of funding and evaluation efforts, de-conflict the application process, maximize efficiency of limited federal funding, and serve as a data collection tool for lessons learned that would be accessible to state and local agencies. However, SAFECOM officials said that the challenge to implementing the coordinated project is getting federal agency collaboration and compliance. As of February 2004, the database only contains award information from the 2003 COPS and FEMA Interoperable Communications Equipment grants. The database does not contain grant award information from the Office for Domestic Preparedness on its Urban Areas Security Initiative (UASI) grants or its Homeland Security grants (HSG), nor from FEMA's Emergency Management Performance Grant or any other federal agency grant funds.

SAFECOM's oversight authority and responsibilities are dependent upon its overall mission. OMB officials told us that they are currently in the process of refocusing the mission of the SAFECOM program into three specific parts: (1) coordination of federal activities through several initiatives, including participation in the Federal Interagency Coordination Council (FICC) and establishment of a process for federal agencies to report and coordinate with SAFECOM on federal activities and investments in interoperability; (2) developing standards; and (3) developing a national architecture for addressing communications interoperability problems. OMB officials said identification of all current and planned federal agency communications programs affecting federal, state, and local wireless interoperability is difficult. According to these officials, OMB is developing a strategy to best utilize the SAFECOM program and examining options to enforce the new coordination and reporting process. SAFECOM officials said they are working to formalize the new reporting and coordination process by developing written agreements with other federal agencies and by obtaining concurrence of major state and local associations to the SAFECOM governance structure. SAFECOM officials noted that this newly refocused SAFECOM role does not include providing technical assistance or conducting operational testing of equipment. They said that their authority to conduct such activities will come from DHS enabling directives. SAFECOM officials also said that they have no enforcement authority to require other agencies to use the SAFECOM grant guidance in their funding decisions or to require agencies to provide grant program information to them for use in their database.

The Directorate of Science and Technology (S&T) within DHS has been tasked to lead the planning and implementation of the Office for Interoperability and Compatibility (OIC).
The new office is responsible for coordinating DHS efforts to address interoperability and compatibility of first responder equipment, to include both communications equipment and equipment such as personal protective equipment used by police and fire from multiple jurisdictions. The plan, as approved by the Secretary, states that by November 2004 the new office will be fully established and that action plans and a strategy will be prepared for each portfolio (type or class of equipment). The plan presents a budget estimate for the creation of the office through November 2004 but does not include costs to implement each portfolio's strategy. In addition, plans for the new office do not clarify the roles of various federal agencies or specify what oversight authority the new office will have over federal agency communications programs. The Science and Technology Directorate is the manager of the new office, which is expected to establish partnerships with all relevant offices and agencies to effectively coordinate similar activities. These partners include representatives from national associations of emergency response providers, DHS and other government agencies, standards development organizations, and industry. The DHS plan for the new office includes a tool for relevant offices to identify areas in which they have current interoperability-related projects and thus identify program overlap inside and outside DHS and gaps in coverage. As of June 2004, the exact structure and funding for the office, including SAFECOM's role within the office, were still being developed.

In our November 6, 2003, testimony, we identified three barriers to improving public safety wireless interoperable communications: problem definition, establishing interoperability goals and standards, and defining the roles of federal, state, and local governments and other entities. Of all these barriers, perhaps the most fundamental has been limited and fragmented planning and cooperation. No one first responder group, jurisdiction, or level of government can successfully address the challenges posed by the current state of interoperable communications. Effectively addressing these challenges requires the partnership and collaboration of first responder disciplines, jurisdictions, and levels of government—local, state, federal, and tribal. In the absence of that partnership and collaboration, we risk spending funds ineffectively—especially for immediate, quick-response solutions—and creating new problems in our attempt to resolve existing ones. An integrated planning process that is recognized by federal, state, and local officials as representing their interests is necessary to achieve that partnership and collaboration.

Although no one level of government can successfully address interoperability communications challenges, the federal government can play a leadership role in developing requirements and providing support for state efforts to assess their interoperable communications capability and develop statewide plans for transitioning from today's capability to the identified required capability. States are key players in responding to normal all-hazards emergencies and to terrorist threats. Homeland Security Presidential Directive 8 notes that awards to states are the primary mechanism for delivery of federal preparedness assistance for these missions.
State and local officials also believe that states, with broad local and regional participation, have a key role to play in coordinating interoperable communications supporting these missions. PSWN, in its report on the role of the state in providing interoperable communications, agreed. According to the PSWN report, state leadership in public safety communications is key to outreach efforts that emphasize development of common approaches to regional and statewide interoperability. The report said that state officials have a vested interest in establishing and protecting statewide wireless infrastructures because public safety communications often must cross more than one local jurisdictional boundary. However, states are not required to establish a statewide capability to (1) integrate statewide and regional interoperability planning and (2) prepare statewide interoperability plans that maximize use of spectrum to meet interoperability requirements of day-to-day operations, joint task force operations, and operations in major events. Nor are federal, state, and local officials required to coordinate federal, state, and local interoperability spectrum resources, although such coordination has significant potential to improve public safety wireless communications interoperability. As a result, states may not prepare comprehensive and integrated statewide plans that address the specific interoperability issues present in each state across first responder disciplines and levels of government.

Planning requires a structure to develop and implement plans over time. States, with broad input from local governments, are a logical choice to serve as a foundation for interoperability planning. As recognized by the Federal Communications Commission, states play a central role in managing emergency communications, and state-level organizations are usually in control at large-scale events and disasters or multiagency incidents. In addition, the FCC noted that states are usually in the best position to coordinate with federal government emergency agencies. Furthermore, according to DHS officials, state and local governments own over 90 percent of the physical infrastructure for public safety communications. Recent DHS policies have also recognized states as being in a key position to coordinate state and local emergency response planning. The Office for Domestic Preparedness has designated states as the appropriate source to develop state homeland security strategies that are inclusive of local needs, including communication needs. According to PSWN, state leaders can also, through memorandums of understanding (MOU), help to define interagency relationships, reach procedural agreements, promote regular meetings of statewide or regional interoperability committees, and encourage joint efforts to deploy communications technology. State and local officials we talked with generally agreed that states can coordinate communications planning and funding support for state communications systems and coordinate interoperability efforts of local governments. For example, several officials said the state can facilitate the planning process by including key stakeholder input in the decision-making process and ensuring that communications interoperability issues are addressed. These officials also see state roles in providing common infrastructure and developing routine training exercises.
Several state and local agencies that we talked with emphasized that they are taking steps to address the need for statewide communications planning. State officials also told us that statewide interoperability is not enough because incidents first responders face could cross state boundaries. Thus, some states are also taking actions to address interstate interoperability problems. For example, Illinois, Indiana, Kentucky, Michigan, and Ohio officials said that their states have combined efforts to form the Midwest Public Safety Communications Consortium to promote interstate interoperability. According to these officials, they also have taken actions to form an interstate committee to develop interoperability plans and solicit support from key players, such as local public safety agencies.

FCC recognized a strong state interest in planning and administering interoperability channels for public safety wireless communications when it adopted various technical and operational rules and policies for the 700 MHz band. In these rules and policies, FCC concluded that administration of the 2.6 MHz of interoperability channels in that band (approximately 10 percent of the band's public safety allocation) should occur at the state level in a State Interoperability Executive Committee (SIEC). FCC said that states play a central role in managing emergency communications and that state-level organizations are usually in control at large-scale events and disasters or multiagency incidents. FCC also found that states are usually in the best position to coordinate with federal government emergency agencies. FCC said that SIEC administrative activities could include holding licenses, resolving licensing issues, and developing a statewide interoperability plan for the 700 MHz band. Other SIEC responsibilities could include the creation and oversight of incident response protocols and the creation of chains of command for incident response and reporting. State and local officials recognize that the interoperability responsibilities that FCC identified for SIECs in the 700 MHz band are also applicable to interoperability channels in other frequency bands. However, FCC did not retroactively apply the SIEC concept to interoperability channels in the 800 MHz band or in the bands below 512 MHz, nor did it apply the SIEC concept to the new 4.9 GHz band. The Commission also did not require states to establish a SIEC because it found that some states already have a mechanism in place that could administer the interoperability channels, and requiring a SIEC would be duplicative. The Commission did provide that the administration of the 700 MHz interoperability channels defaults to Regional Planning Committees should a state decide not to establish or maintain a SIEC for this purpose.

Available data conflict on how many states have established SIECs or similar bodies, but do indicate that from 12 to 15 states did not implement a SIEC. The Public Safety National Coordination Committee, an FCC advisory body for the 700 MHz band, noted that SIECs are optional—there is no requirement that the states implement such committees. NCC recommended that FCC require all states to establish a SIEC or equivalent to provide each state with an identified central point of contact for information on that state's interoperability capability. NCC, however, also expressed concerns about the extent of state control and the lack of a broad representation of local membership in the SIECs.
NCC recommended to FCC that the name SIEC be changed to the Statewide Interoperability Executive Committee to be more inclusive of all agencies in the state. We found general support in the states that we visited for NCC's recommendation to establish a Statewide Interoperability Executive Committee as the central point of contact for information on a state's interoperability capability. A state official from California told us that California's long history of collaboration in mutual aid communications activities was in part the basis for this NCC recommendation. According to officials of the Florida State Technology Office and local public safety officials, they support a central point of contact for statewide interoperability efforts. State of Washington officials said the recommendation appeared consistent with what they are doing in Washington. Local officials in the state of Washington told us that the term "statewide" is inclusive—it represents the interests of both state and local governments. The states we visited or contacted were in the early stages of formulating their SIECs, and their roles and responsibilities are still under development.

Recently, the state of California established the California Statewide Interoperability Executive Committee. The Office of Emergency Services sponsors the Committee, which is responsible for setting technical and operational standards for all existing and planned public safety interoperability frequencies in California. Committee membership is designed to recognize the broad diversity of local communications needs because California has long recognized that responsibility for and command of an incident lie with the jurisdiction where the emergency or disaster occurs, which in the vast majority of incidents is the local government. Thus, a majority of the Committee's 35 members are representatives of local government, followed by the state agencies that support local government and the federal agencies that support state and local government. Additionally, two California RPCs and the Association of Public-Safety Communications Officials have representation on the Committee. The Committee is supported by 9 to 10 working groups addressing various aspects of interoperability governance. California has several state communications systems, and the coordination of these systems will be addressed by a Committee working group.

In March 2003, the state of Florida established the Florida Executive Interoperable Technologies Committee. The Committee's membership includes state and local government officials from each of the seven Domestic Security regions in Florida and is chaired by the State Technology Office. The Committee's role is still evolving. The Committee and State Technology Office are responsible for the oversight and management of all interoperable communications issues (voice and data). The State Technology Office manages the interoperable radio frequency resources for the state. Furthermore, the state has identified the need for a single, comprehensive mutual aid plan and assigned the task of developing the plan to the Committee. However, the Committee's role in reviewing all state and local communications plans is still not determined.

The Washington State Interoperability Executive Committee, formed by state legislation enacted on July 1, 2003, is a permanent subcommittee of the Information Services Board.
The legislation specified membership for state agencies and associations representing city government, county government, local government fire departments, sheriffs and police chiefs, and emergency managers. Federal agencies were not included as voting members of the Committee, which issued an interim public safety communications plan on March 30, 2004. The interim plan, developed using a recent inventory of state communications systems, outlines various potential solutions and the implementation timeline. These are interim solutions and do not yet reflect local governments' concerns; however, the plan will be updated to incorporate local government survey responses. A final plan is due by December 31, 2004. The Committee intends to incorporate the existing mutual aid plans into the new statewide interoperability plan.

In Georgia, the state did not opt to form a State Interoperability Executive Committee. Instead, the 700 MHz RPC Interoperability Committee is responsible for managing all radio frequency bands on behalf of the state of Georgia.

A comprehensive statewide interoperability plan can provide the guiding framework for achieving defined goals for interoperability within a state and for regions within and across states (such as Kansas City, Mo., and Kansas City, Kans.). NCC recommended that all SIECs prepare an interoperability plan that is filed with FCC and updated when substantive changes are made or at least every three years. NCC also recommended to FCC that SIECs, for homeland security reasons, should administer all interoperability channels in a state, not merely those in the 700 MHz band. According to NCC, each state should have a central point identified for information on a state's interoperability capability. None of the four states we visited had finished preparation and funding of their state interoperability plans. Washington and Florida were preparing statewide interoperability plans at the time we visited. Georgia officials said they have a state interoperability plan but that it is not funded. However, one other state we contacted, Missouri, has extended SIEC responsibility for interoperability channels beyond the 700 MHz band. The Missouri SIEC has also designated standard operational and technical guidelines as conditions for the use of these channels. The SIEC requires applicants to sign an MOU agreeing to these conditions in order to use these channels in the state of Missouri. The Missouri SIEC Chairman said the state developed its operational and technical guidelines because FCC had not established its own guidelines for these interoperability channels in the VHF and UHF bands. The chairman said Missouri borders on eight other states and expressed concern that these states will develop different guidelines that are incompatible with the Missouri guidelines. He said FCC was notified of Missouri's actions but has not taken action to date.

In another example, California intends to prepare a statewide interoperability plan. California's SIEC is reexamining California's previous stovepiped programs of communications interoperability (separate systems for law enforcement, fire, etc.) in light of the need to maintain tactical channels within disciplines while promoting cross-discipline interoperability. FCC-designated frequency coordinators expressed support for a comprehensive interoperability plan in July 2002. The Commission had suggested that the frequency coordinators for the VHF and UHF bands develop an interoperability plan for these bands.
FCC said it envisioned that the coordinators would jointly develop an interoperability plan for the management and nationwide use of these interoperability channels. The frequency coordinators in a joint response rejected FCC's overture, stating that the actual management and operational guidelines for the VHF and UHF frequencies should be integrated with other interoperability frequencies in the 700 and 800 MHz bands and with other interoperability channels in spectrum identified by NTIA for interoperability with the federal government. The frequency coordinators said operational and management planning should include all of these channels to better coordinate future assignment and use and that NCC and SIECs were better vehicles for developing the guidelines requested by FCC.

In some cases, for example, responding to such major events as tornadoes or wildfires, state and local government first responders also require interoperable communications with federal agencies. According to OMB, seven federal agencies have significant roles to play in public safety communications, emergency/incident response and management, and law enforcement. These agencies are the Departments of Homeland Security, Defense, Energy, the Interior, Justice, Health and Human Services, and Agriculture. As mentioned previously, FCC-designated frequency coordinators told FCC that planning for interoperability channels should include federal spectrum designated for interoperability with state and local governments. We found several examples in our field work that support inclusion of federal agencies in future state and local planning for interoperable communications. For example, a Washington State official told us that regional systems within the state do not have links to federal communications systems and assets. In another example, according to an emergency preparedness official in Seattle, a study of radio interoperable communications in a medical center also found that federal agencies such as the Federal Bureau of Investigation (FBI) are not integrated into hospital or health communications systems, and other federal agencies have no radio infrastructure to support and participate in a health emergency such as a bioterrorism event. He told us that he has no idea what the federal communications plan is in the event of a disaster, and he said he does not know how to talk to federal health officials responding to an incident or what the federal government needs when they arrive. Local officials in Washington State also told us that communications and coordination between civil and military emergency communication organizations need improvement. These officials expressed concern that the Department of Defense has not fully coordinated with local officials to ensure that local jurisdictions can communicate with Defense. According to the Washington National Guard Civil Support Team and emergency management officials, the Guard Civil Support Team first responders can exchange radios with other first responders in order to communicate. In addition, the Civil Support Team can communicate on all frequency bands using a Navy Unified Command Communications Suite. Georgia National Guard officials said that they do not participate in the All Hazards Council planning process to coordinate interoperable communications.

The federal government is developing a system that could improve interoperable communications on a limited basis between state and federal government agencies.
The Integrated Wireless Network (IWN) is a radio system that is intended to replace the existing radio systems for the DOJ, Treasury, and DHS. IWN is an exclusive federal law enforcement communications system that is intended to interact and interface with state and local systems as needed but will not replace these systems. According to DOJ officials, IWN is intended to improve federal-to-state/local interoperability but will not address interoperability of state and local systems. However, federal interoperability with state and local wireless communications systems is hindered because NTIA and FCC control different frequencies in the VHF and UHF bands. To enhance interoperability, NTIA has identified 40 federal government frequencies that can be used by state and local public safety agencies for joint law enforcement and incident response purposes. FCC, however, designated different frequencies for interoperability in the VHF band and in the UHF band from spectrum it controls for use by state and local public safety agencies. In addition, complicated FCC licensing and coordination requirements may further limit effective use of federal frequencies by state and local agencies. FCC officials told us in response to our draft report that FCC rules are consistent with what NTIA and FCC agreed to regarding use of federal spectrum by nonfederal agencies generally. However, as a condition for their use of the federal VHF and UHF frequencies, FCC requires individual state and local public safety applicants to develop a written agreement between each nonfederal agency and a federal sponsor and to use this agreement to obtain an FCC license. FCC regulations permit federal agencies to use 700 MHz band public safety frequencies under its control if the Commission finds such use necessary and the state/local government licensee approves the sharing arrangement. PSWN suggested using SIECs to perform the necessary planning and coordination between FCC and NTIA for joint use of their separately controlled frequencies. PSWN noted that the federal government maintains a significant presence in many states and that interoperable communications must cut across all levels of government. Thus, PSWN said it is essential that NTIA, federal entities, and federal spectrum be involved in the SIEC planning process from the beginning. NCC recommended that FCC require the use of standard MOUs and sharing agreements where a licensee authorizes federal agencies and other authorized users to use its frequencies. FCC noted that respondents to its notice seeking comments on NCC proposals were divided and that requiring a formal rule could only serve to increase the administrative burden on the states, many of which may be poised to implement the MOUs and sharing agreements or similar documents voluntarily. Thus, FCC decided not to require the use of MOUs but strongly recommended that states have the relevant SIEC or other entity responsible for the administration of the interoperability channels use MOUs. Total one-time replacement of the nation’s communications systems is very unlikely, due to the costs involved. A 1998 study cited the replacement value of the existing public safety communication infrastructure nationwide at $18.3 billion. DHS officials said this estimate would be much higher if infrastructure and training costs were taken into account.
Furthermore, DHS recently estimated that reaching an accelerated goal of communications interoperability will require a major investment of several billion dollars within the next 5 to 10 years. Because of these extraordinary costs, federal funding is but one of several resources state and local agencies must use to pay for interoperability improvements. Given these high costs, the development of an interoperable communications plan is vital to useful, non-duplicative spending. However, the federal funding assistance programs to state and local governments do not fully support regional planning for communications interoperability. Federal grants that support interoperability have inconsistent requirements to tie funding to interoperable communications plans. In addition, uncoordinated federal and state-level grant reviews limit the government’s ability to ensure that federal funds are used to effectively support improved regional and statewide communications systems. Additional barriers to supporting regional planning, such as fragmented funding structures, limitations on time frames to develop and implement plans, and limited support for long-term planning, are discussed in appendix V. Local, state, and federal officials agree that regional communications plans should be developed to guide decisions on how to use federal funds for interoperable communications; however, the current funding requirements do not support this planning process. Although recent grant requirements have encouraged jurisdictions to take a regional approach to planning, current federal first responder grants are inconsistent in their requirements to tie funding to interoperable communications plans. States and localities are not required to provide an interoperable communications plan as a prerequisite to receiving some federal grant funds. As a result, there is no assurance that federal funds are being used to support a well-developed strategy for improving interoperability. For example, the fiscal year 2004 HSG or UASI grants require states or selected jurisdictions to conduct a needs assessment and submit a Homeland Security Strategy to ODP. However, the required strategies are high-level and broad in nature. They do not require that project narratives or a detailed communications plan be submitted by grantees prior to receiving grant funds. In another example, fiscal year 2003 funding provided by the Office of Community Oriented Policing Services Program (COPS) and FEMA for Interoperable Communications Equipment did not require that a communications plan be completed prior to receiving grant funds. However, grantees were required to provide documentation that they were actively engaged in a planning process and to submit a multijurisdictional and multidisciplinary project narrative. In addition to variations in requirements to create communications interoperability plans, federal grants also lack consistency in defining what “regional” body should conduct planning. State and local officials also said that the short grant application deadlines for recent first responder grants limited their ability to develop cohesive communications plans or perform a coordinated review of local requests. Federal officials acknowledged that the limited submission time frames present barriers for first responders in developing plans before receiving funds.
For example, guidance in several federal grant programs—the Homeland Security Grant, UASI grant, COPS and FEMA communication equipment grants, and Assistance to Firefighters Grant—allows states only 30 or 60 days from the date of grant announcement to submit a grant proposal. These time frames are sometimes driven by appropriations language or by the timing of the appropriations enactment. Furthermore, many grants awarded to states and localities for communications interoperability have 1- or 2-year performance periods and, according to state and local officials, do not support long-term solutions. For example, Assistance to Firefighters Grants, COPS and FEMA’s Interoperable Communications Equipment Grants, and National Urban Search and Rescue grants all have 1-year performance periods. UASI, HSG program, and Local Law Enforcement Block Grants have 2-year performance periods. The federal and state governments lack a coordinated grant review process to ensure that funds allocated to local governments are used for communication projects that complement each other and add to overall statewide and national interoperability. Federal and state officials said that each agency reviews its own set of applications and projects, without coordination with other agencies. As a result, grants could be given to bordering jurisdictions that propose conflicting interoperability solutions. In fiscal year 2003, federal officials from COPS and FEMA attempted to eliminate awarding funds to conflicting communication systems within bordering jurisdictions by coordinating their review of interoperable communications equipment grant proposals. However, COPS and FEMA are only two of several federal sources of funds for communications interoperability. In an attempt to address this challenge, in 2003 SAFECOM coordinated with other agencies to create the document Recommended Federal Grant Guidance, Public Safety Communications and Interoperability Grants, which lays out standard grant requirements for planning, building, and training for interoperable communications systems. The guidance is designed to advise federal agencies on who is eligible for the first responder interoperable communications grants, the purposes for which grant funds can be used, and eligibility specifications for applicants. The guidance recommends standard minimum requirements, such as requirements to “…define the objectives of what the applicant is ultimately trying to accomplish and how the proposed project would fit into an overall effort to increase interoperability, as well as identify potential partnerships for agreements.” Additionally, the guidance recommends, but does not require, that applicants establish a governance group consisting of local, tribal, state, and federal entities from relevant public safety disciplines and purchase interoperable equipment that is compliant with phase one of Project-25 standards. SAFECOM has also recently sponsored the formation of the Federal Interagency Coordination Committee (FICC), which consists of a federal grant coordination working group. Federal officials said that the committee will assist in shaping common grant guidance for federal initiatives involving public safety communications. Despite federal efforts within DHS to synthesize federal grants, various agencies have statutory language that makes it difficult to coordinate their use.
For example, both SAFECOM and COPS officials said that certain statutory provisions underlying the grant programs presented barriers to the coordination efforts of COPS, FEMA, and SAFECOM to consolidate the grant application process for the 2003 Interoperable Communications Equipment grants. COPS and FEMA coordinated their application process for the grants and used sections of the SAFECOM grant guidance to guide their application requirements. COPS and FEMA officials said that the combined COPS and FEMA application process was intended to maximize the use of funds and reduce duplication and competition between the two agencies’ Interoperability grants. Both COPS and SAFECOM officials explained that COPS and FEMA encountered difficulty in creating a combined grant application process because the COPS grant required a twenty-five percent match while the FEMA grant did not have such a requirement. However, COPS officials said FEMA added a twenty-five percent match of “in-kind” resources to its grant requirements in order to reduce competition between the COPS and FEMA grant programs. The House Committee on Appropriations report for DHS’s fiscal year 2004 appropriation states that the Committee is aware of numerous federal programs addressing communications interoperability through planning, building, upgrading, and maintaining public safety communication systems, among other purposes. The Committee directed that all DHS grant programs issuing grants for the above purposes incorporate the SAFECOM guidance and coordinate with the SAFECOM program when awarding funding. To better coordinate the government’s efforts, the Committee also encouraged all other federal programs issuing grants for the above purposes to use the guidelines outlined by SAFECOM in their grant programs. However, SAFECOM officials said that they have no enforcement authority to require other agencies to use this guidance in their funding decision or to require agencies to provide grant program information to them for use in their database. States are also initiating actions to address the lack of a centralized state- level grant review process. For example, the state of Washington is developing a centralized grant structure to review local requests for communications funds against a statewide interoperable communications plan that is being developed by their SIEC. The funding process is shown in figure 2. A fundamental barrier to successfully addressing interoperable communications problems for public safety has been the lack of effective, collaborative, interdisciplinary, and intergovernmental planning. Jurisdictional boundaries, unique public safety agency missions, and cultural differences among first responder organizations have often fostered barriers that hinder cooperation and collaboration. No one first responder agency, jurisdiction, or level of government can “fix” the nation’s interoperability problems, which vary across the nation and often cross first responder agency and jurisdictional boundaries. Changes in spectrum available to federal, state, and local public safety agencies— primarily a federal responsibility conducted through the FCC and the NTIA—changes in technology, and the evolving missions and responsibilities of public safety agencies in an age of terrorism all highlight the ever-changing environment in which interoperable communications needs and solutions must be addressed. 
Interdisciplinary, intergovernmental, and multijurisdictional partnership and collaboration are essential for effectively addressing interoperability shortcomings. The current status of wireless interoperable communications across the nation—including current capabilities and the scope and severity of problems that may exist—has not been determined. Long-term prospects for achieving functional interoperable communications are hindered by the lack of an institutionalized process—at the federal, state, regional, or local levels—to systematically identify and address current shortcomings. The federal government can offer leadership and support for state efforts to develop and implement statewide interoperability plans for achieving specific interoperability goals. The federal government is best positioned to address nationwide issues, such as setting national requirements, developing a national architecture, establishing national performance standards, and developing national databases and a common nationwide nomenclature for interoperability channels. Moreover, acting through the FCC and the NTIA, the federal government alone has the authority to address public safety spectrum allocation, including expanding or altering current spectrum allocations. The federal government can also play a major role, through such means as technical assistance and grant guidance, in supporting state efforts to prepare comprehensive statewide interoperability plans for developing federal, state, and local communications systems that can communicate with one another as needed and as authorized. However, developing and implementing effective statewide plans that draw on the perspectives and expertise of the federal government and local public safety agencies and jurisdictions is not a task that can be completed in a matter of weeks. The federal government’s ability to provide consistent, focused, long-term attention to interoperable communications needs has been hampered by the lack of a designated agency with the authority and ability to coordinate the wide variety of federal efforts that exist. OMB has described SAFECOM as the umbrella program to unify and coordinate the federal government’s interoperable communications efforts. Although SAFECOM has made progress in developing grant guidance, issuing interoperable communications requirements, beginning the process of assessing current interoperable communications capability, and otherwise coordinating federal efforts, it is dependent upon other federal agencies for funding and their willingness to cooperate. The Department of Homeland Security has recently announced the establishment of the Office of Interoperability and Compatibility—of which SAFECOM would be a part—as the focal point for coordinating federal efforts for wireless and other functional interoperability. However, the exact nature of its roles and responsibilities is still being determined. Moreover, this office would still face many of the challenges that SAFECOM has faced in coordinating the interoperability efforts of a variety of federal agencies outside of DHS, such as the FCC and the Departments of Justice and Commerce. With federal leadership and support and local participation, states can serve as a key focus for efforts to assess and improve interoperable communications by establishing statewide bodies to assess interoperability issues and to guide efforts to remedy identified problems through statewide interoperability plans.
Federal assistance grants to state and local governments do not fully support statewide planning for wireless communications interoperability. Specifically, federal grants do not fully support regional planning and lack requirements to tie federal assistance to an approved statewide interoperability plan. Interoperability plans for public safety communications systems, once prepared, should guide federal funding assistance programs to state and local governments. To improve interoperable wireless communications for first responders, we recommend that the Secretary of the Department of Homeland Security ensure that the following actions are taken:
- In coordination with the FCC and the NTIA, continue development of a nationwide database of all interoperable public safety communications frequencies, establish a common nomenclature for those frequencies, and establish clear time frames to complete both efforts.
- In consultation with state and local governments, determine the current status of wireless public safety interoperable telecommunications across the nation by assessing interoperability in specific locations against interoperability requirements that can be measured, and assist states in assessing interoperability in their states against those requirements.
- Through DHS grant guidance, encourage states to establish a single statewide body responsible for interoperable communications and to have that body prepare a single comprehensive statewide interoperability plan for federal, state, and local communications systems in all frequency bands. The statewide interoperability plan should be based upon the nationwide standard frequency database and use the standard nationwide nomenclature for interoperability channels, once they are developed.
- At the appropriate time, require through DHS grant guidance that federal grant funding for communications equipment be approved only upon certification by the statewide body responsible for interoperable communications that such grant applications are in conformance with statewide interoperability plans.
DHS should give states adequate time to develop these focal points and plans and should provide guidance on development of such plans. We further recommend that the Director, OMB, in conjunction with DHS, review the interoperability mission and functions now performed by SAFECOM and establish these functions as a long-term program with adequate coordination authority and funding. We sent a draft of this report to the Departments of Commerce, Defense, Homeland Security, and Justice, the Federal Communications Commission, and the Office of Management and Budget. We did not receive comments from OMB or the Department of Defense. The other agencies provided technical comments that we have incorporated into the final report as appropriate. In addition, we received written comments from the Department of Commerce and the Department of Homeland Security. The Department of Commerce said in a letter dated July 12, 2004, that it issued two reports on spectrum policy in June 2004. (See appendix VI.) We added this information to the report text as appropriate. The Department of Homeland Security provided written comments on a draft of this report in a July 8, 2004, letter, which is reprinted in appendix VII.
With respect to our first recommendation, DHS said it is developing a nationwide database of interoperable public safety communications frequencies in its fiscal year 2004 program as part of its support to the Computer Assisted Pre-coordination Resource and Database System (CAPRAD). DHS also said it plans to work with the National Public Safety Telecommunications Council (NPSTC) on a common nomenclature across public safety disciplines and jurisdictions. DHS did not mention coordination with the FCC and the NTIA on these matters; the FCC regulates state and local public safety wireless communications, and the NTIA regulates federal public safety spectrum. Either or both of these agencies may also take action on the development of national databases and common nomenclature. DHS also refers only to the use of this database in the 700 MHz and 4.9 GHz bands; we believe it should be used for interoperable frequencies in all federal, state, and local public safety bands. We have amended our conclusions and recommendation to note the importance of DHS coordinating with the FCC and the NTIA on these matters across all interoperable public safety communications frequencies. With respect to our second recommendation, DHS said it is developing a methodology to establish a national baseline of public safety communication and interoperability capabilities with input from the public safety community. We believe that DHS should also consult directly with state and local governments in developing requirements and assessing interoperability in the individual states against those requirements. We have amended our recommendation to include appropriate language. With respect to our third recommendation, DHS noted that it had created coordinated grant guidance that encourages grant applicants to consider systems requirements to ensure interoperability with systems used by other disciplines and at other levels of government. DHS also discusses a methodology it developed in conjunction with the state of Virginia for development of a statewide communications system that ensures input from local levels, and it states that this methodology will be available through the SAFECOM grant guidance for states interested in implementing a statewide system. However, the DHS letter did not directly address our recommendation about encouraging states to create statewide bodies for interoperable communications that would establish statewide interoperability plans for federal, state, and local communications systems in all frequency bands. With respect to our fourth recommendation, DHS discusses a “bottoms-up” approach to development of a meaningful governance structure and a strategic plan for statewide communications and interoperability developed with its partner, the state of Virginia. However, DHS’s comments do not directly address our recommendation that DHS grant guidance require, at the appropriate time, that federal grant funds for communications equipment be approved on condition that such grants are in accordance with statewide interoperability plans. We plan to send copies of this report to relevant congressional committees and subcommittees, to the Secretary of Homeland Security, the Director of the Office of Management and Budget, the Chairman of the Federal Communications Commission, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
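Neither this report nor DHS’s comments describe the structure of the CAPRAD database or the planned common nomenclature. Purely as an illustration, the minimal Python sketch below shows the kind of record such a nationwide interoperability-channel database might hold and how a common naming scheme could let agencies look up channels by band regardless of whether FCC or NTIA regulates them. The field names, channel labels, and frequencies are hypothetical and are not drawn from CAPRAD, NPSTC, FCC, or NTIA sources.

    # Illustrative sketch only -- not the CAPRAD schema or the NPSTC nomenclature,
    # neither of which is specified in this report. Field names, channel labels,
    # and frequencies below are hypothetical.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InteropChannel:
        common_name: str      # hypothetical standardized nationwide label
        band: str             # e.g., "VHF", "UHF", "700 MHz", "800 MHz"
        frequency_mhz: float  # center frequency in MHz
        regulator: str        # "FCC" (state/local spectrum) or "NTIA" (federal spectrum)
        permitted_use: str    # e.g., "mutual aid", "joint law enforcement"
        conditions: str       # licensing or MOU conditions attached to the channel

    # A tiny fictitious catalog showing how one database could span both
    # FCC-regulated and NTIA-regulated interoperability channels.
    catalog: List[InteropChannel] = [
        InteropChannel("VTAC-EXAMPLE-1", "VHF", 155.0000, "FCC",
                       "mutual aid", "FCC license and SIEC plan assumed"),
        InteropChannel("FED-EXAMPLE-1", "VHF", 167.0000, "NTIA",
                       "joint law enforcement", "written federal-sponsor agreement assumed"),
    ]

    def channels_in_band(band: str) -> List[InteropChannel]:
        """Return every cataloged interoperability channel in the given band."""
        return [c for c in catalog if c.band == band]

    for ch in channels_in_band("VHF"):
        print(f"{ch.common_name}: {ch.frequency_mhz} MHz ({ch.regulator}, {ch.permitted_use})")

A single record format of this kind is one way a statewide body could check grant applications against a nationwide channel list, but the actual design would depend on decisions by DHS, the FCC, the NTIA, and NPSTC.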
If you have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777 or Thomas James, Assistant Director, at (202) 512-2996. Key contributors to this report are listed in appendix VIII. To examine the availability of data on interoperable wireless communications across the nation, we reviewed our November 6, 2003, testimony, in which we said that the first challenge to addressing first responder wireless communications interoperability issues was to clearly identify and define the problem and in which we identified the absence of effective coordinated planning and collaboration as the fundamental barrier to addressing interoperability issues. We held further discussions about these issues with state and local officials during our field work in California, Florida, Georgia, and Washington. We also discussed these issues with state and local officials from Illinois, Indiana, Kentucky, Missouri, and Ohio during various public safety conferences and follow-up meetings. On the basis of these discussions, we developed a framework to analyze these issues. (See fig. 1.) We also held discussions with relevant federal officials about identifying and defining interoperable communications of first responders and about the applicability of this framework in a proposed federal nationwide survey of public safety wireless interoperability capabilities and requirements. To examine potential roles that the federal government can play in improving interoperability of first responder wireless communications, we met with officials of key federal agencies about their roles in setting and implementing policy on interoperable communications for first responders. These agencies were the Office of Management and Budget (OMB), the Department of Homeland Security (DHS), the Department of Defense (DOD), the Department of Justice (DOJ), the Department of Commerce, and the Federal Communications Commission (FCC). We obtained and reviewed relevant documentation about federal programs and projects addressing interoperable communications. We also interviewed state and local officials to obtain their views about the role the federal government should play in addressing interoperability issues. To examine potential roles that local and state governments can play in improving interoperability of first responder wireless communications, we interviewed state and local officials in California, Florida, Georgia, and Washington, as well as staff of the National Governors Association. We chose these four states because we had information that they were active in addressing interoperability issues and because California and Washington provided an opportunity to examine specific interoperability issues that might be presented by national borders with Mexico and Canada. We also met with public safety officials at meetings of (1) the National Public Safety Telecommunications Council; (2) the Public Safety Wireless Network program office; and (3) the Public Safety National Coordination Council, an FCC committee that advised the Commission on spectrum policy decisions for public safety interoperable communications. We obtained and reviewed reports, testimonies, and other documents relating to public safety wireless communications and identified examples of state and local government roles in organizing and providing for first responder communications. We evaluated these examples of state and local government roles for potential application to other state and local governments.
We also interviewed relevant federal officials about potential state and local government roles in improving first responder wireless communications interoperability. To examine how the variety of federal grants for state and local first responders may encourage or inhibit the assessment of interoperability problems and the development of comprehensive plans to address these problems, we selected key federal grant programs that fund projects supporting state and local government first responder communications systems and reviewed program documentation and appropriations language for policies affecting interoperable communications. We also obtained relevant legislation and interviewed federal, state, and local officials to obtain their views on these issues. To obtain information on cross-border communications issues, we visited San Diego, California, and Olympia, Washington, and talked to appropriate state and local officials. We also discussed these issues with federal officials at the Department of Commerce and FCC. We obtained and reviewed relevant documentation from the local, state, and federal officials. Two issues related to radio spectrum allocation affect public safety communications across the United States borders with Canada and Mexico: (1) the lack of coordinated cross-border spectrum planning and (2) radio interference to users of the allocated spectrum. The United States, Canada, and Mexico are addressing these issues through various negotiations. Radio frequency spectrum allocation has not kept pace with technology and demand. The process used to allocate spectrum over the years has resulted in a problem that is still unresolved, according to the Association of Public-Safety Communications Officials (APCO). One official said past decisions in United States spectrum policy were based on the overall demands for spectrum and the limitations of technology at the time. According to this official, these decisions made sense individually, but collectively those decisions have a negative impact on the current ability of public safety agencies to interoperate. (See fig. 3.) The radio frequency spectrum within the United States extends from 9 kHz to 300 GHz and is allocated to more than 450 frequency bands. The Federal Communications Commission (FCC) regulates the use of frequencies for state and local governments and has allocated certain portions of the spectrum for public safety agencies. Initially, almost all public safety communications were confined to the low end of the frequency range, but as technology advanced, higher frequencies became usable, offering a temporary solution for congestion and crowding. The result is that public safety operates in 10 separate bands, which has added capabilities but has also caused the fragmentation that characterizes the public safety spectrum today and makes it difficult for different agencies and jurisdictions to communicate. According to the National Telecommunications and Information Administration (NTIA), Canada and Mexico have developed spectrum uses and rules independent of those of the United States. In particular, Canada uses the fixed and mobile bands contained in the band 138-174 MHz for all users, including military, civilian, and government. Canada also uses a different channeling structure than the United States and is in the process of narrowbanding portions of these bands on a different schedule than the United States. Moreover, the majority of the Canadian population resides in the United States/Canadian border area.
Therefore, it is very difficult for the United States to identify and coordinate frequencies for new uses in the border area. The United States/Mexican border presents different problems in that neither country is aware of the operations authorized by the other country in the border area because there is no formal agreement to exchange data or coordinate use. According to FCC, frequency band plans are also not consistent along the United States borders with Canada and Mexico. For example, the Canadian band plan for 800 MHz is different from the Mexican band plan, primarily because of demographic differences in the border regions. According to FCC, some degree of harmonized spectrum has been achieved in the 800 MHz and 700 MHz public safety bands, but interoperability in the VHF and UHF bands is difficult to achieve because these bands are highly encumbered and have been operating for many years under different channel plans and different uses. State and local officials in Washington state also said they expect that the 700 MHz band will not be available for the foreseeable future along the Canadian border because Canada currently restricts use of the 700 MHz band to television broadcast purposes only. According to these officials, Canadian authorities have not initiated a process to relocate the television broadcasters out of the 700 MHz band. In addition, local Washington officials said that communication barriers result from border counties using different frequencies and equipment than one another. Interference among users of radio frequency spectrum has been a driving force in the management of spectrum at the national and international levels for many years. Interference among these users can occur when two or more radio signals interact in a manner that disrupts or degrades the transmission and reception of messages. Our work in California and Washington state highlighted interference issues along the United States borders with Mexico and Canada. For example, unlicensed radio users in Mexico cause interference to United States public safety agencies; according to local California public safety officials, some Mexican radio users interfere with United States public safety communication frequencies because Mexico does not have complementary regulations governing its frequency use. Furthermore, in the 162-174 MHz band, there is also a problem with interference to federal government operations. Many of these interference cases involve unauthorized stations in Mexico. According to local public safety officials in California, Mexico does not limit the power that radios can emit. Mexican taxi radio users can emit enough power to force public safety radio repeaters in California to open up, and taxis can use them to make their radio calls. For example, San Diego County was forced to switch from its UHF and VHF radio systems to a more expensive 800 MHz system in order to operate without interference. In addition, Imperial County has 30 VHF frequencies potentially available for use but can only use two of them because of interference from Mexico. Interference is also an issue along the Canadian border because spectrum policies in the United States and Canada are not aligned. United States-devised solutions cannot be used in the shared Canadian border area, according to local Washington State officials. Efforts are underway by the United States to address cross-border problems with Canada and Mexico.
According to an NTIA official, NTIA expects in the long term that agreements will be made with both Canada and Mexico that will provide equal segments in specified frequency bands that will be available for exclusive use by each administration. This type of arrangement will mitigate the problems associated with different uses, different channeling plans, and different plans for future use. The official said NTIA is now involved in negotiations with both countries to develop this type of arrangement and that both Canada and Mexico are in agreement with this approach. He said that the time needed to accomplish the migration of existing use from the segments designated to the other administration is the main factor that must be addressed for successful completion of these efforts. In the short term, NTIA plans to hold meetings with the Canadian government about four times a year to complete the negotiation of segmenting certain bands, to improve coordination procedures, to identify channels for shared use, and to identify common interference prediction techniques. With Mexico, NTIA plans in the near term to meet with a Mexican delegation to negotiate protocols involving the segmentation of certain land-mobile bands. NTIA also plans to participate in meetings of the Joint Commission, which meets twice a year to address interference problems between stations of both countries. FCC is also in the initial stages of forming an agreement with Canada on the use of public safety spectrum in the 700 MHz band, which will include one or more channels to be used for mutual aid and interoperability. At this time, Mexico has not allocated the 700 MHz band for public safety. In other bands where public safety spectrum is not harmonized, agreements typically define shared use of spectrum, including power limitations to prevent interference across the border. One question of interest to the Congress is whether a single nationwide frequency should be designated for public safety in the United States, including along the United States borders with Canada and Mexico. Both FCC and NTIA told us that sufficient bands exist for state and local public safety. FCC said that currently five mutual aid frequencies in the 800 MHz band are included in agreements with Canada and Mexico, with the possibility of additional channels in a future agreement with Canada in the 700 MHz band. Similarly, an NTIA official told us there are several interoperable frequencies in the 162 MHz to 174 MHz band and the 406-420 MHz band for state and local public safety. The SAFECOM program has established goals and objectives for the years 2005, 2008, and 2023 in its current work program. This program was developed in December 2003 at a joint SAFECOM and AGILE planning meeting with input from federal, state, and local representatives. The SAFECOM Program Manager said that the SAFECOM Executive Committee approved the program as developed in the December meeting. Key objectives for the year 2005 include the completion of a statement of requirements for public safety interoperable communications; establishment of a research, development, test, and evaluation program for existing and emerging public safety communications and interoperability; establishment of a technical assistance program for public safety communications and interoperability; and development of a process to advance standards necessary to improve public safety communications and interoperability.
We provide descriptive material on these objectives, including why SAFECOM believes they are needed, major benefits anticipated if they are successfully completed, and key responsibilities of various parties to their accomplishment. One key barrier to the development of a national interoperability strategy has been the lack of a statement of national mission requirements for public safety—what set of communications capabilities should be built or acquired—and a strategy to get there. A key initiative in the SAFECOM program plan for the year 2005 is to complete a comprehensive Public Safety Statement of Requirements. The statement is to provide functional requirements that define how, when, and where public safety practitioners communicate. On April 26, 2004, DHS announced the release of the first comprehensive Statement of Requirements defining future communication requirements and outlining future technology needed to meet these requirements. According to DHS, the statement provides a shared vision and an architectural framework for future interoperable public safety communications. DHS describes the Statement of Requirements as a living document that will define future communications services as they change or become new requirements for public safety agencies in carrying out their missions. SAFECOM officials said additional versions of the statement will incorporate whatever is needed to meet future needs, but they did not provide specific details. One example of potential future development is expanded coverage to include public safety support functions. The current statement is incomplete because it only addresses the functional requirements for traditional public safety first responders—Emergency Medical Services personnel, firefighters, and law enforcement officers. The statement recognizes the existence of, but does not include in this version, those elements of the public safety community—such as transportation or public utility workers—whose primary mission provides vital support to public safety officials. In addition, the frequent changes in SAFECOM management teams and changing implementation strategies have resulted in major changes in how SAFECOM intends to achieve its ultimate goals. As originally conceived while SAFECOM was in the Treasury Department, the program would build upon Public Safety Wireless Network’s (PSWN) efforts to achieve interoperability among state and local agencies by building an interoperable federal communications network. The SAFECOM program implementation strategy changed when the program was transferred to FEMA to focus on helping first responders make short-term improvements in interoperability using vehicles such as demonstration projects and research. At that time, the development of an interoperable federal system was seen as a long-term goal. SAFECOM describes its concept of interoperability as follows: “There is an integrated system-of-systems, in regular use, that allows public safety personnel to communicate (voice, data, and video) with whom they need on demand, in real time, as authorized: Public safety can respond anywhere, bring their own equipment, and can work on any network immediately when authorized. Public safety will have the networking and spectrum resources it needs to function properly.” SAFECOM officials said under this concept each major region of the country—for example, New York City, Chicago, and Saint Louis and their adjacent suburban jurisdictions—will have its own “system,” which is made up of multiple subsystems, such as police agencies, that have established relationships.
Part of the SAFECOM concept is that a centrally dispatched Urban Search and Rescue team can respond to any of these cities/regions and operate with the equipment it brings with it. However, a national architecture has not yet been prepared to guide the creation of interoperable communications. An explicit and commonly understood and agreed-to blueprint, or architecture, is required to effectively and efficiently guide modernization efforts. For a decade, we have promoted the use of architectures, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both business and technological environments. Office of Management and Budget officials told us that OMB charged SAFECOM with developing a national architecture, which will include local, state, and federal government architectures. According to these officials, SAFECOM is to work closely with state and local governments to establish a basic understanding of what infrastructure currently exists and to identify public safety communication requirements. SAFECOM officials said the development of a national architecture will take time because SAFECOM must first assist state and local governments in establishing their communications architectures. They said SAFECOM will then collect the state and local architectures and fit them into a national architecture that links federal communications into the state and local infrastructure. The SAFECOM Program Plan includes an objective for 2005 to establish a research, development, test, and evaluation program that identifies and develops a long-term, sustainable technical foundation. The SAFECOM program plans to provide funding and promote coordination across the federal government to test and evaluate existing communications and bridging technologies and to create a research and development program addressing emerging technologies, such as software-defined radio. Public safety agencies have been addressing communications interoperability for many years under the name “mutual aid.” Under mutual aid agreements, public safety agencies have been monitoring each other’s activities and radio communications through the use of scanners or by exchanging radios. The agencies have built cross-patches into dispatcher consoles to interconnect radio systems. They also have agreed on the shared use of specific frequencies for first responders, such as police forces and fire departments. For example, the state of California sponsored the California Law Enforcement Mutual Aid Radio System, which provides a common set of channels statewide for mutual aid. Other technology options are also becoming available to public safety agencies from government agencies and commercial vendors. For example, the Naval Research Laboratory (NRL) has developed and fielded a high-technology system that includes both civilian and military communications equipment and is capable of satellite communications and of operating in the traditional public safety VHF, UHF, and 800 MHz spectrum bands. According to NRL, all bands can be linked to every other band and to normal telephone lines, private cellular networks, and satellite links. According to NRL, its system comes in various sizes and configurations that have been used at the 2002 Olympic Games and Super Bowl XXXVII and that can support other homeland security incidents. New commercial technologies and systems are also becoming available.
According to some state and local officials, they have to rely upon vendors for information on these new products because they do not have a single independent source of comprehensive information, and they said the federal government can play a valuable role in testing and evaluating these technologies. For example, officials representing the Midwest Consortium told us that the federal government could create a clearinghouse of technical support for the state and local agencies. Therefore, rather than using the equipment vendors for technical advice on what to purchase and what type of systems to build, the state and local agencies could look to the federal government for technical assistance. But federal officials said there is no single source of data on new vendor equipment and that their first task is to identify what equipment is available. For example, federal laboratory officials in Boulder, Colorado, said they recently conducted a literature search in which they identified 11 vendors that make 24 models of Project 25 portable/mobile radio equipment, 7 vendors that make 9 models of conventional Project 25 repeater/base station equipment, and only 1 vendor that makes Project 25 base stations using trunking technology. However, they said another center had prepared a list of entirely different equipment. Federal laboratory officials said that many of these technologies have not been tested and that there is no coordinated program today to test and evaluate vendor equipment and technologies. These officials said that various federal agencies conduct testing—for example, the Office of Law Enforcement Standards in the National Institute of Standards and Technology, the Department of the Interior, and the Forest Service. They said these agencies may also have different test objectives; for example, the NTIA/ITS laboratory conducts data analysis evaluation, while the National Law Enforcement and Corrections Technology Center in Rome, New York, concentrates primarily on operational testing. SAFECOM officials said that their role is to coordinate research, development, test, and evaluation activities for the federal government as part of their contribution to communications interoperability. They acknowledged that the federal government has multiple initiatives under way and that no cohesive plan to coordinate these initiatives exists today. These officials said SAFECOM plans to create standardized procedures for uniform testing by the federal government. However, they said that because the SAFECOM program has not been authorized, they cannot create a unified research, development, test, and evaluation program without statutory authority. First responders must have the technical support and training needed to properly communicate with each other using wireless communications on a day-to-day basis as well as in emergency situations. First responders will be challenged to perform to the best of their ability, especially during a major incident such as a terrorist attack or natural disaster. Therefore, ongoing technical assistance and training are needed. The SAFECOM Program Plan states that the public safety community expressed its need for technical assistance, including support for planning, development, implementation, and assessment of public safety communications systems. In response, SAFECOM is developing a plan to provide technical assistance and training to the public safety community.
The plan or work package includes (1) creating a one-stop shop, which will consist of a Web portal and call-in center, and (2) providing training and technical assistance, which will consist of a practitioner resource group, training and assistance, national calling channels, and technical assistance publications for the public safety community. According to SAFECOM officials, the technical assistance work package has been approved for funding in fiscal year 2005. State and local government officials told us what a national technical assistance and outreach program for the public safety community should include. A Georgia official said that training should also be provided by the federal government to improve wireless communications among public safety officials. According to SAFECOM, training should consist of tools and templates to train multiple public safety agencies and personnel on how to use interoperable communications equipment and processes. For example, officials from the state of Georgia told us the federal government should provide programs and assistance to coordinate the design and implementation of communications systems. Local officials in the state of Washington agreed that the federal government could offer staff assistance or technical support to state and local public safety officials. According to local officials in Florida, the federal government should require that public safety officials have communications training. These local officials told us that the police are required to train and pass qualifications for using their guns at least once a year; however, they use their guns less often than their communications equipment, and there are no requirements to train on using the communications equipment. Local officials in San Diego County told us that the federal government could use other federal entities, such as the National Accreditation for Law Enforcement, as a model to educate and train public safety agencies. The National Accreditation for Law Enforcement could use state agencies as consultants to provide technical and operational advice to small localities. First responders must plan for and train on new technologies, or the technology could have a negative impact on the effectiveness of emergency responders. The states we visited or contacted are using gateway technology as a short-term solution to achieving communications interoperability. However, this technology only patches different systems together and has to be used properly to be effective. For example, an official in California told us some public safety officials caused an entire system to crash at the most critical point of communications when they used it for the first time during an emergency because they had not been properly trained on the system. In addition, use of gateway systems may result in too many people trying to talk, in turn taxing the communications systems. State and local public safety officials we talked with told us they needed national guidance on standards. For example, members of the Midwest Consortium we spoke with said they needed more national guidance on standards and technical issues and the establishment of a national entity made up of federal, state, and local entities that sets standards. However, consortium officials emphasized that federal communications standards and initiatives must be reasonable, balanced, and consistent with state and local jurisdictions’ funding capabilities and their communication needs and objectives.
OMB has established the development of standards for first responder interoperability at all levels of government as a SAFECOM objective. SAFECOM is to develop these standards by working in partnership with federal, state, local, and tribal public safety organizations. SAFECOM is working on a plan to address the development of national standards to improve public safety communications and interoperability. A key initiative in the SAFECOM program plan for the year 2005 is development of a process to advance standards needed to improve public safety communications. This initiative will identify, test, and, where necessary, develop standards in coordination with the public safety community and ongoing standards activities. In our November 2003 testimony, we noted that a partnership between industry and the public safety user community developed what are known as the Project 25 (P-25) standards. According to the PSWN program office, the P-25 standards remain the only user-defined set of standards in the United States for public safety communications. PSWN believes P-25 is an important step toward achieving interoperability, but the standards do not mandate interoperability among all manufacturers’ systems. Federal officials also told us significant work remains to complete the development of the Project 25 standards and to test vendor equipment against these standards. The SAFECOM work plan states that SAFECOM will devote resources to accelerate the completion of the Project 25 suite of standards and create a common radio nomenclature for first responders. One problem that occurred in New York City on September 11, 2001, was that incompatible radio systems prevented police and fire department personnel from talking to one another. The DHS Secretary recently announced that DHS has identified technical specifications for a baseline interoperable communications system as the short-term solution to allow first responders to communicate by voice—no matter what frequency they are operating on. SAFECOM officials said that the specifications the Secretary referred to are for generic bridging technologies that interconnect first responders’ different land mobile radios. According to these officials, the Secretary has also determined that local emergency-based communications interoperability capabilities should be in place in locations of critical concern by December 2004. These officials said that this date is the deadline for putting an interim solution in place for interoperable radio communications for police, fire, and emergency first responders. Some states are already using the bridging equipment or audio switches identified as a short-term solution by DHS and have identified several nontechnical barriers to successful use of the equipment. A state official in California told us that first responders need to plan their use of these technologies and become trained on using the technology, or it could have a negative impact on emergency response to an incident. This official said, for example, that some public safety officials had not been properly trained on using one vendor’s system, causing the system to fail at a critical point the first time they used the system in an emergency. According to this official, this technology must be used properly to be effective. Local officials in the state of Washington also told us that multiple units of these systems could overload communications because too many officials are trying to talk at the same time.
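The report describes bridging equipment and audio switches only at a high level. The conceptual Python sketch below, which does not represent any vendor’s product or DHS’s specifications, illustrates the basic idea behind such a gateway patch: audio received from a radio on one connected system is retransmitted over every other connected system, so all patched users share a single talk path. The same property also illustrates why, as the Washington officials noted, too many patched users can overload communications: everyone is contending for the one shared path.

    # Conceptual sketch of an audio gateway ("bridge" or console patch) -- not any
    # vendor's actual design. Each port stands in for a donor radio from a
    # different system or band; audio heard on one port is repeated on all others.
    from typing import Dict, List

    class GatewayPatch:
        def __init__(self) -> None:
            self.ports: Dict[str, List[str]] = {}  # port name -> audio heard on that port

        def add_port(self, name: str) -> None:
            self.ports[name] = []

        def transmit(self, source_port: str, message: str) -> None:
            """Repeat audio received on one port out over every other port."""
            for name, heard in self.ports.items():
                if name != source_port:
                    heard.append(f"[{source_port}] {message}")

    patch = GatewayPatch()
    patch.add_port("VHF fire")        # hypothetical connected systems
    patch.add_port("800 MHz police")
    patch.add_port("UHF medical")

    patch.transmit("VHF fire", "Requesting mutual aid at the incident scene.")
    for port, heard in patch.ports.items():
        print(port, "heard:", heard)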
A federal laboratory official said the bridging or audio switches provide the benefits of interoperability of disparate radio systems but have several shortfalls. These shortfalls include a requirement that users be within coverage of their home radio systems and that the use of bridging equipment may require pre-incident coordination. He said there are 4 major vendors and about 30 vendors in total that make bridging equipment. He said testing has been conducted on only 2 of the major vendors’ equipment. State and local officials said they want an independent source of information on new products and that the federal government can play a valuable role in providing that information. SAFECOM officials said they intend to include their bridging specifications in federal grant guidance as a condition for using federal funds to purchase bridging equipment. However, they said that the specifications for such equipment may be released and in use before their testing program for switches and bridging technologies is complete. They said public safety agencies must rely on vendor data to determine whether the untested systems meet DHS’s requirements. SAFECOM officials also recognize that significant training on such equipment must accompany the delivery of the equipment to first responders. The officials said COPS and ODP have developed a template for providing technical assistance training for bridging equipment. State and local governments play a large, perhaps defining, role in resolving the communications interoperability problem. As recognized by the Federal Communications Commission, states play a central role in managing emergency communications, and state-level organizations are usually in control at large-scale events and disasters or multiagency incidents. FCC also said that states are usually in the best position to coordinate with federal government emergency agencies. According to the National Strategy for Homeland Security, local officials stress that they are the first to respond to any incident and the last to leave the scene of an incident. According to the SAFECOM program, state and local governments also own 90 percent of the public safety communications infrastructure. In our November 2003 testimony, we identified fragmented planning and cooperation as the key barrier to improving interoperability of public safety wireless communications systems. In the past, a stovepiped, single-jurisdiction or agency-specific systems development approach prevailed, resulting in communications systems with no interoperability or less interoperability than desired. Public safety agencies have historically planned and acquired communications systems for their own jurisdictions without concern for interoperability. This meant that each state and local agency developed communications systems to meet its own requirements, without regard to requirements for interoperability with adjacent jurisdictions. For example, a PSWN analysis of Fire and EMS communications interoperability found a significant need for coordinated approaches, relationship building, and information sharing. However, the PSWN program office found that public safety agencies have traditionally developed or updated their radio systems independently to meet specific mission needs.
The PSWN program also concluded that state leaders can, through memorandums of understanding (MOU), help to define interagency relationships, reach procedural agreements, promote regular meetings of statewide or regional interoperability committees, and encourage joint efforts to deploy communications technology. State and local officials that we talked with generally agree that states can coordinate communications planning and funding support for state communications systems and coordinate local governments' interoperability efforts. For example, several officials said the state can facilitate the planning process by including key stakeholder input in the decision-making process and ensure that communications interoperability issues are addressed. Officials also see roles for states in providing common infrastructure and developing routine training exercises. Several states have taken or are taking executive and legislative actions to coordinate and facilitate efforts to address problems of interoperable communications within their states. For example, as we indicated previously, states we visited have established or are in the process of establishing SIECs to enhance communications interoperability planning, including the development of interoperability plans and administration of interoperability spectrum. In 2003, California also established the Public Safety Radio Strategic Planning Committee (PSRSPC) to develop and implement a statewide integrated public safety communications system for state government agencies that facilitates interoperability and other shared uses of public safety spectrum with local and federal agencies. In Florida, the governor issued an executive order in 2001 to establish seven Regional Domestic Security Task Forces that together cover the entire state. Each of the regional task forces has a committee on interoperable communications under Florida's State Working Group. The Florida legislature supported that effort by establishing the task forces in law and formally designating the Florida Department of Law Enforcement and the Division of Emergency Management as the lead agencies. The task forces consist of agencies from fire/rescue, emergency management, and public health and hospitals, as well as law enforcement. In addition, they include partnerships with education/schools, business, and private industry. Planning on a regional basis is also key to the development of interoperable communications systems. The Public Safety Wireless Network report also notes that although in the past public safety agencies have addressed interoperability on an individual basis, more recently, local, state, and federal agencies have come to realize that they cannot do it alone. The report also notes that officials at all levels of government are now taking action to improve coordination and facilitate multijurisdictional interoperability. We talked with officials from several state and local agencies about their efforts to address interoperability issues on a regional basis. For example: In Georgia and Washington, state and local emergency consequence planning continues to be structured around the all-hazards planning model and is broken down into regions. The regions are made up of one or more counties that include cities, towns, and tribal nations within the regional geographical boundaries.
This regional configuration was implemented to develop regional interoperability plans, distribute federal grant funds, develop emergency responder equipment priority lists, plan and execute training exercises, create regionally based mutual aid plans, and develop volunteer infrastructure to support citizens' involvement in homeland security initiatives. The King County Regional Communications Board system in Washington State is a multijurisdictional coordination body. Communications decisions are made by the group rather than by individual jurisdictions. This regional cooperation is informal and is not legislated or mandated. The San Diego County Regional Communications System was established in 1994 to provide an interoperable wireless network available to all public safety agencies. State officials also told us that statewide interoperability is not enough because the incidents first responders face could cross state boundaries. Thus, some states are also taking actions to address interstate interoperability problems. For example, state officials from Illinois, Indiana, Kentucky, Michigan, and Ohio said their states have combined efforts to form the Midwest Public Safety Communications Consortium to promote interstate interoperability. These officials told us that the governors of their five member states plan to sign an MOU with each other to signify that each state is willing to be interoperable with the other states and to provide communications assistance and resources to the other states, to the extent that doing so does not harm their own state. According to these officials, they also have taken actions to form an interstate committee to develop interoperability plans and solicit support from key players such as local public safety agencies. The benefits of the consortium include increased interoperability on a larger regional basis, an exchange of pricing and technical information, a greater ability to resist vendor manipulation because of increased purchasing power, and lessons learned from the members' collective experiences. Although efforts are under way to address communications interoperability issues, state and local public safety officials still face challenges in doing so. According to state and local public safety officials, some of the key challenges they confront today include (1) multiple statewide communications systems, (2) turf or control issues, and (3) a lack of communications training for public safety officials. Federal officials told us that states have multiple state communications systems that make communications interoperability planning more difficult. The states we visited have multiple statewide communications systems. For example, in the state of Washington, the departments of Transportation, Corrections, and Health use communications systems operating in the 800 MHz frequency band, while the National Guard and the Emergency Management Division operate communications systems within the spectrum reserved for federal agencies. The remaining state agencies operate in the 150 MHz frequency band. Similarly, Florida has several statewide systems, such as the State Law Enforcement Radio System (SLERS) and the forestry system, that are not compatible. Because the forestry system operates on a different frequency band than SLERS, it does not allow users to communicate with law enforcement except through console patches. SLERS was originally designed primarily for eight state law enforcement entities.
Membership now includes 17 law enforcement entities in 15 state agencies. Some local jurisdictions also have multiple communications systems. For example, San Diego and Imperial Counties have developed and implemented a radio system referred to as the Regional Communications System (RCS). RCS's primary mission is to provide an interoperable wireless network available to all public safety and public service agencies within the counties, regardless of jurisdiction or level of government. However, according to local public safety officials in California, political, funding, and technology limitations such as incompatible communications equipment have prevented full participation in the system by the city of San Diego and other jurisdictions in the counties. According to a local government official in California, however, RCS and the city have collaborated on planning the transition from their current systems to a P-25-compatible system, which he said will provide seamless interoperability for all public safety agencies operating in the Southern California region. According to PSWN, efforts to develop and implement regional or shared systems are hindered by perceptions that management control of radio system development and operations will be lost. As a result, coordination and partnership efforts do not evolve, and "stopgap" measures are implemented to address specific interoperability requirements. Interoperable communications is meaningless unless first responders overcome turf issues and learn to cooperate in any given incident, according to Midwest Public Safety Communications Consortium members. The consortium members said that the technical part of building interoperability is easy compared with the political and operational issues. As a result, the planning process for addressing political and operational issues is vital. In the state of Washington, a potential obstacle to effective coordination may lie in the historical relationship between state and local governments. The state has 39 counties and 268 cities and towns. According to a Century Foundation report, local and regional governments in Washington have a long tradition of home rule and independent action, which makes it difficult for state officials to coordinate the activities of the units of local government. Washington state and local officials said that the political power in the state is decentralized and that local city and county governments may resist state-driven mandates; things get done on a consensus basis at the local level. According to local officials in Washington, that type of consensus-based relationship does not exist between the state and local jurisdictions or between federal agencies and local jurisdictions. Regionally based planning is problematic due to resistance by locally elected officials, lack of trust between officials in different jurisdictions or disciplines, and competition over resources, according to a Century Foundation report. For example, one of the concerns of the Washington SIEC planning group was that the state could not force localities to participate in or adhere to the development of a statewide communications plan; it could only invite them to participate. Federal grant funds can be used to facilitate and encourage coordinated regional planning. However, there are currently several challenges to using these funds to support the long-term coordinated regional planning that we have identified as essential to improving interoperable communications.
First, federal funds are structured to address short-term needs for the development of interoperability projects rather than long-term planning needs for communications interoperability. Second, federal grants have inconsistent requirements to plan regionally. Third, the first responder grant structure is fragmented, which can complicate coordination and integration of services and planning at the state and local levels and has presented additional barriers to federal efforts to coordinate communications funds. Fourth, uncoordinated federal and state-level grant reviews limit the government's ability to ensure that funds are used to improve regional and statewide communications interoperability. A study conducted in 1998 estimated the current replacement value of the existing public safety LMR infrastructure nationwide at $18.3 billion. According to a PSWN report, DHS officials have said that this estimate would be much higher if infrastructure and training costs were taken into account. In addition, reaching an accelerated goal for improving communications interoperability will require a major investment of several billion dollars within the next 5 to 10 years. The estimated cost of an LMR system for a state or local jurisdiction can range from tens of thousands to hundreds of millions of dollars, depending on the size and type of system being implemented. According to PSWN, these cost estimates account only for the procurement of the equipment and infrastructure and do not include ongoing operation and maintenance costs. According to another Public Safety Wireless Network (PSWN) funding report, the extraordinary investment in LMR systems makes obtaining the necessary funding to finance the replacement or upgrade of LMR systems one of the greatest challenges facing public safety agencies. This is especially true because public safety communications systems typically reach the end of their useful life cycle in 8 to 10 years. In addition, the National Telecommunications and Information Administration (NTIA) and the Federal Communications Commission (FCC) have established a new migration plan that will require all federal agencies and all state and local public safety agencies to replace current LMR equipment with narrowband (12.5 kHz) equipment by 2008 and 2018, respectively. Federal funding is but one of several resources state and local agencies must use to address these financial challenges. State and local public safety officials say that they do not have reliable federal funding support for the planning costs associated with the long-term development of interoperable communications. State and local officials from the states that we visited identified the lack of a sustained funding source for communications as a major barrier. Local officials emphasized that public safety agencies need a recurring source of funds for communications because interoperability barriers cannot be fixed with a one-time grant. For example, local public safety officials from Washington state asserted that, once a grant-funded project is complete, localities still face intense fiscal pressures in supporting and operating the communications systems. As a result, state and local agencies need to provide assurances that they can sustain the projects that the grants have developed. However, they emphasized that further federal support is needed to help with these costs.
Officials from Georgia and California also expressed the need for federal support in addressing ongoing costs and suggested creating a dedicated source of funds, similar to the interstate highway program or a 911 tax, to assist states with implementing long-term solutions. We have identified several federal grants that can be used to address first responder communications (see table 1). Among these grants, in fiscal year 2003, Congress appropriated funds for two programs specifically dedicated to improving first responder interoperable communications. However, since 2003, the funding for these grant programs has changed significantly. In fiscal year 2003, the Office of Community Oriented Policing Services (COPS) and the Federal Emergency Management Agency (FEMA) received approximately $154 million to provide grants for interoperable communications equipment. In fiscal year 2004, FEMA's line-item budget for this program was cut and was not explicitly picked up anywhere else in DHS. The COPS program was awarded only $85 million as the sole source for the interoperable communications equipment grant for fiscal year 2004. In addition, the President's fiscal year 2005 budget proposal allocates no funds to DHS for the Interoperable Communications Equipment grant program and suggests reductions in other funding sources that states and localities are eligible to use for communications interoperability. For more details on changes to these funding sources, see table 1. Local, state, and federal officials agree that regional communications plans should be developed to guide decisions on how to use federal funds for interoperable communications. However, the officials emphasize that federal grant conditions and requirements do not support this planning process. While there are several grants to assist first responders in preparing for emergency response, state and local public safety officials from the states that we visited said that these grants do not provide adequate support for dedicated staff resources for communications planning or allow adequate time for states and localities to plan. Officials emphasized that most public safety organizations that are tasked with addressing the planning functions for the operational, technical, and coordination needs of communications systems—such as Regional Planning Committees, State Interoperability Executive Committees, and system managers—rely on the volunteer efforts of first responders, who also have full-time duties in their regular jobs. For example, one regional planning committee stated: "The success of the regional planning approach can no longer be left to the volunteer efforts of the engaged public entities, particularly for something as complicated and intense as the re-banding proposed in the Supplemental Filing. All local governments are stretched to the maximum in our combined situation of economic challenges and security uncertainty. This has a limiting effect on the ability of the skilled personnel who normally engage in the regional planning efforts to continue engagement at the high levels that would be necessary to deal with a re-banding effort. This is even more the case in the complex border areas where numerous technical, procedural and perhaps political issues need to be resolved to make the effort a success. Region 43 strongly supports the need for a national pool of experts and funding to work with the RPCs as they undertake the re-banding in their Regions.
These need to be people and resources that can do the hard work of inventorying systems, understanding spectrum relationships, evaluating the unique terrain and topography of the area and helping establish technically and operationally competent migration strategies that work for the unique situations of each Region… But Committees on their own can't do this work effectively, and left to their own resources, we will see staggered and inconsistent results across the country." As we mentioned previously, creating communications interoperability requires a coordinated regional approach. Recent grant requirements have encouraged jurisdictions to take a regional approach to planning, which has resulted in more local efforts to plan using a multidisciplinary and multijurisdictional approach rather than the stovepiped planning that formerly existed. For example, grant criteria used in the fiscal year 2003 COPS and FEMA Interoperable Communications Equipment grants encouraged multijurisdictional and multidisciplinary approaches, which resulted in grants being given to applicants that developed regional and multidisciplinary partnerships. Officials from Florida who received a $6 million COPS grant award told us that, as a result of this encouraged regional approach, they applied for the grant as a consortium of nine counties that formed a plan for interoperability and will use the funds on a multiregional basis to increase interoperability within and among their jurisdictions. State and local officials that we spoke with said that the federal government needs to do more to encourage regional communications planning and that this requirement should be made a condition of receiving grants. In our November 6, 2003, testimony, we also identified coordinated planning for communications interoperability as a prerequisite to effectively addressing communications issues. However, current federal first responder grants are inconsistent in their requirements to tie funding to interoperable communications plans. States and localities are not required to provide an interoperable communications plan as a prerequisite to receiving some federal grant funds. As a result, there is no assurance that federal funds are being used to support a well-developed strategy for improving interoperability. For example: The fiscal year 2004 Homeland Security Grant Program (HSGP) requires states to conduct a needs assessment and submit a State Homeland Security Strategy to the Office for Domestic Preparedness (ODP); however, the required strategy is high-level and broad in nature. It does not require that project narratives or a detailed communications plan be submitted by grantees prior to receiving grant funds. The Urban Areas Security Initiative (UASI) grant requires a needs assessment and an Urban Area Strategy to be developed by grantees but does not require project narratives or detailed plans. The COPS and FEMA Interoperable Communications Equipment grants did not require that a communications plan be completed prior to receiving grant funds. However, grantees were required to provide documentation that they were actively engaged in a planning process, and a multijurisdictional and multidisciplinary project narrative was required for submission. If applicants intended to use the funds to support a project that was previously developed, they were required to submit the plan for review.
An ODP program official acknowledged that requirements to develop a detailed communications needs assessment are missing and that ODP is currently developing an assessment tool. The official said that grantees could use this tool to assess their specific communications needs and conduct a gap analysis. The analysis would be used by the jurisdictions to develop an interoperable communications plan that would support the State and Urban Area Homeland Security strategies. State and local public safety officials that we spoke with reported that, because of the lack of federal requirements to submit plans for interoperable communications, some federal grant funds are being spent on individual projects without a plan to guide these expenditures. States that we visited received federal funds that could be used for communications but did not have statewide communications plans to guide decisions on local requests for federal funds. To address this concern, the state of Washington's Emergency Management Division said that it is holding back on allocating its obligated funds until the State Interoperability Executive Committee has developed a statewide communications plan that can be used to guide decisions on local requests for communications funds. In addition to variations in requirements to create communications interoperability plans, federal grants lack consistency in defining what "regional" body should conduct planning. Regions are defined differently by different federal agencies. The COPS office, which provided grant funds for interoperable communications equipment, defined eligible regions as Metropolitan Statistical Areas (MSAs). ODP's Urban Areas Security Initiative provided grants to "urban area" regions, which were defined, in some cases, as a subset of an MSA. On the other hand, FEMA awarded its grants for interoperable communications equipment based upon a jurisdictional nomination from the state governor. Furthermore, the FCC has defined regions for communications planning based upon other characteristics. However, all four of these agencies encourage states and localities to conduct "regional" planning for communications. In addition to the lack of resources for planning, first responders emphasized that the limited time provided to conduct planning for communications interoperability before grant submission presents a barrier. State and local officials from offices of emergency services expressed concern about their inability to develop effective plans within the current grant time frames. State officials from California's Office of Emergency Services said that the short turnaround time frame on the ODP Homeland Security and UASI grants limited their ability to perform a high-level grant review or assist with local planning. ODP required that grantees submit a proposal within 30 days of the announcement. As a result, state officials said that they were allowed only enough time to review whether local grant proposals matched an itemized equipment list provided by ODP and could not perform an evaluation of local grant proposals or provide assistance to localities in planning for and writing their grants. A representative from a county Office of Emergency Services in California expressed the same sentiment. He said that grants are coming with such short time frames that localities are operating with a total lack of information before submitting the grants.
He stressed that states and localities need time to study what they need in order to get something worthwhile. Officials from the other three states that we visited—Florida, Georgia, and Washington—also articulated similar concerns. Like state and local officials, federal officials expressed concerns about first responders' ability to plan for long-term regional communications systems within the current 30- or 60-day submission time frames allotted for the grants. Officials from SAFECOM said that, in order to alleviate the stovepiped communications planning agencies engaged in previously, regional planning should be a prerequisite to receiving federal funds. However, they emphasized that if planning were required as a condition for receiving grants, states would have to be given enough lead time to prepare a successful plan. The officials said that the current time frames placed on grants do not allow states or jurisdictions enough time to effectively create a communications plan that would make the most efficient use of federal funds. Adequate lead time may be a 1- or 2-year planning period. In addition, states should be given a planning model to demonstrate how to successfully plan for communications—including creating a governance structure as the first step. SAFECOM officials said that they are trying to develop this type of model in the Commonwealth of Virginia. ODP is also developing a similar model in Kansas City, Missouri. COPS officials administering the fiscal year 2003 Interoperable Communications Technology grant also said that requiring that a communications plan be developed prior to receiving grants would be a positive thing, if the grantees were given an appropriate amount of time to develop a plan before submission—perhaps several months. They noted that they did not require that grantees have a communications plan developed prior to receiving federal funds because the grantees had only 30 days from the grant announcement to submit their proposals. The Homeland Security Grant Program, UASI, and Assistance to Firefighters grants also allow states only 30 or 60 days to submit a grant proposal. Demonstration grants awarded to states and localities for communications interoperability also have 1- or 2-year performance periods and do not support long-term solutions. For example, the Assistance to Firefighters Grant, the COPS and FEMA Interoperable Communications Equipment grants, and the National Urban Search and Rescue Response System grants all have 1-year performance periods. UASI, HSGP, and Local Law Enforcement Block Grants have 2-year performance periods. In our 2003 testimony, we pointed out that the federal first responder grant programs' structure was fragmented, which can complicate coordination and integration of services and planning at the state and local levels. We also highlighted the variation in requirements among first responder grants. For example, DHS's Assistance to Firefighters grant had a maintenance-of-effort requirement, while the Fire Training Systems grant had no similar requirement. In this report, we find that fragmentation exists among communications interoperability grants, which presents challenges to federal efforts to coordinate and streamline the funding process. Multiple agencies provide communications interoperability funding and have different guidelines and appropriations language that define how the funds can be used.
A list of interoperable communications grant sources within DHS and DOJ from 2003 through 2004, and their eligible uses, is provided in table 2. Despite federal efforts within DHS to synthesize federal grants, various agencies have statutory language that makes it difficult to coordinate their use. For example, both SAFECOM and COPS officials said that certain statutory provisions underlying the grant programs presented barriers to the coordination efforts of COPS, FEMA, and SAFECOM to consolidate the grant application process for the 2003 Interoperable Communications Equipment grants. COPS and FEMA coordinated their application process for the grants and used sections of the SAFECOM grant guidance to guide their application requirements. According to COPS and FEMA officials, the combined COPS and FEMA application process was intended to maximize the use of funds and reduce duplication and competition between the two agencies' interoperability grants. Both COPS and SAFECOM officials explained that COPS and FEMA encountered difficulty in creating a combined grant application process because the COPS grant required a 25 percent match, while the FEMA grant did not have such a requirement. However, COPS officials said FEMA added a 25 percent match of "in-kind" resources to its grant requirements in order to reduce competition between the COPS and FEMA grant programs. In addition to matching requirements, the underlying statutory language for the COPS and FEMA interoperable communications grants made it difficult to incorporate some of the SAFECOM grant guidance recommendations. For example, SAFECOM grant guidance recommended that applicants conduct planning for developing public safety communications and specified eligible planning activities. However, the underlying statutory language for the COPS and FEMA grants focuses on the purchase of equipment without specifically addressing planning. COPS and FEMA officials said that they were able to justify allowing certain planning activities directly related to the purchase of equipment but could not require that funds be used to develop a communications system. SAFECOM grant guidance also recommended addressing maintenance and other life-cycle costs of communications equipment; however, the statutory language underlying the COPS and FEMA interoperable communications equipment grants focuses on funding the purchase of equipment rather than maintenance and other related costs. Federal officials that we spoke with agreed that, generally, there is no high-level review of communications interoperability across the federal government to ensure that the full range of grant-funded projects complement each other and add to overall statewide and national interoperability. Each agency reviews its own set of applications and projects. As a result, grants can be given to bordering jurisdictions that propose conflicting interoperability solutions. For fiscal year 2003, federal officials from COPS and FEMA attempted to avoid awarding funds for conflicting communications systems in bordering jurisdictions by selecting different applicant pools and coordinating their review of grant proposals. The COPS office selected the largest MSAs from each state and territory, as well as the 50 largest MSAs regardless of state, to apply for COPS funds. FEMA requested that the governor of each state nominate one lead jurisdiction to submit a grant proposal, taking into account the state's demographics and the location of critical infrastructure.
In addition to selecting applicants from different jurisdictions, COPS and FEMA engaged in a process to ensure that projects from neighboring jurisdictions did not conflict with or duplicate each other. The collaboration between COPS and FEMA to review the 2003 Interoperable Communications Equipment grant proposals was a step forward; however, these agencies are only two of several federal agencies that provide funds for communications interoperability. A coordinated high-level review of key federal grant programs that award funds for communications purposes does not exist. In response to this challenge, SAFECOM has recently sponsored the formation of the Federal Interagency Coordination Committee (FICC), which includes a federal grant coordination working group. The FICC is an informal council consisting of federal agencies, whose mission is to help local, tribal, state, and federal public safety agencies improve public safety response through more effective and efficient interoperable wireless communications by reducing duplication in programs and activities; identifying and promoting best practices; and coordinating federal grants, technical assistance, training, and standards. Federal officials said that FICC will assist in shaping the common grant guidance for federal initiatives involving public safety communications. As of April 23, 2004, officials said, FICC had held two meetings. State governments that we visited also did not have a coordinated or centralized grant review process to ensure that communications grant funds in the programs that we reviewed were being used to support projects that were complementary and not duplicative. Florida State Technology Office (STO) officials, who are members of Florida's Domestic Security Oversight Board (DSOB), said that the DSOB was concerned that there was no overall centralized review of grant applications for federal funding and no central review of federal funds passing through the state to local governments. For example, STO has the statutory authority to review plans for new or expanded communications systems. However, STO officials said that some local communications plans are not reviewed by the state because there is no requirement that localities submit their plans to STO for review before grant approval. Florida is now developing a funding working group under the DSOB to review funding requests for communications interoperability. Officials that we spoke with in California also acknowledged that there has been no centralized grant review process for funds that can be used for communications interoperability. Officials from the grants administration division within the Office of Emergency Services said that they do not have a centralized review of grant funds in California because several state and local agencies receive funds for their agencies or jurisdictions directly from the federal government. Local officials were concerned that this lack of a coordinated review of grants used across the state for communications interoperability could result in grants being awarded to bordering jurisdictions or localities that propose conflicting interoperability solutions and, therefore, could compound existing barriers to regional or statewide interoperability. In contrast, the state of Washington has set up a structure to facilitate centralized review of federal and state grant funding to ensure that it promotes regional interoperability.
Officials intend to use a statewide communications plan being developed by their State Interoperability Executive Committee (SIEC) to review local funding proposals. Currently, there is no database that can be used as a tool for coordinating federal or state oversight of funding for interoperable communications systems. SAFECOM is currently engaged in an effort with DOJ to create a "collaborative clearinghouse" that could facilitate federal oversight of interoperable communications funding to neighboring jurisdictions and allow states access to this information for planning purposes. The database is intended to decrease duplication of funding and evaluation efforts, de-conflict the application process, maximize the efficiency of limited federal funding, and serve as a data collection tool for lessons learned that would be accessible to states and localities. According to federal officials, this database is operational; however, its usefulness for coordinating federal oversight of grant funds is limited for several reasons. First, the database does not contain information from the majority of relevant federal agencies, and SAFECOM has no enforcement authority to require that all federal agencies provide information to the database or use it to guide decisions in their grant approval processes. In addition, SAFECOM officials said that it is unclear how to obtain the needed information from formula grants on the use of federal funds for communications. The State Homeland Security grant issued by ODP is a large grant provided to states that can be used for communications interoperability, among other things. However, federal officials said that once these funds enter the states, there is no reporting obligation on the use of the funds by jurisdiction—this information is lost. According to these officials, formula grants that go directly to the jurisdictions, like the ODP UASI grants, have the potential to be tracked and used within the database if ODP provides application and award information for the database. The officials said that, as a result of limitations that may exist in obtaining the relevant information from formula grants, the database would likely include only information from discretionary grants, earmarks, or grants provided directly to the local jurisdictions. In addition to the above, Leo Barbour, Karen Burke, Katherine Davis, Sally Gilley, Robert Hadley, Latesha Love, Gary Malavenda, and Shirley Perry made contributions to this report. Information Technology: The Federal Enterprise Architecture and Agencies' Enterprise Architectures Are Still Maturing. GAO-04-798T. Washington, D.C.: May 19, 2004. Project SAFECOM: Key Cross-Agency Emergency Communications Effort Requires Stronger Collaboration. GAO-04-494. Washington, D.C.: April 16, 2004. Homeland Security: Challenges in Achieving Interoperable Communications for First Responders. GAO-04-231T. Washington, D.C.: November 6, 2003. Reforming Federal Grants to Better Meet Outstanding Needs. GAO-03-1146T. Washington, D.C.: September 3, 2003. Telecommunications: Comprehensive Review of U.S. Spectrum Management with Broad Stakeholder Involvement Is Needed. GAO-03-277. Washington, D.C.: January 31, 2003. Telecommunications: Better Coordination and Enhanced Accountability Needed to Improve Spectrum Management. GAO-02-906. Washington, D.C.: September 26, 2002. Congressional Research Service. FY 2005 Budget Request for First Responder Preparedness: Issues and Analysis. By Shawn Reese. Washington, D.C.: February 12, 2004.
Congressional Research Service. DHS: First Responder Grants: A Summary. Washington, D.C.: October 2003. Congressional Research Service. First Responder Initiative: Policy Issues and Options. By Ben Canada. Washington, D.C.: Updated August 28, 2003. Congressional Research Service. FY2003: Appropriations for First Responder Awareness. By Ben Canada. Washington, D.C.: Updated June 2, 2003. Congressional Research Service. Terrorism Preparedness: Catalog of Selected Federal Assistance Programs. By Ben Canada. Washington, D.C.: April 2003. Congressional Research Service. Analysis of Selected Aspects of Terrorism Preparedness Assistance Programs. By Ben Canada. Washington, D.C.: Updated March 13, 2003. Gilmore Commission. Third Annual Report to the President and the Congress of the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction. Washington, D.C.: 2001. National Governors Association. A Governor's Guide to Emergency Management. Volume Two: Homeland Security. Washington, D.C.: 2002. National Task Force on Interoperability. Why Can't We Talk? Working Together to Bridge the Communications Gap to Save Lives. Washington, D.C.: 2003. Public Safety Wireless Network. Information Material—'How To' Guide for Funding State and Local Public Safety Wireless Networks. Fairfax, Virginia: 2003. Public Safety Wireless Network. Federal Interoperability Assistance Support—Funding Strategy Best Practices Report. Fairfax, Virginia: 2002. Public Safety Wireless Network. The Report Card on Funding Mechanisms for Public Safety Radio Communications. Fairfax, Virginia: 2001. Public Safety Wireless Network. Fire and Emergency Medical Services (EMS) Communications Interoperability. Fairfax, Virginia: 1999. Public Safety Wireless Network. Land Mobile Radio Replacement Cost Study. Fairfax, Virginia: 1998. Public Safety Wireless Network. Report on Funding Strategies for Public Safety Radio Communications. Prepared by Booz Allen & Hamilton. Fairfax, Virginia: 1998. Public Safety Wireless Network. Report on Funding Mechanisms for Public Safety Radio Communications. Prepared by Booz Allen & Hamilton. Fairfax, Virginia: 1997. Public Safety Wireless Network. Final Report of the Public Safety Wireless Advisory Committee. Fairfax, Virginia: 1996. Public Safety Wireless Network. Public Safety Coordination and Preparedness Guide. Fairfax, Virginia. Public Safety Wireless Network. The Role of the Federal Government in Public Safety Wireless Interoperability. Fairfax, Virginia. Public Safety Wireless Network. The Role of the Local Public Safety Community Government in Public Safety Wireless Interoperability. Fairfax, Virginia. Public Safety Wireless Network. The Role of the States in Public Safety Wireless Interoperability. Fairfax, Virginia. U.S. Department of Justice. Wireless Communications and Interoperability Among State and Local Law Enforcement Agencies. Washington, D.C.: 1998.

Lives of first responders and those whom they are trying to assist can be lost when first responders cannot communicate effectively as needed. This report addresses the status of interoperable wireless communications across the nation and the potential roles that federal, state, and local governments can play in improving these communications. In a November 6, 2003, testimony, GAO said that no one group or level of government could "fix" the nation's interoperable communications problems.
Success would require effective, collaborative, interdisciplinary, and intergovernmental planning. The present extent and scope, nationwide, of public safety wireless communications systems' ability to talk among themselves as necessary and authorized have not been determined. Data on current conditions compared to needs are necessary to develop plans for improvement and measure progress over time. However, the nationwide data needed to do this are not currently available. The Department of Homeland Security (DHS) intends to obtain this information by the year 2005 by means of a nationwide survey. However, at the time of our review, DHS had not yet developed its detailed plans for conducting this survey and reporting its results. The federal government can take a leadership role in support of efforts to improve interoperability by developing national requirements and a national architecture, developing nationwide databases, and providing technical and financial support for state and local efforts. In 2001, the Office of Management and Budget (OMB) established the federal government's Wireless Public Safety Interoperable Communications Program, SAFECOM, to unify efforts to achieve national wireless communications interoperability. However, SAFECOM's authority and ability to oversee and coordinate federal and state efforts have been limited by its dependence upon other agencies for funding and their willingness to cooperate. OMB is currently examining alternative methods to implement SAFECOM's mission. In addition, DHS, where SAFECOM now resides, has recently announced it is establishing an Office for Interoperability and Compatibility to coordinate the federal response to the problems of interoperability in several functions, including wireless communications. The exact structure and funding for this office, which will include SAFECOM, are still being developed. State and local governments can play a large role in developing and implementing plans to improve public safety agencies' interoperable communications. State and local governments own most of the physical infrastructure of public safety communications systems, and states play a central role in managing emergency communications. The Federal Communications Commission recognized the central role of states in concluding that states should manage the public safety interoperability channels in the 700 MHz communications spectrum. States, with broad input from local governments, are a logical choice to serve as a foundation for interoperability planning because incidents of any level of severity originate at the local level, with states as the primary source of support. However, states are not required to develop interoperability plans, and there is no clear guidance on what should be included in such plans.
The District of Columbia Family Court Act of 2001 (P.L. 107-114) was enacted on January 8, 2002. The act stated that, not later than 90 days after the date of the enactment, the chief judge of the Superior Court shall submit to the president and Congress a transition plan for the Family Court of the Superior Court, and shall include in the plan the following: The chief judge’s determination of the role and function of the presiding judge of the Family Court. The chief judge’s determination of the number of judges needed to serve on the Family Court. The chief judge’s determination of the number of magistrates of the Family Court needed for appointment under Section 11-1732, District of Columbia Code. The chief judge’s determination of the appropriate functions of such magistrates, together with the compensation of and other personnel matters pertaining to such magistrates. A plan for case flow, case management, and staffing needs (including the needs of both judicial and nonjudicial personnel) for the Family Court, including a description of how the Superior Court will handle the one family/one judge requirement pursuant to Section 11-1104(a) for all cases and proceedings assigned to the Family Court. A plan for space, equipment, and other physical needs and requirements during the transition, as determined in consultation with the administrator of General Services. An analysis of the number of magistrates needed under the expedited appointment procedures established under Section 6(d) in reducing the number of pending actions and proceedings within the jurisdiction of the Family Court. A proposal for the disposition or transfer to the Family Court of child abuse and neglect actions pending as of the date of enactment of the act (which were initiated in the Family Division but remain pending before judges serving in other divisions of the Superior Court as of such date) in a manner consistent with applicable federal and District of Columbia law and best practices, including best practices developed by the American Bar Association and the National Council of Juvenile and Family Court Judges. An estimate of the number of cases for which the deadline for disposition or transfer to the Family Court cannot be met and the reasons why such deadline cannot be met. The chief judge’s determination of the number of individuals serving as judges of the Superior Court who meet the qualifications for judges of the Family Court and are willing and able to serve on the Family Court. If the chief judge determines that the number of individuals described in the act is less than 15, the plan is to include a request that the Judicial Nomination Commission recruit and the president nominate additional individuals to serve on the Superior Court who meet the qualifications for judges of the Family Court, as may be required to enable the chief judge to make the required number of assignments. The Family Court Act states that the number of judges serving on the Family Court of the Superior Court cannot exceed 15. These judges must meet certain qualifications, such as having training or expertise in family law, certifying to the chief judge of the Superior Court that he or she intends to serve the full term of service and that he or she will participate in the ongoing training programs conducted for judges of the Family Court. The act also allows the court to hire and use magistrates to hear family court cases. Magistrates must also meet certain qualifications, such as holding U.S. 
citizenship, being an active member of the D.C. Bar, and having not fewer than 3 years of training or experience in the practice of family law as a lawyer or judicial officer. The act further states that the chief judge shall appoint individuals to serve as magistrates not later than 60 days after the date of enactment of the act. The magistrates hired under this expedited appointment process are to assist in implementing the transition plan, and in particular, assist with the transition or disposal of child abuse and neglect proceedings not currently assigned to judges in the Family Court. The Superior Court submitted its transition plan on April 5, 2002. The plan consists of three volumes. Volume I contains information on how the court will address case management issues, including organizational and human capital requirements. Volume II contains information on the development of IJIS and its planned applications. Volume III addresses the physical space the court needs to house and operate the Family Court. Courts interact with various organizations and operate in the context of many different programmatic requirements. In the District of Columbia, the Family Court frequently interacts with the child welfare agency—the Child and Family Services Agency (CFSA)—a key organization responsible for helping children obtain permanent homes. CFSA must comply with federal laws and other requirements, including the Adoption and Safe Families Act (ASFA), which placed new responsibilities on child welfare agencies nationwide. ASFA introduced new time periods for moving children who have been removed from their homes to permanent home arrangements and penalties for noncompliance. For example, the act requires states to hold a permanency planning hearing not later than 12 months after the child is considered to have entered foster care. Permanent placements include the child’s return home and the child’s adoption. The Family Court transition plan provides information on most, but not all, of the elements required by the Family Court Act. For example, the plan describes the Family Court’s method for transferring child abuse and neglect cases to the Family Court, its one family/one judge case management principle, and the number and roles of judges and magistrates. However, the plan does not (1) indicate if the 12 judges who volunteered for the Family Court meet all of the qualifications outlined in the act, (2) include a request for judicial nomination, and (3) state how the number of magistrates to hire under the expedited process was determined. In addition, the court could consider taking additional actions, such as using a full range of measures by which the court can evaluate its progress in ensuring better outcomes for children. The transition plan establishes criteria for transferring cases to the Family Court and states that the Family Court intends to have all child abuse and neglect cases pending before judges serving in other divisions of the Superior Court closed or transferred into the Family Court by June 2003. According to the plan, the court has asked each Superior Court judge to review his or her caseload to identify those cases that meet the criteria established by the court for transferring or not transferring cases. 
Cases identified for transfer include those in which (1) the child is 18 years of age and older, the case is being monitored primarily for the delivery of services, and no recent allegations of abuse or neglect exist; and (2) the child is committed to the child welfare agency and is placed with a relative in a kinship care program. Cases that the court believes may not be candidates for transfer by June 2002 include those with respect to which the judge believes transferring the case would delay permanency. The court expects that older cases will first be reviewed for possible closure and expects to transfer the entire abuse and neglect caseloads of several judges serving in other divisions of the Superior Court to the Family Court. Using the established criteria to review cases, the court estimates that 1,500 cases could be candidates for immediate transfer. The act also requires the court to estimate the number of cases that cannot be transferred into the Family Court in the timeframes specified. The plan provides no estimate because the court’s proposed transfer process assumes all cases will be closed or transferred, based on the outlined criteria. However, the plan states that the full transfer of all cases is partially contingent on hiring three new judges. The transition plan identifies the way in which the Family Court will implement the one family/one judge approach and improve its case management practices; however, the evaluation measures developed to assess the court’s progress in reforming its operations could include additional measures that reflect outcomes for children. The plan indicates that the Family Court will implement the one family/one judge approach by assigning all cases involving the same family to one judicial team— comprised of a Family Court judge and a magistrate. This assignment will begin with the initial hearing by the magistrate on the team and continue throughout the life of the case. Juvenile and family court experts indicated that this team approach is realistic and a good model of judicial collaboration. One expert said that such an approach provides for continuity if either team member is absent. Another expert said that, given the volume of cases that must be heard, the team approach can ease the burden on judicial resources by permitting the magistrate to make recommendations and decisions, thereby allowing the Family Court judge time to schedule and hear trials and other proceedings more quickly. Court experts also praised the proposed staggered terms for judicial officials—newly-hired judges, magistrates, and judges who are already serving on the Superior Court will be appointed to the Family Court for varying numbers of years—which can provide continuity while recognizing the need to rotate among divisions in the Superior Court. In addition, the plan identifies actions the court plans to take to improve case management. First, the Family Court plans to centralize intake. According to the plan, a central office will encompass all the functions that various clerks’ offices—such as juvenile, domestic relations, paternity and support, and mental health—in the Family Court currently carry out. As part of centralized intake, case coordinators will identify any related cases that may exist in the Family Court. To do this, the coordinator will ensure that a new “Intake/Cross Reference Form” will be completed by various parties to a case and also check the 18 current computer systems serving the Family Court. 
Second, the court plans to use alternative dispute resolution to resolve cases more quickly and expand initial hearings to address many of the issues that the court previously handled later in the life of the case. Last, the plan states that the Family Court will provide all affected parties speedy notice of court proceedings and implement strict policies for the handling of cases—such as those for granting continuances—although it does not indicate who is responsible for developing the policies or the status of their development. The plan states that the court will conduct evaluations to assess whether components of the Family Court were implemented as planned and whether modifications are necessary; the court could consider using additional measures to focus on outcomes for children. For example, evaluation measures listed in the plan are oriented more toward the court’s processes, such as whether hearings are held on time, than on outcomes. According to a court expert, measures must also account for outcomes the court achieves for children. Measures could include the number of finalized adoptions that did not disrupt, reunifications that do not fail, children who remain safe and are not abused again while under court jurisdiction or in foster care, and the proportion of children who successfully achieve permanency. In addition, the court will need to determine how it will gather the data necessary to measure each team’s progress in ensuring such outcomes or in meeting the requirements of ASFA, and the court has not yet established a baseline from which to judge its performance. The transition plan states that the court has determined that 15 judges are needed to carry out the duties of the court and that 12 judges have volunteered to serve on the court, but does not address recruitment and the nomination of the three additional judges. Court experts said that the court’s analysis to identify the appropriate number of judges is based on best practices identified by highly credible national organizations and is, therefore, pragmatic and realistic. The plan, however, does not include a request that the Judicial Nomination Commission recruit and the president nominate the additional three individuals to serve on the Superior Court, as required by the Family Court Act. The Superior Court does not provide in the plan its determination of the number of nonjudicial staff needed. The court acknowledges that while it budgeted for a certain number of nonjudicial personnel based on current operating practices, determining the number of different types of personnel needed to operate the Family Court effectively is pending completion of a staffing study. Furthermore, the plan does not address the qualifications of the 12 judges who volunteered for the court. Although the plan states that these judges have agreed to serve full terms of service, according to the act, the chief judge of the Superior Court may not assign an individual to serve on the Family Court unless the individual also has training or expertise in family law and certifies that he or she will participate in the ongoing training programs conducted for judges of the Family Court. The transition plan describes the duties of judges assigned to the Family Court, as required by the act. Specifically, the plan describes the roles of the designated presiding judge, the deputy presiding judge, and the magistrates. 
The plan states that the presiding and deputy presiding judges will handle the administrative functions of the Family Court, ensure the implementation of the alternative dispute resolution projects, oversee grant-funded projects, and serve as back-up judges to all Family Court judges. These judges will also have a post-disposition abuse and neglect caseload of more than 80 cases and will continue to consult and coordinate with other organizations (such as the child welfare agency), primarily by serving on 19 committees. One court expert has observed that the list of committees to which the judges are assigned seems overwhelming and added that strong leadership by the judges could result in the consolidation of some of the committees' efforts. The plan also describes the duties of the magistrates, but does not provide all the information required by the act. Magistrates will be responsible for initial hearings in new child abuse and neglect cases and for resolving the cases assigned to them by the Family Court judge on their team. They will also be assigned initial hearings in juvenile cases, noncomplex abuse and neglect trials, and the subsequent review and permanency hearings, as well as a variety of other matters related to domestic violence, paternity and support, mental competency, and other domestic relations cases. As noted previously, one court expert said that the proposed use of the magistrates would ease the burden on judicial resources by permitting these magistrates to make recommendations and decisions. However, although specifically required by the act, the transition plan does not state how the court determined the number of magistrates to be hired under the expedited process. In addition, while the act outlines the required qualifications of magistrates, it does not specifically require a discussion of the qualifications of the newly hired magistrates in the transition plan. As a result, none was provided, and whether these magistrates meet the qualifications outlined in the act is unknown. A discussion of how the court will provide initial and ongoing training for its judicial and nonjudicial staff is also not required by the act, although the court does include relevant information about training. For example, the plan states that the Family Court will develop and implement a quarterly training program for Family Court judges, magistrates, and staff covering a variety of topics and that it will promote and encourage participation in cross-training. In addition, the plan states that new judges and magistrates will participate in a 2- to 3-week intensive training program, although it does not provide details on the content of such training for the five magistrates hired under the expedited process, even though they were scheduled to begin working at the court on April 8, 2002. One court expert said that a standard curriculum for all court-related staff and judicial officers should be developed and that judges should have manuals available outlining procedures for all categories of cases. In a September 2000 report on human capital, we said that an explicit link between an organization's training offerings and curricula and the competencies the organization has identified for mission accomplishment is essential. Likewise, organizations should make fact-based determinations of the impact of their training and development programs to provide feedback for continuous improvement and ensure that these programs improve performance and help achieve organizational results.
Two factors are critical to completing the transition to the Family Court in a timely and effective manner: obtaining and renovating appropriate space for all new Family Court personnel and developing and installing a new automated information system, currently planned as part of the D.C. Courts IJIS system. The court acknowledges that its implementation plans may be slowed if appropriate space cannot be obtained in a timely manner. For example, the plan addresses how the abuse and neglect cases currently being heard by judges in other divisions of the Superior Court will be transferred to the Family Court, but states that the complete transfer of cases hinges on the court's ability to hire, train, and provide appropriate space for additional judges and magistrates. In addition, the Family Court's current reliance on nonintegrated automated information systems that do not fully support planned court operations, such as the one family/one judge approach to case management, constrains its transition to a Family Court. The transition plan states that the interim space plan carries a number of project risks. These include a very aggressive implementation schedule and a design that makes each part of the plan dependent on the others. The transition plan further states that the desired results cannot be reached unless each increment of the plan takes place in a timely fashion. For example, obtaining and renovating the almost 30,000 occupiable square feet of new court space needed requires a complex series of interrelated steps—from moving current tenants in some buildings to temporary space, to renovating the John Marshall level of the H. Carl Moultrie Courthouse by July 2003. The Family Court of the Superior Court is currently housed in the H. Carl Moultrie Courthouse, and interim plans call for expanding and renovating additional space in this courthouse to accommodate the additional judges, magistrates, and staff who will help implement the D.C. Family Court Act. The court estimates that accommodating these judges, magistrates, and staff requires an additional 29,700 occupiable square feet, plus an undetermined amount for security and other amenities. Obtaining this space will require nonrelated D.C. Courts entities to vacate space to allow renovations, as well as require tenants in other buildings to move to house the staff who have been displaced. The plan calls for renovations under tight deadlines, and all required space may not be available as currently planned to support the additional judges the Family Court needs to perform its work in accordance with the act, making it uncertain when the court can fully complete its transition. For example, D.C. Courts recommends that a portion of the John Marshall level of the H. Carl Moultrie Courthouse, currently occupied by civil court functions, be vacated and redesigned for the new courtrooms and court-related support facilities. Although some space is available on the fourth floor of the courthouse for the four magistrates to be hired by December 2002, renovations to the John Marshall level are tentatively scheduled for completion in July 2003—2 months after the court anticipates having three additional Family Court judges on board. Another D.C. Courts building—Building B—would be partially vacated by non-court tenants and altered for use by displaced civil courts functions and other units temporarily displaced in future renovations. Renovations to Building B are scheduled to be complete by August 2002.
Space for 30 additional Family Court-related staff, approximately 3,300 occupiable square feet, would be created in the H. Carl Moultrie Courthouse in an as-yet-undetermined location. The Family Court Act calls for an integrated information technology system to support the goals it outlines, but a number of factors significantly increase the risks associated with this effort, as we reported in February 2002. For example, the D.C. Courts had not yet implemented the disciplined processes necessary to reduce the risks associated with acquiring and managing IJIS to acceptable levels; a disciplined software development and acquisition effort maximizes the likelihood of achieving the intended results (performance) on schedule using available resources (costs). In addition, the requirements contained in a draft Request for Proposal (RFP) lacked the necessary specificity to ensure that any defects in these requirements had been reduced to acceptable levels and that the system would meet its users' needs; studies have shown that problems associated with requirements definition are key factors in software projects that do not meet their cost, schedule, and performance goals. Finally, the requirements contained in the D.C. Courts' draft RFP did not directly relate to industry standards, and as a result, inadequate information was available for prospective vendors and others to readily map systems built upon these standards to the needs of the D.C. Courts. Prior to issuing our February 2002 report, we discussed our findings with D.C. Courts officials, who generally concurred with our findings and stated their commitment to go forward with the project only when the necessary actions had been taken to reduce the risks to acceptable levels. In that report, we made several recommendations designed to reduce the risks associated with this effort to acceptable levels. In April 2002, we met with D.C. Courts officials to discuss the actions taken on our recommendations and found that significant actions have been initiated that, if properly implemented, will help reduce the risks associated with this effort. For example, D.C. Courts is beginning the work to provide the needed specificity for its system requirements. This includes soliciting requirements from the users and ensuring that the requirements are properly sourced (e.g., traced back to their origin). According to D.C. Courts officials, this work has identified significant deficiencies in the original requirements that we discussed in our February 2002 report. D.C. Courts is also issuing a Request for Information to obtain additional information on commercial products that should be considered during its acquisition efforts. This helps the requirements management process by identifying requirements that are not supported by commercial products so that the courts can reevaluate whether to (1) keep a requirement or revise it to be in greater conformance with industry practices or (2) undertake a development effort to achieve the needed capability. In addition, D.C. Courts is developing a systems engineering life-cycle process for managing its information technology efforts. This will help define the processes and events that should be performed from the time that a system is conceived until the system is no longer needed; examples of such processes include requirements development, testing, and implementation.
Finally, D.C. Courts is developing policies and procedures that will help ensure that the courts' information technology investments are consistent with the requirements of the Clinger-Cohen Act of 1996 (P.L. 104-106), as well as the processes that will enable the D.C. Courts to achieve a level 2 rating—this means basic project management processes are established to track performance, cost, and schedule—on the Software Engineering Institute's Capability Maturity Model. In addition, D.C. Courts officials told us that they are developing a separate transition plan that will allow them to use the existing (legacy) systems should the IJIS project experience delays. We will review the plan once it is made available to us. Although these officials recognize that maintaining two systems concurrently is expensive and requires additional resources, such as additional staff and training, they believe that keeping the legacy systems available is needed to mitigate the risk associated with any delays in system implementation. Although these are positive steps forward, D.C. Courts still faces many challenges in its efforts to develop an IJIS system that will meet its needs and fulfill the goals established by the act. Examples of these challenges include the following. Ensuring that the systems interfacing with IJIS do not become the weak link: The act calls for effectively interfacing information technology systems operated by the District government with IJIS. According to D.C. Courts officials, at least 14 District systems will need to interface with IJIS. However, several of our reviews have noted problems in the District's ability to develop, acquire, and implement new systems. The District's difficulties in effectively managing its information technology investments could lead to adverse impacts on the IJIS system. For example, the interface systems may not be able to provide the quality of data necessary to fully utilize IJIS's capabilities or provide the necessary data to support IJIS's needs. The D.C. Courts will need to ensure that adequate controls and processes have been implemented to mitigate the potential impacts associated with these risks. Effectively implementing the disciplined processes necessary to reduce the risks associated with IJIS to acceptable levels: The key to having a disciplined effort is to have disciplined processes in multiple areas. This is a complex task and will require the D.C. Courts to maintain its management commitment to implementing the necessary processes. In our February 2002 report, we highlighted several processes, such as requirements management, risk management, and testing, that appeared critical to the IJIS effort. Ensuring that the requirements used to acquire IJIS contain the necessary specificity to reduce requirement-related defects to acceptable levels: Although D.C. Courts officials have said that they are adopting a requirements management process that will address the concerns expressed in our February 2002 report, maintaining such a process will require management commitment and discipline. Court experts report that effective technological support is critical to effective family court case management. One expert said that minimal system functionality should include the identification of parties and their relationships; the tracking of case processing events through on-line inquiry; the generation of orders, forms, summons, and notices; and statistical reports.
The State Justice Institute's report on how courts are coordinating family cases states that automated information systems, programmed to inform a court system of a family's prior cases, are a vital ingredient of case coordination efforts. The National Council of Juvenile and Family Court Judges echoes these findings by stating that effective management systems (1) have standard procedures for collecting data; (2) collect data about individual cases, the aggregate caseload by judge, and the systemwide caseload; (3) assign an individual the responsibility of monitoring case processing; and (4) are user-friendly. While anticipating technological enhancements through IJIS, Superior Court officials stated that the current information systems do not have the functionality required to implement the Family Court's one family/one judge case management principle. Ensuring that users receive adequate training: As with any new system, adequately training the users is critical to its success. As we reported in April 2001, one problem that hindered the implementation of the District's financial management system was the District's difficulty in adequately training the users. Avoiding a schedule-driven effort: According to D.C. Courts officials, the act establishes ambitious timeframes to convert to a family court. Although schedules are important, it is critical that the D.C. Courts follow an event-driven acquisition and development program rather than a schedule-driven approach. Organizations that are schedule-driven tend to cut out or inadequately complete activities such as business process reengineering and requirements analysis. These tasks are frequently not considered "important" since many people view "getting the application in the hands of the user" as one of the more productive activities. The results of this approach are predictable: projects that do not perform planning and requirements functions well typically have to redo that work later, and the costs associated with delaying the critical planning and requirements activities are anywhere from 10 to 100 times the cost of doing the work correctly in the first place. On the whole, even though some important issues are not discussed, the Superior Court's transition plan represents a good effort at outlining the steps it will take to implement a family court. However, the court still faces key challenges in ensuring that its implementation will occur in a timely and efficient manner. The court recognizes that its plan for obtaining and renovating needed physical space warrants close attention to reduce the risk of project delays. In addition, the court has taken important steps that begin to address many of the shortcomings we identified in our February 2002 report on its proposed information system. The court's actions reflect its recognition that developing an automated information system for the Family Court will play a pivotal role in the court's ability to implement its improved case management framework. Our final report on the transition plan may discuss some additional actions the court might take to further enhance its ability to implement the Family Court Act as required. Madam Chairman, this concludes my prepared statement. I will be happy to respond to any questions that you or other members of the subcommittee may have. For further contacts regarding this testimony, please call Cornelia M. Ashby at (202) 512-8403.
Individuals making key contributions to this testimony included Diana Pietrowiak, Mark Ward, Nila Garces-Osorio, Steven J. Berke, Patrick DiBattista, William Doherty, John C. Martin, Susan Ragland, and Norma Samuel. | The District of Columbia Family Court Act of 2001 was enacted to (1) redesignate the Family Division of the Superior Court of the District of Columbia as the Family Court of the Superior Court, (2) recruit trained and experienced judges to serve in the Family Court, and (3) promote consistency and efficiency in the assignment of judges to the Family Court and in its consideration of actions and proceedings. GAO found that the Superior Court has made progress in planning the transition of its Family Division to a Family Court, but some challenges remain. The transition requires the timely completion of a series of interdependent plans to obtain and renovate physical space for the court and its functions. Adequate space may not be available to support the additional judges the Family Court needs. Furthermore, the development of the Integrated Justice Information System will be critical to the Family Court's operational effectiveness, its ability to evaluate its performance, and its ability to meet the judicial goals mandated by the Family Court Act.
The Goals 2000: Educate America Act, which became law in 1994 and was amended in 1996, is intended to promote coordinated improvements in the nation’s education system at the state and local levels. All states and the District of Columbia, Puerto Rico, and the U.S. Territories are currently participating in the program. Goals 2000 funds aim to support state efforts to develop clear standards for and comprehensive planning of school efforts to improve student achievement. Funds are provided through title III of the act and are to be used at the state and local levels to initiate, support, and sustain coordinated school reform activities. (See app. II for a listing of allocations.) States can retain up to 10 percent of the funds received each year, and the remainder is to be distributed to districts through a subgrant program. States have up to 27 months to obligate funds; after this time, unobligated funds must be returned to the federal government. Goals 2000 requires states to award subgrants competitively. To comply with this component of the law, states’ subgrant programs require districts to compete directly against one another for funding or compete against a standard set of criteria established by the state to determine levels of funding for individual applicants. Some states weigh districts’ subgrant proposals against one another and against standard criteria. Prior to the 1996 amendments, Goals 2000 was criticized as being too directive and intrusive in state and local education activities. The act initially required that states submit their education reform plans to the Secretary of Education for review and approval before they could become eligible for grants. The Omnibus Consolidated Rescissions and Appropriations Act of 1996 amended the law by providing an alternative grant application process that did not include the Secretary of Education’s approval of a state’s education reform plan and eliminated some requirements for state reporting of information to the Department of Education. The amendment also allowed local districts in certain states to apply directly to the Department for Goals 2000 funds, even if their state did not participate at the state level. As a result of the 1996 changes, the Goals 2000 program is essentially a funding-stream grant program with fiscal objectives. These types of grants differ from performance-related grants, which have more immediate, concrete, and readily measurable objectives. Funding-stream grant programs often confine the federal role to providing funds and give broad discretion to the grantee. They are also the least likely of various grant types to have performance information. Goals 2000 does not have specific performance requirements and objectives, and the Department of Education has issued no regulations specifically related to performance by states and districts concerning their activities under Goals 2000. Rather, the Department of Education provides states the latitude to merge Goals 2000 funds with other funds from state and local sources to support state and local reform activities. However, the Department has identified objectives in its annual performance plan that it expects to achieve as a result of this program, along with other education programs. Goals 2000 funds, totaling about $1.25 billion for fiscal years 1994 through 1997, have supported a broad range of education reform activities at both the state and local levels. 
Of this amount, states reported that about $109 million (9 percent) was retained at the state level, where it was used for management, development of statewide standards, and other related purposes. The remaining funding was provided in the form of subgrants to local districts, consortia of districts, individual schools, and teachers. State program officials reported that subgrants supported a broad array of district efforts to promote education reform activities and keep up with new state standards and assessments. These efforts included developing district and school reform plans, aligning local curricula with new assessments, and promoting professional development activities for teachers. Subgrants, with few exceptions, were not used to support health-related activities. (See app. IV for additional information on state subgrants.) As permitted by the act, most states retained a portion of their total Goals 2000 funds at the state level and used it primarily to manage the subgrant program and support state-level activities. (See app. III for state-retained funds by category and fiscal year.) Many states retained less than the maximum amount permitted, and a few states retained almost no funds at all. In some instances, state-retained funds were combined with subgrants to support local initiatives. In the 4-year period that we reviewed, states were able to provide detail on how $62 million in state-retained funds have been used. Of this amount, states primarily used Goals 2000 funds for personnel and benefits and for contract services and consultants. (See fig. 1.) Funds were also used for training and travel; printing and postage; equipment and supplies; and rent, telephone, overhead, and other costs not classified elsewhere. The largest category of state-retained funds for which detail was available was personnel and benefit costs (44 percent). These expenditures typically involved salaries and benefits for state-level staff who managed the state's subgrant program and other state-sponsored education reform activities. Generally, these personnel were responsible for disseminating information on the Goals 2000 program, providing technical expertise to districts regarding grant requirements, assisting district personnel with proposal writing, reviewing districts' subgrant proposals, and managing the subgrant selection process. These staff also typically monitored subgrantees' expenditures and reviewed reports that subgrantees submitted regarding their projects. The remaining state-retained funds for which detail was reported were used for contract services, training and travel, printing and postage, equipment and supplies, and other activities. Contract services and consultant fees constituted about 28 percent of state-retained funds. These expenditures were often associated with state efforts to create new standards and assessments, develop new curricula in alignment with the standards, and use outside experts to research and develop these measures. Travel, training, and conference costs, accounting for about 9 percent of total expenditures, typically supported state Goals 2000 panel activities and training for teachers and administrators. These funds were also used to support state conferences designed to educate district and school officials about Goals 2000 and allow them to share information and collaborate on projects.
Printing and postage made up 7 percent of state-retained funds, and funds used for equipment and supplies, such as purchasing computer hardware and software, made up another 7 percent. Other expenses—such as rent, telephone costs, overhead, and other costs not classified elsewhere—accounted for the remaining 5 percent of the identified funds. The additional $47 million identified by states as having been retained at the state level had either not yet been spent or could not be identified in detail. Most state officials said that Goals 2000 funding has been an important resource in their states' development of new standards and assessments, but they were unable to estimate how much future Goals 2000 funding they would need to complete these activities. Generally, officials said they were unqualified to make this estimate because their involvement in the state's overall education reform efforts was limited or they viewed the development of standards and assessments as an iterative process that will never be fully complete. We identified 16,375 local subgrants totaling over $1 billion that were awarded with funding provided in fiscal years 1994 through 1997. As shown in table 1, the number of subgrants and total dollar amount of subgrant awards rose each year from fiscal year 1994 through fiscal year 1996. (Amounts for fiscal year 1997 are incomplete because several states had not yet awarded their subgrants for that year at the time of our review.) Subgrants ranged from a $28 subgrant that funded a reading professional development activity in a single California school to a $6.1 million subgrant for fourth- to eighth-grade reading instruction awarded to the Los Angeles Unified School District. More than 34 percent of the 14,367 school districts nationwide that provide instructional services received at least one Goals 2000 subgrant during the 4-year period reviewed. Many districts received Goals 2000 funding for 2 or more of the years we reviewed. Over the 4-year period reviewed, Goals 2000 subgrants funded several general categories of activities: local education reform projects, professional development, computer equipment and training, preservice training, and standards and assessments. Local education reform projects and professional development, the two largest categories, together account for about two-thirds of the subgrant funding. Some activities fell into a "crosscutting and other" category that reflected activities that had been combined or were too infrequent to categorize separately. In cases where states could not identify a single primary activity for a grant, we classified the grants as having had a crosscutting purpose. (See fig. 2.) Table 2 summarizes some of the activities undertaken with subgrant funds under each of the general categories. Local education reform activities, constituting about 39 percent of total subgrant funding, included activities such as the development of district improvement plans, alignment of local activities with new state education reform plans, and efforts to update curriculum frameworks. For example, Indiana awarded a subgrant to align curricula and instruction and to design and implement an improvement plan that allows secondary schools to build on foundations developed at the elementary schools. In Kentucky, state officials reviewed their comprehensive reform activity and concluded that their plan was missing a public engagement program for parents and community members that would sustain education reform.
Thus, the state awarded subgrants to improve public information, boost parental understanding, increase families’ understanding of technology, engage parents, and broaden the reach of the school into the community. Professional development activities, representing about 28 percent of Goals 2000 subgrant funding over the 4-year period reviewed, included activities such as updating teacher skills in new teaching approaches and providing enrichment courses for teachers. For example, Tennessee provided a grant for 11 teachers to complete a year-long Reading Recovery training program in strategies to teach the most at-risk first-graders to read. Teachers who participated in the training program subsequently used the strategies to help 63 of 89 at-risk first-graders progress to reading at a level comparable to the average of their class. In the Troy, New York, area, subgrants funded a series of professional development activities for staff providing inservice programs, a curriculum workshop, and training in the use of learning and telecommunications technologies as tools to support innovative instructional processes. Preservice training activities, which involved teachers-in-training and university programs conducting new teacher training, used about 6 percent of the subgrant funds. For example, subgrant projects funded mentor programs in Illinois, where up to 50 percent of new teachers leave the profession after 5 years. In Peoria, Goals 2000 funded a grant allowing education majors in local colleges to attend an educators’ fair, observe classes, create projects for classroom use, and meet regularly with selected master teachers from the district. In Delaware, a subgrant funded technology and staff support for a preservice program that allowed second-year student teachers to teach during the day and attend courses by videoconference rather than driving long distances to the state’s only university with a preservice training program. Subgrants for computer equipment and training—which are used to buy computer hardware and software, network schools to educational sites on the Internet, and train teachers and staff on the effective use of the new technology—amounted to about 10 percent of total funding. For example, a subgrant in Louisiana allowed a teacher to buy a graphing calculator, which could be used with an overhead projector to help low-performing math students better understand algebra. In some states, districts could purchase technology using Goals 2000 funds if the primary purpose of the subgrants involved meeting state education reform goals. Other states—including New Mexico, Kansas, and Wisconsin—permitted districts to purchase technology using Goals 2000 funds only if the equipment was closely tied to an education reform project. As one Wisconsin official stated, “Districts cannot purchase technology for technology’s sake.” A few states restricted technology purchases in 1 or more years. Oregon, for example, did not permit districts to purchase high-cost computer equipment using Goals 2000 subgrant funds. However, some states, such as Virginia and Alabama, required all subgrant projects to be associated with technology. Officials in these states told us that they had taken this approach because their states tied their education reform efforts to their state technology plans or because the approach was one of the least controversial purposes available for using Goals 2000 funds. 
Standards and assessments activities, accounting for about 5 percent of total subgrant funding, included funding for such activities as the development of standards, alignment of current curriculum standards with new state content standards, and the development of new or alternative assessment techniques. For example, state officials in New York said Goals 2000 funds are being used to clarify standards for the core curriculum and to prepare students for the state’s regents examination for twelfth-graders—an examination all New York students must pass to graduate from high school. State staff were also developing new assessments using state-retained funds. With Goals 2000 funds, Texas funded the development and dissemination of its Texas Essential Knowledge and Skills (TEKS) program, which informs teachers about what students should know and be able to do. Goals 2000 paid for items such as a statewide public and committee review of TEKS and subsequent revisions; printing and distribution of TEKS following its adoption by the state board; and ongoing support, including statewide centers, resource materials and products, and training related to TEKS. In Louisiana, Goals 2000 project directors reported that teachers in a number of subgrant projects were able to experiment with alternative assessment techniques. Project directors reported that team planning and networking made possible by Goals 2000 grants encouraged more applied learning strategies and the use of alternative approaches to student evaluation, such as portfolios, applied problem solving (especially in math and science), the use of journals, checklists, and oral examinations. These subgrant activities associated with education reform, reflecting districts’ crosscutting approaches to meeting education reform goals, accounted for the remaining 12 percent of subgrant funding. In many of these cases, state officials were unable to identify a single focus for subgrant activities because they reflected a combination of activities. Some subgrants, for example, combined development of a district improvement plan (a local education reform activity) with teacher education on the new curriculum (a professional development activity). In Pennsylvania, most of the $41 million in subgrants for the 4-year period had several different areas of focus, such as a district’s $462,100 subgrant identified as being for the development and implementation of a local improvement plan, assessments, technology, and preservice teacher training and professional development. Less than two-tenths of 1 percent of Goals 2000 subgrant funding was identified as being used to support health-related education activities. In the 31 subgrants specifically identified as being related to health issues, most involved nutrition and hygiene education efforts that district officials believed were important to the preparedness of their students to learn. For example, a subgrant in New Mexico focused on making children healthier and used subgrant funds to implement a curriculum that taught children about health issues, such as dental care, nutrition, exercise, and problems associated with cigarette smoking and alcohol use. According to a state official, this proposal was in congruence with a comprehensive health component that state officials had originally included in the state’s education reform plan because they believed that their reform effort should address barriers to learning. Subgrants to local education agencies supported state education reform efforts. 
Professional development, preservice training, standards and assessments, and technology subgrants generally were aligned with state standards or reform priorities. Almost all state and local officials said Goals 2000 funds provided valuable assistance to education reform efforts at both the state and local levels and that, without this funding, some reform efforts either would not have been accomplished or would not have been accomplished as quickly. Some officials said Goals 2000 had been a catalyst for some aspect of the state’s reform movement, though in most cases the funding served as an added resource for reform efforts already under way. State-level officials voiced strong support for the program’s existing funding design. Almost all of the state officials we interviewed told us that Goals 2000 funds furthered their state’s and local districts’ education reform efforts by providing additional funding that they could use to implement reform plans that they had already initiated. In many cases, state officials said that Goals 2000 state-retained funds or subgrant money allowed the state and districts to accomplish things that would not have been done—or would not have been done as quickly or as well—had it not been for the extra funding provided by Goals 2000. For example, one Oregon official said that Goals 2000 funding was the difference between “doing it and doing it right” and that, without Goals 2000 funds, the state would either not have been able to develop standards or would have had to settle for standards only half as good as the ones that were developed. For example, Goals 2000 funds allowed Oregon to bring in experts, partner with colleges, align standards, create institutes to help teachers with content standards, and articulate the curriculum to all teachers to prepare students for standardized testing. Local officials in Kentucky described how their Goals 2000 funded projects allowed them to make progress in meeting their new state standards and speed their reform efforts. In several cases, state officials reported that Goals 2000 had served as a catalyst for a certain aspect of their reform efforts, such as the development of standards and assessments. For example, in Nevada, a state official said that Goals 2000 was a catalyst for developing content and performance standards that identified what, at a minimum, students would need to master at certain grade levels. Before Goals 2000, the state did not even have the terminology for standards-based reform. Goals 2000 brought terminology and a consistency of ideas regarding standards-based reform, he said. Goals 2000 was also a catalyst for education reform communication in Missouri. One state official reported that Goals 2000 was the vehicle that got schools and universities talking for the first time about issues such as student-teacher preservice training. While the scope of our work did not specifically include ascertaining the view of state education officials on the format of the Goals 2000 funding, most of the officials we interviewed expressed support for continuing the funding in its present format. The Congress has been considering changing the present format of Goals 2000 funding as part of ongoing discussions on how to better assist states in their education reform efforts. Almost every state official told us that flexibility is key to Goals 2000’s usefulness in promoting state education reform because states could direct these funds toward their state’s chosen education reform priorities. 
The current level of flexibility, officials told us, allowed states to use their state-retained funding according to self-determined priorities as well as structure their subgrant programs to mesh with their states’ education reform plans. As one Washington state official said, Goals 2000 is laid out in the law with broad functions rather than with specific programs, which has had an impact in bringing schools and districts together to increase standards and prioritize issues rather than developing program “stovepipes.” A state official from Arizona said that the flexibility permitted in determining how funds will be used allows states that are at different points in the reform process to use the funds according to their own needs—an especially important feature given the wide variation among states with respect to education reform progress. In New York, local and state officials described the Goals 2000 funding as being valuable because it allowed the state to react quickly to problems and opportunities. As one official stated, “It allows you to change the tire while the car is moving.” Further, several state officials told us that they did not want more program flexibility, such as placing the funding into block grants that could be used for many purposes in addition to education reform. Generally, these state officials wanted the funding criteria to remain as they are with funds dedicated to systemic education reform purposes at a broad level but permitting flexibility at the state and local levels to determine what would be funded within that broad purpose. For example, Louisiana state officials said that they feared the funding would be used in lieu of current state spending if it were not earmarked for education reform and that this would reduce the level of reform that would occur in the state. In Nevada, an official told us that he did not want Goals 2000 funds to be more flexible because he thought this would cause the state to lose the focus on the standards and improved learning that it has had under Goals 2000. Title III of Goals 2000 provided more than $1.25 billion from fiscal years 1994 through 1997 for broad-based efforts to promote systemic improvements in education. State and local officials believe that Goals 2000 funding has served a useful purpose by helping states to promote and sustain their individual education reform efforts over the past 4 years. While the state-retained portion of funding allowed states to employ staff to coordinate overall reform efforts, the bulk of the funding was distributed as subgrants to thousands of local districts where, according to state and local officials, it enhanced their ability to develop education reform projects, professional development activities, preservice training, and new standards and assessments. Goals 2000 funds have provided an additional resource to enhance education reform efforts and helped states promote and accomplish reforms at an accelerated pace—which state officials believed would not have occurred without this funding. By giving states the flexibility to target funds toward their own education reform goals, states were able to direct funds toward their greatest priorities within the broad constraints of the law. 
While a program such as this, which entails great latitude in the use of funds and requires little in the way of reporting requirements, reduces some of the states’ accountability for process and results, Goals 2000 appears to be accomplishing what the Congress intended—providing an additional and flexible funding source to promote coordinated improvements to state and local education systems. The Department of Education provided written comments on a draft of this report. The Department said that our report represents the most comprehensive review to date of state and local activities supported under Goals 2000 and that it would find this information extremely informative in its consideration of reauthorization proposals. Staff from the Goals 2000 office provided technical comments that clarified certain information presented in the draft, which we incorporated as appropriate. The Department of Education’s comments appear in appendix V. Copies of this report are being sent to the Secretary of Education and interested congressional committees. We will also make copies available to others upon request. If you have questions about this report, please call me or Harriet Ganson, Assistant Director, on (202) 512-7014. Other major contributors to this report are listed in appendix VI. We were asked to (1) review the purposes for which Goals 2000 state-retained funds have been used, (2) determine what local projects have been funded using Goals 2000 funds, (3) determine state officials’ views about how Goals 2000 relates to state reform, (4) ascertain how much of Goals 2000 funds have been used for developing standards and assessments and what future support is needed for these purposes, and (5) find to what extent Goals 2000 funds have been used for health education activities. For reporting purposes, we combined these questions into two broader objectives: (1) how Goals 2000 funds have been spent at both the state and local levels, including the levels of funding for developing standards and assessments as well as health education, and (2) how state and local officials view Goals 2000 as a means to promote education reform efforts. To conduct our work, we visited 10 states and interviewed federal, state, and local officials in these states. We also reviewed documents from the Department of Education, state departments of education, and the Council of Chief State School Officers; surveyed Goals 2000 coordinators in all states; analyzed quantitative and qualitative data from federal and state Goals 2000 offices and from independent audits; and reviewed the statutory and regulatory requirements of the Goals 2000 program. To obtain information about each assignment objective, we conducted site visits to 10 states, which account for over 32 percent of the 4-year total Goals 2000 funding under review. The sites visited were California, Delaware, Illinois, Kentucky, Louisiana, Maryland, New York, Oregon, South Carolina, and the District of Columbia. The selection of these sites was made on the basis of the 10 states’ funding allocations and geographic representation, the number of subgrants awarded, activities we became aware of during our review, and recommendations of the Department of Education and Council of Chief State School Officers. At each site visit location, we interviewed state, district, and school officials to obtain comprehensive and detailed information about how the program has been used to promote education reform. 
At the state level, we spoke with various officials including state superintendents, Goals 2000 coordinators and staff, and financial officials. At the district level, we spoke with representatives of 71 districts. These included district superintendents, finance or budget officials, district staff, teachers, and students. In addition to the site visits, we also conducted comprehensive telephone interviews with state Goals 2000 coordinators. Both the telephone interviews and the site visits were used to obtain information on how each state has used Goals 2000 funding to support education reform. These interviews also included queries on subgrant selection criteria and processes, financial and programmatic monitoring, and evaluation efforts. We surveyed each state, the District of Columbia, and Puerto Rico to obtain financial and programmatic documentation of their Goals 2000 program. (Although small amounts of Goals 2000 funds are provided to the U.S. Territories and the Bureau of Indian Affairs, we did not review their programs.) We collected this documentation, reviewed it, and cross-checked it with documents and funding reports from the Department of Education and the Council of Chief State School Officers. We also clarified any discrepancies found in the data during our interviews. Documentation provided to us included requests for proposals, state reform plans, progress reports, budget and expenditure reports, and applicable audits. We also gathered and analyzed subgrant summaries from each state containing the name of the recipient, category of the subgrant, and subgrant amounts for all subgrants supported by Goals 2000 funds from fiscal years 1994 through 1997. (See app. IV.) For various reasons, several states were unable to provide details on state-retained funds, subgrant data, or both for 1 or more years. We reviewed title III of the Goals 2000: Educate America Act and analyzed regulations pertinent to the program. This review provided the foundation from which we analyzed the information collected. In conducting the data collection, we relied primarily on the opinions of the officials we interviewed and the data and supporting documents they provided. Although we did not independently verify this information, we requested copies of all state audits pertaining to Goals 2000 and reviewed those we received for relevant findings. We also reviewed, for internal consistency, the data that officials provided us and sought clarification where needed. We did not attempt to determine the effectiveness of the various grant-funded activities or measure the outcomes achieved by the funded projects. We conducted our work in accordance with generally accepted government auditing standards between November 1997 and October 1998. From fiscal years 1994 through 1997, a total of $1,262,740,153 was allocated to the states and the District of Columbia and Puerto Rico. The smallest allocation was $370,124 to Wyoming in 1994; the largest was $54,659,343 to California in 1997. (See table II.1, Goals 2000 Allocations by State, Fiscal Years 1994 Through 1997.) Fiscal year 1995 and fiscal year 1996 funds were awarded directly to LEAs in Montana, New Hampshire, and Oklahoma on a competitive basis. Direct awards are also being made to LEAs in Montana and Oklahoma with respect to fiscal year 1997 and fiscal year 1998 funds.
The Goals 2000: Educate America Act permits states to retain a portion of their total Goals 2000 funds at the state level—up to 40 percent in fiscal year 1994 and 10 percent thereafter—to develop state reform plans and engage in statewide activities. States primarily use this portion to manage the district subgrant program and support state-level activities. Many states retained less than the maximum amount permitted, and a few states retained almost no funds at all. As shown in table III.1 below, states primarily used Goals 2000 funds for personnel and benefits; contract services and consultants; and, to a lesser extent printing, travel, equipment, training, supplies, and conferences. Other expenses such as rent, telephone, and postage (along with indirect and other costs not elsewhere classified) accounted for the remainder. In cases where states could not provide specific categorizations for the state-retained funds they reported, these amounts were included in the "other" category. This appendix provides state-by-state information on subgrants made to local school districts and other organizations. Table IV.1 shows the number and amount of subgrants in total for each state, table IV.2 shows the number of subgrants by category for each state, and table IV.3 shows the dollar amounts of subgrants by category for each state. In addition to those named above, the following individuals made important contributions to this report: Dawn Hoff collected and analyzed state information and drafted major sections of the report, Sonya Harmeyer collected state information and had a lead role in analyzing and developing graphic presentations of the data, Richard Kelley gathered and assisted in the analysis of information from states and the Department of Education, Edward C. Shepherd and Jennifer Pearl assisted in data collection activities, Edward Tuchman provided assistance in analyzing and verifying data, Stanley Stenersen assisted in structuring and reviewing the draft report, and Jonathan Barker of the Office of the General Counsel provided legal assistance. | Pursuant to a congressional request, GAO reviewed the Goals 2000 Program, focusing on determining how: (1) its funds have been spent at both the state and local levels, including the levels of funding for developing standards and assessments as well as health education; and (2) state and local officials view Goals 2000 as a means to promote education reform efforts.
GAO noted that: (1) Goals 2000 funds are being used to support a broad range of education reform activities at the state and local levels; (2) grants to states in the 4 fiscal years (FY) that GAO reviewed ranged from $370,000 to Wyoming in FY 1994 to $54.7 million to California in FY 1997; (3) over the 4-year period reviewed, Goals 2000 funds have been broadly disseminated: more than one-third of the 14,367 school districts nationwide that provide instructional services have received at least one Goals 2000 subgrant funded with fiscal years 1994 through 1997 funds; (4) state-retained funds were spent primarily for personnel, contracting services, and consultants involved in fund-related activities; (5) districts used Goals 2000 subgrant funds to pay for education reform initiatives centered around several major categories: local education reform; professional development; and technology acquisition and training; (6) other uses included preservice training for college students who plan on becoming teachers; the development of education standards and assessments; and crosscutting and other activities; (7) most states had begun their state education reform efforts prior to receiving Goals 2000 funds; thus, Goals 2000 funds have generally served as an additional resource for ongoing state reform efforts; (8) the districts' Goals 2000 activities appear to be aligned with state education reform initiatives; (9) many state officials reported that Goals 2000 has been a significant factor in promoting their education reform efforts and, in several cases, was a catalyst for some aspect of the state's reform movement; (10) state and local officials said that Goals 2000 funding provided valuable assistance and that, without this funding, some reform efforts would not have been accomplished as quickly, if at all; (11) state officials told GAO they supported the flexible funding design of the Goals 2000 state grants program as a way of helping them reach their own state's education reform goals, and the program was achieving its purpose of supporting systemic education reform in states and districts; (12) a number of state officials noted that Congress' discussions about combining Goals 2000 funding with other federal funding in a block grant approach caused them concern, as they believe the increased flexibility of a block grant could increase the risk that the funds would not be spent on education reform; (13) however, Goals 2000 appears to be accomplishing what Congress intended; and (14) it is providing an additional and flexible funding source to promote coordinated improvements to state and local education systems. |
In the 1980s, FAA began considering how a satellite-based navigation system could eventually replace the ground-based system that had long provided navigation guidance to aviation. In August 1995, after years of study and research, FAA contracted with Wilcox Electric to develop WAAS. However, because of concerns about the contractor’s performance, FAA terminated the contract in April 1996. In May 1996, the agency entered into an interim contract with Hughes Aircraft. The interim contract with Hughes was subsequently expanded and became final in October 1996. Under the terms of the WAAS development contract, Hughes will deliver an initial operational capability (Phase 1 WAAS) to FAA by April 1, 1999. The original date written into the Wilcox contract was December 1997. Phase 1 WAAS will be able to support the navigation of aircraft throughout the continental United States for all phases of flight through Category I precision approaches. However, the Phase 1 system will not have sufficient redundancy to continue operations in the event of equipment failures and will have to be backed up by FAA’s current ground-based system. FAA expects to conclude the operational testing of Phase 1 WAAS in June 1999 and to commission the system by July 15, 1999. To make WAAS capable of serving as a “sole means” navigation system throughout the United States, FAA plans to expand the system in Phases 2 and 3 of the contract. The Phase 3, or full, WAAS is scheduled to be delivered by October 2001 and commissioned in early 2002. Our August 1997 report on WAAS to this Subcommittee and others provided details on the history of FAA’s cost estimates for WAAS. We found that although FAA knew that the facilities and equipment costs for WAAS could exceed $900 million, the agency presented to the Congress a figure that was some $400 million lower. In September 1997, FAA estimated the total life cycle cost of the WAAS program to be $2.4 billion. Of this amount, about $900 million is for facilities and equipment and $1.5 billion is for operations and maintenance through the year 2016. Accuracy, integrity, and availability are the major performance requirements for GPS/WAAS. Accuracy is defined as the degree of conformance of an aircraft’s position as calculated using GPS/WAAS to its true position. Integrity is the ability to provide timely warnings when the GPS/WAAS is providing erroneous information and thus should not be used for navigation. Availability is the probability that at any given time GPS/WAAS will meet the accuracy and integrity requirements for a specific phase of flight. WAAS is a system comprising a network of ground stations and geostationary (GEO) communications satellites. Reference stations (up to 54 sites) on the ground will serve as the primary data collection sites for WAAS. These stations receive data from GPS and GEO satellites. Master stations (up to 8 sites) on the ground will process data from the reference stations to determine and verify corrections for each GPS satellite. These stations also validate the transmitted corrections. Ground earth stations (up to 8 sites) will, among other things, receive WAAS message data from the master stations, and transmit and validate the message to the GEO satellites. GEO satellites will transmit wide-area accuracy corrections and integrity messages to aircraft and also serve as additional sources of signals similar to GPS signals. The ground communications system will transmit information among the reference stations, master stations, and ground stations. 
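The chain described above runs from the reference stations, through the master stations and ground earth stations, to the GEO satellites and, ultimately, to the aircraft. The following is a minimal sketch of that message flow only; the function names and the simplified correction arithmetic are assumptions made for illustration, and the actual WAAS correction and integrity processing is far more involved.

```python
# Simplified, hypothetical illustration of the WAAS data flow described above.
# Function names and the correction arithmetic are illustrative assumptions only.

def reference_stations_collect(satellite_signals):
    """Reference stations (up to 54 sites) record raw range measurements from GPS and GEO satellites."""
    return [{"satellite": s["id"], "measured_range": s["range"]} for s in satellite_signals]

def master_stations_process(measurements, expected_ranges):
    """Master stations (up to 8 sites) determine and verify a correction for each GPS satellite."""
    # Hypothetical: the correction is the discrepancy between the expected and measured range.
    return {m["satellite"]: expected_ranges[m["satellite"]] - m["measured_range"]
            for m in measurements}

def ground_earth_stations_uplink(corrections):
    """Ground earth stations (up to 8 sites) validate the WAAS message and send it to the GEO satellites."""
    return {"corrections": corrections, "integrity": "use / do-not-use flags"}

def geo_satellites_broadcast(waas_message):
    """GEO satellites rebroadcast the message (and also serve as additional ranging sources)."""
    return waas_message

def aircraft_receiver(raw_ranges, waas_message):
    """Avionics apply the broadcast corrections before computing the aircraft's position."""
    return {sat: rng + waas_message["corrections"].get(sat, 0.0)
            for sat, rng in raw_ranges.items()}
```

The point of the sketch is only the ordering of the steps; in practice the corrections, the integrity flags, and the resulting position solution are computed with far more elaborate algorithms than shown here.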
For pilots to use GPS/WAAS for navigation, their aircraft must be equipped with receivers that process the information carried by the GPS and GEO signals. The receivers will enable the pilots to determine the time and their aircrafts’ three-dimensional position (latitude, longitude, and altitude). While system developers and outside experts have confidence that WAAS can achieve most key performance requirements within current cost and schedule estimates, four concerns are worth noting: (1) the ability of WAAS to provide the level of service for precision approaches provided by existing ground-based systems; (2) the ability of computers to process the large quantities of GPS/WAAS data within a few seconds; (3) the vulnerability of GPS/WAAS signals to interference; and (4) the need for additional satellites to achieve the availability requirement. Regarding the first concern, it is uncertain whether WAAS can meet the requirement that the GPS/WAAS signal be available for precision approaches all but about 11 hours per year. Under current definitions based on ground-based navigation technology, a Category I system provides a level of service that allows aircraft to descend to an altitude (height) of not less than 200 feet when visibility is at least 1,800 feet. If WAAS cannot meet this requirement, FAA may incur additional costs to install local area augmentation systems at more airports than expected. The agency may also change the procedures by which pilots can make precision approaches. One procedural option under consideration is that FAA would require pilots to visually recognize additional approach markings before completing a landing. A decision is expected on any needed procedural changes by late 1998. A second concern is the integrity requirement that calls for the system to sound an alarm within 5.2 seconds when it receives hazardously misleading information, such as a correction that is wrong and would result in an aircraft operator being placed in a dangerous situation. The large volume of data that must be processed within a few seconds to meet this requirement is beyond the capabilities of computer data processors that are commercially available. However, FAA is testing newly developed processors and is confident that they will meet the agency’s needs. A third concern exists about the possibility that the GPS/WAAS signal could prove vulnerable to unintentional or intentional radiofrequency interference that could affect the signal’s availability or accuracy and, ultimately, flight safety. These vulnerabilities are common to ground- and satellite-based navigation aids. Because GPS broadcasts its signal at a very low power level, its signal is somewhat more vulnerable to interference. FAA expects to complete a vulnerability assessment for WAAS in October 1997. Once the assessment is completed, countermeasures, if needed, would be identified. Because of the sensitivity of this issue, we cannot go into details in this public hearing. FAA has stated that it will offer a private briefing for the Subcommittee. A fourth concern is whether FAA may have to add more GEO satellites to meet the availability requirement. FAA requires that GPS/WAAS be available virtually 100 percent of the time—all but about 5 minutes a year—for the phases of flight leading up to precision approaches. Although FAA originally thought it could meet this requirement by using four geostationary communications satellites, the agency may need five or six. 
If so, FAA could continue using one or two of the GEO satellites currently in space or obtain others. FAA intends to decide on the need for additional satellites by late 2000. Even with the added satellites, there may be isolated areas of air space, such as the far northern and western areas of Alaska, where the requirements may not be met. In such areas, according to FAA officials, FAA intends to use ground-based systems or local area augmentation systems to provide a level of service that is at least equal to what is provided today. The addition of one or two GEO satellites would increase the program cost beyond the current estimate of $2.4 billion. FAA expects that adding one or two GEO satellites would cost between $71 million and $192 million over the WAAS life cycle (2001-2016). FAA faces a very tight time frame for putting the GEO satellites in space. FAA intends to work with the Defense Department to begin the acquisition process this month, but it typically takes 4 years to acquire, launch, and check out a GEO satellite. Given FAA’s October 2001 milestone for the delivery of the full WAAS, any delays in putting the GEO satellites in space could cause the WAAS program’s schedule to slip. To get the full cost savings from WAAS, FAA will need to decommission its ground-based network of navigation aids, which now costs the agency $166 million annually to maintain. FAA’s plan presumes that both its current ground-based system and the new satellite-based system will be in place from the time that the full, Phase 3 WAAS is commissioned until the decommissioning of the ground-based network is completed in 2010. FAA’s plan recognizes that a critical factor in the transition will be the widespread installation by commercial and general aviation operators of GPS/WAAS avionics aboard their aircraft. FAA believes that the safety and economic benefits of GPS/WAAS will motivate aircraft operators to install GPS/WAAS avionics in the 5- to 6-year period after the services become available in 2001. The safety improvements include the vertical guidance WAAS will give aircraft during approach and landing at airports where no precision approach capability currently exists. This guidance enables aircraft to follow a smooth glide path safely to the runway. Other benefits include the cost savings that aircraft operators could realize by using one type of navigation equipment in the cockpit for all phases of flight and by flying more direct, fuel-efficient routes. FAA also expects that when it begins decommissioning ground-based navigation aids, aircraft that are not equipped with GPS/WAAS avionics will have to fly less direct routes and will have limits on the precision approach options available to them. As a result, there will be added incentives for aircraft operators to switch to satellite technology. Nevertheless, FAA’s plans could be impeded if the WAAS program’s schedule slips or if safety and economic benefits are not sufficient to cause the aviation industry to switch quickly to satellite technology. As already discussed, the primary concern about whether the WAAS requirements can be achieved on time is the potential for delays in putting the communications satellites in space. Economic considerations, however, could cause commercial and general aviation aircraft operators to switch to GPS/WAAS avionics more slowly than FAA envisioned in its Transition Plan. According to the U.S. 
GPS Industry Council, the typical GPS receiver used by large commercial aircraft costs between $20,000 and $50,000, and the typical GPS receiver used by smaller general aviation aircraft capable of flying when visibility is limited costs between $5,000 and $15,000. Database changes needed to keep the receivers up to date now cost $70 to $100 a month. Expenses for installing the equipment and training the pilots to use it would be additional. “Airspace users must have a compelling reason to change from their current ground-based avionics to space-based avionics. Simply stating that the technology is better is not enough. There must be real operational benefits for changing or the equipment will have to be mandated. Otherwise, avionics change will be extremely slow.”

The organization representing general aviation, the Aircraft Owners and Pilots Association, has argued that the present cost of GPS/WAAS avionics, including the cost of maintaining a current database, is not affordable for all segments of the general aviation community. Representatives of the Association told us that FAA’s plan for decommissioning by 2010 would be realistic if (1) FAA provides routes that are more direct, (2) less expensive avionics become available, (3) FAA places a high priority on certifying approach procedures where none currently exist, (4) inexpensive database updates for GPS receivers can be obtained electronically from FAA, and (5) FAA does not require aircraft operators to incur the added expense of carrying redundant (dual) GPS/WAAS receivers. FAA is currently working with industry to resolve these concerns. Even if the Association’s concerns are satisfied, however, FAA could still face a slower-than-expected conversion to GPS/WAAS avionics if individual aircraft operators do not conclude that the benefits of installing the new navigation equipment outweigh their costs. FAA would then have to make a difficult choice—either slow down its decommissioning of ground-based navigation aids or, in effect, require conversion by proceeding with decommissioning as planned.

In making investment decisions, FAA conducts benefit-cost analyses to determine if the benefits to be derived from acquiring new equipment outweigh the costs. In the case of WAAS, the benefits to the government include the cost savings from reduced maintenance of the existing, ground-based network of navigation aids and the avoidance of capital expenditures for replacing those aids. The benefits to aircraft operators—the users of the system—include the reduction in accident-related costs (from death, injury, and property damage) because WAAS landing signals would be available at airports that currently lack precision landing capability. Operators could also realize “direct route” savings that result from the shorter flight times on restructured, more direct routes that aircraft can fly using GPS/WAAS. The costs include the life cycle costs for WAAS facilities and equipment as well as operations and maintenance. Despite differing assumptions used in calculating benefit-cost ratios, FAA’s analyses dating back to 1994 have always found WAAS to be a cost-beneficial investment—that is, the benefits clearly exceeded the costs, resulting in benefit-cost ratios in excess of 1.
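Because the discussion that follows turns on benefit-cost ratios and net benefits, the short Python sketch below lays out the underlying arithmetic: discount the benefit and cost streams, take their ratio, and subtract costs from benefits to get net benefits. The annual dollar amounts and the 7 percent discount rate are hypothetical placeholders chosen for illustration; they are not figures from FAA's model.

```python
# Illustrative benefit-cost arithmetic for a program like WAAS.
# All dollar figures below are hypothetical placeholders, not FAA estimates.

def present_value(annual_amounts, discount_rate):
    """Discount a stream of annual amounts (years 1, 2, ...) to present value."""
    return sum(amount / (1 + discount_rate) ** year
               for year, amount in enumerate(annual_amounts, start=1))

def benefit_cost_summary(benefits_by_year, costs_by_year, discount_rate=0.07):
    """Return the benefit-cost ratio and net benefits of discounted streams."""
    pv_benefits = present_value(benefits_by_year, discount_rate)
    pv_costs = present_value(costs_by_year, discount_rate)
    return pv_benefits / pv_costs, pv_benefits - pv_costs

if __name__ == "__main__":
    years = 15
    # Hypothetical: $150M per year in user and government benefits, $60M per year in costs.
    base_benefits = [150.0] * years   # millions of dollars
    base_costs = [60.0] * years
    ratio, net = benefit_cost_summary(base_benefits, base_costs)
    print(f"Base case: ratio {ratio:.1f}, net benefits ${net:,.0f}M")

    # Deferring part of the benefits (e.g., decommissioning savings) for the
    # first five years lowers net benefits even though the ratio stays above 1.
    delayed_benefits = [110.0 if year < 5 else 150.0 for year in range(years)]
    ratio_d, net_d = benefit_cost_summary(delayed_benefits, base_costs)
    print(f"Delayed savings: ratio {ratio_d:.1f}, net benefits ${net_d:,.0f}M")
```

The second run simply pushes a portion of the annual benefits later in time, which is the same mechanism by which a decommissioning delay erodes net benefits in the analyses discussed below.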
The most recent 1997 analysis found (1) a 5.2 ratio of benefits to costs when passenger time savings were included in the direct route benefits and all aircraft would gain a savings of 1 minute per flight from shorter routes, and (2) a 2.2 ratio when passenger time savings were excluded and 30 percent of all aircraft would gain a savings of 1 minute per flight. When these two cases were evaluated in dollar terms, the net benefits of WAAS were $5.3 billion and $1.5 billion, respectively. (See app. II for details on FAA’s benefit-cost analyses for the WAAS program in 1994, 1996, and 1997.)

To understand the impact of the potential cost increases and decommissioning delays previously discussed, we requested that FAA’s support contractor perform alternative runs of the benefit-cost analysis. FAA’s 1997 analysis served as the base case for comparison purposes. One pessimistic scenario that we requested made the following alternative assumptions from the base case: (1) the development cost of the primary WAAS contract would increase by 15 percent, (2) the leasing costs for communications satellites would increase by 50 percent, and (3) the decommissioning of the ground-based navigation aids would be delayed by 5 years. Using these assumptions, the contractor’s analysis found that the benefit-cost ratio would be 4.6 when passenger time savings were included and all aircraft gained savings from shorter flights and 1.7 when passenger time savings were excluded and 30 percent of all aircraft gained savings from shorter flights. In dollar terms, net benefits declined substantially—about $490 million—when going from the base case to the pessimistic scenario. When scenarios were run using the three assumptions in turn, the analysis showed that the decommissioning delay of 5 years caused about $370 million of the decline in net benefits. The cost increases for contract development and satellite leasing contributed the remainder. We also asked for a run with a more pessimistic scenario in which the contract development and satellite leasing costs would increase by the same amount but ground-based navigation aids would never be decommissioned. In this case, the decline in net benefits totaled about $700 million. Ultimately, even when pessimistic assumptions were used, the analysis found that the benefits of the WAAS program still clearly outweighed its costs. However, delays in decommissioning or the retention of ground-based navigation aids would cause substantial decreases in the net benefits of the WAAS program.

We received comments on a draft of this testimony from officials of the Department of Transportation and FAA, including FAA’s Deputy Program Manager of the GPS Integrated Product Team and the WAAS Program Manager. These officials expressed general agreement with the findings of the testimony, considered it well-balanced, and provided clarifying and technical suggestions, which we incorporated as appropriate. Mr. Chairman, this concludes our statement. I would be happy to answer any questions that you or other Members of the Subcommittee may have.

The following summarizes the key GPS/WAAS performance requirements and the status of FAA’s efforts to meet them.

Availability: Probability that the system will provide an accurate and continuous navigation signal for each phase of flight.

En route through nonprecision approach: 99.999% availability (i.e., unavailable less than 5 minutes a year). FAA may need to add one or two GEO satellites to the four it planned to procure. Also, FAA is investigating the optimal placement of GEO satellites in orbit. But in isolated areas such as the far northern and western areas of Alaska, the requirement may not be met.

Precision approach: 99.9% available (i.e., unavailable 11 hours a year). FAA may field up to 54 ground stations, and Canada and Mexico may field up to 21. Between late 1998 and mid-1999, FAA will determine how many ground stations are needed based on system test results. FAA may be required to make changes to approach procedures to meet this requirement.

Accuracy: Percentage of time that an aircraft’s GPS position is within a given distance of the aircraft’s true position.

En route through nonprecision approach: Within 100 meters 95% of the time—during periods when this standard cannot be met (up to a cumulative 72 minutes a day), system safety will be guaranteed by a proposed 2-mile horizontal protection limit. Within 500 meters 99.999% of the time—during periods when this standard cannot be met (up to a cumulative 6 seconds a day), system safety will be guaranteed by a proposed 2-mile horizontal protection limit. No major concerns have been raised by system developers or outside parties about these requirements because the existing GPS already guarantees this level of performance. Feasibility testing at FAA’s National Satellite Test Bed (NSTB) has validated that these requirements have been met. FAA will revalidate whether the WAAS software and hardware will achieve these requirements.

Precision approach: Within 7.6 meters 95% of the time—during periods when this standard cannot be met (up to a cumulative 72 minutes a day), system safety will be guaranteed by a proposed 63-foot horizontal and vertical protection limit. No major concerns have been raised by system developers or outside parties about this requirement. FAA’s NSTB has achieved this level of accuracy. During WAAS software and hardware testing, FAA will validate that this requirement can be met.

Integrity: Ability of the system to provide users with timely warnings about erroneous information, expressed as the probability that the system will not detect hazardously misleading information.

En route through nonprecision approach: 1 chance in 10 million during 1 hour of system operation. Precision approach: 1 chance in 400 million per approach (an approach is the final 2-1/2 minutes of flight). No major concerns have been raised by system developers and outside parties about these requirements. FAA plans to acquire safety-certified equipment and software, and during hardware and software testing also plans to collect and analyze data to provide increased assurance that the requirements will be met. The feasibility of meeting the 5.2-second requirement (and, therefore, the 8-second requirement) has been demonstrated at FAA’s NSTB. But as WAAS processes more data, its ability to meet the requirement may decline. FAA’s present analysis shows that the requirement is being marginally satisfied. FAA is looking at faster processing equipment to accommodate the expected increase in data. FAA may need to add one or two GEO satellites to the four it planned to procure or it may have to relax the requirement. Experts believe relaxing the requirement may be possible, but FAA has to determine the impact on safety if, in the event of a catastrophic loss of both GPS and WAAS, air traffic controllers might have to rely on radar to separate and direct aircraft. No major concerns have been raised by system developers or outside parties because existing aircraft systems have demonstrated this ability. During testing, FAA will review contractor data to validate that the integrity requirement can be met.

Precision approach: Per approach, 1 chance in 550,000 that the accuracy and integrity requirements will not be met (an approach is the final 2-1/2 minutes of flight). No major concerns have been raised by system developers or outside parties about this requirement on the basis of the preliminary analysis. But because of the volume of data needed to validate compliance with this requirement, FAA is gathering additional data and exploring alternative methods for validating that the requirement can be met. FAA may need to add one or two GEO satellites to the four it planned to procure. Also, FAA is investigating the optimal placement of GEO satellites in orbit. But in isolated areas such as eastern Canada and oceanic airspace the requirement may not be met. FAA may field up to 54 ground stations, and Canada and Mexico may field up to 21. Between late 1998 and mid-1999, FAA will determine how many ground stations are needed based on system test results. FAA may be required to make changes to approach procedures to meet this requirement.

The results of FAA’s benefit-cost analyses of the WAAS program in 1994, 1996, and 1997 are summarized in table II-1. On the benefit side, benefits to the government accrue from the reduced maintenance of the existing, ground-based network of navigation aids and the avoidance of capital expenditures for replacing these aids. Benefits to users—the aircraft operators—fall into five categories: Efficiency benefits derive from having precision landing capability at airports where it does not now exist. Avionics cost savings reflect how GPS/WAAS will enable users to reduce the proliferation of avionics equipment in their cockpits. Fuel savings reflect the use of less fuel to fly aircraft that carry less avionics equipment. Safety benefits stem from the reduction in accident-related costs (death, injury, and property damage) because of the availability of WAAS landing signals at airports that presently lack a precision landing capability. Direct route savings result from the shorter flight times associated with restructured, more direct routes that aircraft can fly.

FAA’s 1997 benefit-cost analysis took a more conservative approach than previous versions of the model in estimating the benefit-cost ratio. That is, compared with the previous analyses, the assumptions underlying the current study increased the expected costs of WAAS and simultaneously reduced the expected benefits, which resulted in a lower benefit-cost ratio than found in the previous versions of the study. The higher total costs in the 1997 version were largely due to the inclusion of the costs of decommissioning land-based navigation systems that were not included in any earlier versions of the study. On the benefit side, several changes in key assumptions led to reduced expected benefits, including (1) a shorter life cycle for the project, (2) a reduction in the assumed “saved” costs from phasing out ground-based navigation systems, (3) a reduction in estimated safety benefits based on the use of the more recent accident data, and (4) a reduction in the expected flight time savings resulting from more direct routes.

National Airspace System: Questions Concerning FAA’s Wide Area Augmentation System (GAO/RCED-97-219R, Aug. 7, 1997).
Air Traffic Control: Improved Cost Information Needed to Make Billion Dollar Modernization Investment Decisions (GAO/AIMD-97-20, Jan. 22, 1997).
Global Positioning System Augmentations (GAO/RCED-96-74R, Feb. 6, 1996).
National Airspace System: Assessment of FAA’s Efforts to Augment the Global Positioning System (GAO/T-RCED-95-219, June 8, 1995).
Air Traffic Control: Status of FAA’s Modernization Program (GAO/RCED-95-175FS, May 26, 1995).
Aviation Research: Perspectives on FAA’s Efforts to Develop New Technology (GAO/T-RCED-95-193, May 16, 1995).
National Airspace System: Comprehensive FAA Plan for Global Positioning System Is Needed (GAO/RCED-95-26, May 10, 1995).
Global Positioning Technology: Opportunities for Greater Federal Agency Joint Development and Use (GAO/RCED-94-280, Sept. 28, 1994).
Airspace System: Emerging Technologies May Offer Alternatives to the Instrument Landing System (GAO/RCED-93-33, Nov. 13, 1992).

GAO discussed the Federal Aviation Administration's (FAA) Wide Area Augmentation System (WAAS) program, focusing on: (1) the likelihood of WAAS satisfying key performance requirements within current program cost and schedule estimates; (2) the importance of avoiding delays in FAA's timetable for shutting down (decommissioning) ground-based navigation aids; and (3) the potential impact of cost increases and decommissioning delays on the benefit-cost analysis for the WAAS program.
GAO noted that: (1) while the developers of WAAS and outside experts are confident that WAAS is likely to satisfy most key performance requirements within current program cost and schedule estimates, some concerns are worth noting; (2) specifically, FAA may make some procedural changes for aircraft landings if WAAS is not able to deliver the level of service provided by existing ground-based landing systems; (3) also, FAA may add more space-based equipment to meet performance requirements; (4) FAA expects to make decisions on these matters by late 1998 and late 2000, respectively; (5) if the space-based equipment is added, program costs would grow between $71 million and $192 million above the current total program cost estimate of $2.4 billion; (6) the program's schedule can be expected to slip if arrangements are not made immediately to put this equipment in space; (7) to realize the full cost savings from WAAS, FAA will need to avoid delays in decommissioning its ground-based network of navigation aids; (8) FAA estimates that it incurs costs of $166 million annually to maintain this ground-based network; (9) FAA's plans--which envision complete decommissioning of the network by 2010--presume that the full WAAS will become operational (commissioned) in 2001 and that the aviation industry will install the necessary equipment in its aircraft during the remainder of that decade; (10) however, the planned decommissioning could be delayed if the WAAS program's schedule slips or if safety and economic benefits, such as an aircraft's ability to take advantage of more fuel-efficient routes, are not sufficient to cause the industry to switch to satellite-based navigation technology by the end of the next decade; (11) cost increases and decommissioning delays, if they occur, would reduce the net benefits of the WAAS program, but program benefits would still outweigh costs; (12) FAA's July 1997 benefit-cost analysis found that benefits were: (a) more than five times greater than costs when passenger time savings were included and all aircraft gained savings from shorter flights; and (b) more than two times greater than costs when passenger time savings were excluded and 30 percent of all aircraft gained savings from shorter flights; (13) additional analyses done at GAO's request, using pessimistic cost and decommissioning assumptions, found that the WAAS program's benefits are still significantly greater than the costs; and (14) however, if the ground-based navigation network is not decommissioned or must remain in place much longer than expected, the net benefits from WAAS would be substantially reduced.
Americans rely on wastewater systems to protect public health and the environment. These systems are composed of a network of pipes, pumps, and treatment facilities that collect and treat wastewater from homes, businesses, and industries before it is discharged to surface waters. EPA sets standards for the quality of wastewater that can be discharged under the Clean Water Act. Under this law, the National Pollutant Discharge Elimination System (NPDES) program limits the types and amounts of pollutants that industrial and municipal wastewater treatment facilities may discharge into the nation’s surface waters. During the wastewater treatment process, solid materials, such as sand and grit; organic matter from sewage; and other pollutants are removed from wastewater before it is discharged to surface waters. This treatment helps to ensure that the quality of surface water is not degraded and that it can continue to be used for drinking water, fishing, and swimming. About 16,000 publicly owned wastewater treatment plants exist in the United States, and the American Society of Civil Engineers estimates that between 600,000 and 800,000 miles of sewer pipe help to deliver wastewater to these treatment plants. These systems are primarily publicly owned and provide wastewater service to more than 220 million Americans. Local communities have the primary responsibility to provide funding for wastewater infrastructure. According to U.S. Census Bureau (Census) estimates, in fiscal year 2006 local communities spent about $38 billion on wastewater operations and capital projects, while states spent about $1.3 billion. In addition, the federal government provides financial assistance for wastewater infrastructure, with EPA providing the largest amount through its CWSRF program. Under the CWSRF program, which was established in 1987, the federal government provides capitalization grants to states, which in turn must match at least 20 percent of the federal grants. The states then use the money to provide low-interest loans to fund a variety of water quality projects, and loan repayments are cycled back into the program to be loaned out for other projects. In 2007, states provided CWSRF loans totaling about $5.3 billion to communities and other recipients. Several studies have documented the deterioration in the condition of the U.S. wastewater infrastructure. According to EPA, the majority of the nation’s sewer pipe network was installed after World War II and is reaching the end of its useful life. Similarly, many of the wastewater treatment plants that were upgraded in the 1970s to comply with the Clean Water Act are aging and will need to be upgraded or replaced in the future. The American Society of Civil Engineers recently described the condition of the nation’s wastewater infrastructure as “poor,” and cited a lack of investment in critical components of this infrastructure as a contributing factor to this condition. The deteriorating condition of the nation’s wastewater infrastructure has direct impacts on human and aquatic health. Specifically, many older wastewater systems lack the capacity to treat increasingly large volumes of wastewater, particularly during periods of wet weather. In addition, cracks in sewer pipes allow rain or snowmelt to enter the wastewater system and overwhelm its capacity to adequately treat wastewater. 
Untreated wastewater can be released during the resulting sewer overflows associated with these wet weather events and introduce significant levels of pollution into local water bodies, which can pose risks to human health and result in beach closures and fish kills. EPA estimates that over 850 billion gallons of untreated wastewater are released annually into U.S. surface waters. Although local, state, and federal governments have invested billions in wastewater infrastructure over the years, studies by EPA and the Congressional Budget Office (CBO) suggest a potential gap exists between what is currently being spent on wastewater infrastructure and estimated future infrastructure needs. EPA’s 2002 analysis estimated a potential gap for wastewater infrastructure capital improvements, along with operations and maintenance, of about $150 billion to $400 billion over the period from 2000 to 2019. CBO estimated a gap of about $60 billion to $220 billion in capital funding alone over this same period. Without additional investment in the nation’s wastewater infrastructure, EPA and other groups have asserted that the environmental and public health gains made under the Clean Water Act during the last three decades could be at risk. However, these studies by EPA and CBO note that this gap is not inevitable, and policy makers and wastewater groups have proposed a variety of approaches to help bridge this gap, including the following: Implement EPA’s Sustainable Water Infrastructure Initiative. This initiative, which is called the Four Pillars, encourages wastewater and drinking water utilities to improve the management of their systems, to systematically plan ahead for infrastructure needs, and to charge the full cost of the service they provide to customers. Charging the full cost would require utilities to charge prices that reflect the costs of building, maintaining, and operating a wastewater system over the long term. Increase funding for the CWSRF. Federal CWSRF capitalization grants to the states had been declining in recent years, despite growing wastewater infrastructure needs. In both fiscal years 2008 and 2009, $689 million was appropriated for the CWSRF program, which was below the average from 2000 to 2007 of about $1.2 billion. Some proponents of the CWSRF have recommended increasing federal appropriations for this program, and the program has recently received additional federal funding. The American Recovery and Reinvestment Act of 2009 appropriated $4 billion in funding for the CWSRF program, and the President’s budget request for fiscal year 2010 asks for an increase in funding for the program. In addition, some have suggested increasing the pool of available CWSRF funds by encouraging more states to use their federal capitalization grants as collateral in the public bond market. This practice, known as “leveraging,” allows states to borrow additional money to lend out through the CWSRF. Currently, about 27 states leverage their capitalization grants. Establish a national infrastructure bank. Three bills were introduced in the 110th Congress that proposed establishing a national infrastructure bank or other entity that would provide financing for a variety of infrastructure projects, including wastewater infrastructure projects. This entity would independently evaluate projects and determine the most appropriate way—through loans, grants, or other financial tools—to finance them. Encourage public-private partnerships.
Historically, wastewater infrastructure has commonly been owned and operated by public entities, such as local municipalities. However, other approaches exist where private entities can provide services such as designing, constructing, or operating infrastructure projects, including wastewater systems. In recent years, these partnerships have become more common in the transportation sector. Lift private activity bond restrictions on wastewater projects. Private activity bonds are tax-exempt bonds issued by state or local governments to provide special financing benefits for qualified projects. These bonds are used to provide financing to private businesses for certain facilities, such as airports, electric and gas distribution systems, mass transit systems, solid waste disposal sites, and wastewater plants. Because private activity bonds are exempt from federal tax, states and municipalities can borrow money at lower interest rates. However, states are limited in the amount of private activity bonds that they can issue annually. While certain projects such as airports and solid waste disposal facilities are exempt from this cap, wastewater infrastructure facilities are subject to this cap. Removing this restriction could increase the level of low-interest financing available for wastewater projects. Create a federal clean water trust fund. Establishing a clean water trust fund could help to provide a dedicated source of federal funding for wastewater infrastructure. Federal trust funds, such as the Highway and the Airport and Airways Trust Funds, are used to account for funds that are dedicated for spending on a specific purpose. Unlike trustees of private trust funds, a federal agency may exercise a greater degree of control over its trust fund. As authorized by law, the federal government may control the fund as well as its earnings and raise or lower future trust fund collections and payments or change the purposes for which collections are used. Three main issues would need to be addressed in designing and establishing a clean water trust fund, according to stakeholders. These issues include: how a trust fund should be administered and used; what type of financial assistance should be provided for projects; and what activities should be eligible for funding. Administration and use of a trust fund. Stakeholders told us that designing a clean water trust fund would involve deciding what agency or entity would administer the fund and whether the trust fund would be used to fund the CWSRF or a separate program. A majority of stakeholders (15 of 20) responding to our questionnaire expressed the view that a trust fund should be administered through an EPA-state partnership like the current CWSRF program. However, as figure 1 shows, stakeholders differed in their views on how a trust fund should be used. About a third of stakeholders (7 of the 20) expressed the view that a trust fund should be used only to fund the existing CWSRF. Stakeholders cited several reasons for this view, including their interest in building on the success of the CWSRF program, avoiding the redundant administrative costs associated with establishing a new wastewater infrastructure program, and providing a dedicated funding source to increase available funding for the CWSRF program. Three of 20 stakeholders that responded to our questionnaire said that a trust fund should not be used to support the existing CWSRF, but rather to fund a separate and distinct wastewater infrastructure program. 
One of these stakeholders told us that the CWSRF does not prioritize funding to wastewater systems with the greatest needs. Stakeholders we interviewed said that CWSRF loan amounts can sometimes be inadequate to meet the needs of large urban areas that have large and costly infrastructure projects and that smaller communities may lack the administrative capacity to go through the process of applying for a CWSRF loan. In addition, our past work has found that states vary in the way they allocate CWSRF funds for small or economically disadvantaged communities and that some states have placed limits on the amount of CWSRF funding any one borrower can receive in a single year. Twenty-five percent of questionnaire respondents (5 of 20) supported using a trust fund to both fund the CWSRF and establish a separate and distinct program. These stakeholders said the CWSRF needed a dedicated source of funding, but that the flexibility of a new program could help to address some of the CWSRF’s limitations. Finally, 3 of 20 stakeholders responding to our questionnaire were opposed to the creation of a clean water trust fund to support the nation’s wastewater infrastructure. According to these stakeholders, utilities should be self-sustaining through the rates they charge their customers and by more efficiently managing their systems. These stakeholders also attribute the potential gap between projected future wastewater infrastructure needs and current spending to the reluctance of wastewater utilities to charge the full cost of the services they provide. Charging the full cost would require utilities to charge prices that reflect the costs of building, maintaining, and operating a wastewater system over the long term. Our past work has highlighted similar concerns with the management of local wastewater utilities. Specifically, we found that many utilities were not routinely charging the full cost for wastewater services and that the practice of systematically identifying and planning for infrastructure improvements, known as asset management, could help utilities better address their infrastructure needs. Type of financial assistance. Another design issue that stakeholders identified was specifying the type of assistance—grants or loans—that a clean water trust fund would provide. Over half of the stakeholders responding to our questionnaire (13 of 21) favored distributing funding to wastewater infrastructure projects using a combination of loans and grants. According to many of these stakeholders, the type of assistance provided by a trust fund should be tailored to the applicant’s needs and capacity. Some of these stakeholders explained that while some communities can take on debt and pay back loans for wastewater projects, others may need grants because they are unable to pay back loans. Other stakeholders who we talked to also stated that loans impose discipline on borrowers, who are responsible for repayment, but that grants may be needed for certain communities that cannot make loan repayments, such as those with declining or low-income populations. These stakeholder views are consistent with some of the policy debate surrounding the reauthorization of the CWSRF, in which certain groups have supported the distribution of grants, as well as loans, for certain wastewater projects, through the CWSRF as is currently allowed under the Drinking Water State Revolving Fund. 
A provision allowing some funding to be distributed as grants would be similar to recent legislation; specifically, some of the funding provided to the CWSRF by the 2009 American Recovery and Reinvestment Act can be distributed in the form of grants. In contrast, 3 of 21 stakeholders who responded to our questionnaire told us that funding to support wastewater infrastructure projects should be distributed using loans only while 2 said that only grants should be used. The stakeholders supporting the use of loans said that the funds from the repayment of these loans provide a source of funding to meet future infrastructure needs, and that below-market interest rates can be offered on these loans as an affordable way for communities to fund wastewater infrastructure. One of the stakeholders who said that funding to support wastewater infrastructure projects should be distributed using grants stated that a grant program will help lower costs for municipalities and allow them to offer more affordable wastewater utility rates. Eligible activities. Finally, stakeholders said that designing and implementing a clean water trust fund would involve determining the type of wastewater infrastructure activities that the fund would support. Most stakeholders who responded to our questionnaire supported using a trust fund for planning and designing wastewater projects (18 of 21) and for capital costs (19 of 21). Some stakeholders noted that these two activities are closely linked—planning and designing are essential components of carrying out capital projects. Stakeholders that supported using the trust fund for capital costs identified many of the activities that are currently eligible for funding under the CWSRF as those that should be eligible to receive support under a clean water trust fund. These activities include expanding wastewater systems to meet existing needs, replacing or rehabilitating wastewater collection systems or treatment facilities, and correcting wastewater overflows from wastewater systems. Many of these stakeholders said that capital costs should be given priority because these are major costs and represent the most pressing needs for utilities. Moreover, according to some stakeholders, capital costs should be eligible for funding because communities may incur significant costs when upgrading or rehabilitating their wastewater systems in order to comply with Clean Water Act requirements or other federal mandates. In addition to capital costs, stakeholders identified other activities that should be eligible for funding, including providing rate-payer assistance to low- income households, supporting green infrastructure and nonpoint source pollution projects, and training wastewater plant operators. Only 2 stakeholders responded that a trust fund should be used to support operations and maintenance for wastewater utilities. Appendix II provides the full range of stakeholder responses to the questionnaire on design issues. Appendix III provides a list of stakeholder groups that responded to our questionnaire. Although a variety of options have been proposed in the past to generate revenue for a clean water trust fund, generating $10 billion from any one of these alone may be difficult. In addition, each funding option poses various implementation challenges, including defining the products or activities to be taxed, establishing a collection and enforcement framework, and obtaining stakeholder support. 
Various funding options, including excise taxes on products that may contribute to the wastewater stream, an additional tax on corporate income, a water use tax, and an industrial discharge tax, could generate a range of revenues for a clean water trust fund. However, it may be difficult to raise $10 billion for a clean water trust fund from any one of these options because of the small size of the tax bases of many of these options. Excise taxes on products that may contribute to the wastewater stream could be used to generate revenue for a clean water trust fund. These products include beverages, fertilizers and pesticides, flushable products, pharmaceuticals, and water appliances and plumbing fixtures. While past proposals for funding a clean water trust fund have identified these products as contributing to the wastewater stream, limited research has been done on their specific impact on wastewater infrastructure, according to EPA. See table 1 for a description of these product groups and how these products may contribute to the wastewater stream. The tax base for each group of products in 2006—the value of products manufactured domestically as well as those imported, but excluding exports—varied from about $26 billion for water appliances and plumbing fixtures to about $156 billion for pharmaceuticals, after adjusting these tax bases to 2009 dollars. In addition, raising $10 billion from a tax on any individual product group would require tax rates varying from a low of 6.4 percent for pharmaceuticals to a high of 39.2 percent for water appliances and fixtures. Alternatively, a lower tax rate could be levied on a number of these product groups that would collectively generate about $10 billion. Table 2 shows the tax bases for the product groups along with the revenue that could be generated from a range of tax rates. Appendix IV presents additional information on the tax bases for these funding options. Alternatively, a per unit excise tax could be levied on these products. For example, according to the Container Recycling Institute, there were about 215 billion bottled and canned beverages sold in 2006. Levying a 1 cent tax on these bottles and cans could yield about $2.2 billion, and raising $10 billion would require a tax of about 5 cents. Another option that could be used to fund a clean water trust fund is to levy an additional tax on the incomes of corporations. This tax would be similar to the Corporate Environmental Income Tax (CEIT) that helped fund the Superfund program until 1995. Increasing the current corporate income tax by levying an additional 0.1 percent on the $1.4 trillion in corporate taxable income reported in 2006, after adjusting for inflation, could raise about $1.4 billion annually. Higher tax rates would need to be levied to generate a larger amount of revenue. For example, a 0.5 percent tax could raise $6.9 billion and to raise $10 billion from this option, an additional tax of about 0.7 percent would need to be levied. However, this level of taxation would exceed the 0.12 percent CEIT that was in place under Superfund when it expired in December 1995. Another option to fund a clean water trust fund is a tax on water usage. A tax on water use could involve a volume-based charge or a flat charge added to local residential, commercial, and industrial water utility rates paid by water customers. 
For a volume-based charge, levying a tax of 0.01 cent per gallon on the 13.4 trillion gallons of water that were delivered to domestic, commercial, and industrial users from public supplies in 2000 could raise $1.3 billion annually, while a tax of about 0.1 cent per gallon could raise about $13 billion annually. Alternatively, a flat charge could be added to household wastewater bills, similar to the approach used in Maryland, which charges households $30 annually to help fund wastewater infrastructure in the state. At a national level, imposing a flat charge of $30 annually on the approximately 86 million households that receive wastewater service from wastewater utilities could raise about $2.6 billion annually. Raising $10 billion from a flat charge on households would require a charge of about $116 per year per household. Based on EPA estimates from 2003, American households paid about $474 annually for water and wastewater services; therefore, imposing an annual charge of $116 on households would represent an approximately 25 percent increase in customers’ water and wastewater bills.

A final option that we identified that could raise revenue to fund a clean water trust fund is an industrial discharge tax. A tax on industrial discharge could potentially be levied in two ways. The first would be to levy a fee on National Pollutant Discharge Elimination System (NPDES) permits. These permits, required under the Clean Water Act, allow a point source to discharge specified pollutants into federally regulated waters. A second approach would be to levy a tax on toxic chemical releases to water reported by industrial facilities to the Toxics Release Inventory (TRI), which contains data on the quantities of toxic discharges to air, water, or land for 581 chemicals and 30 chemical categories. However, it is unclear what level of taxation could be levied to generate $10 billion from either of these approaches because of data limitations. Specifically, EPA lacks complete and reliable data on the number of NPDES permits issued nationwide. Similarly, EPA does not have complete data on all of the toxic releases because TRI data are based on self-reporting by facilities that release chemicals above certain thresholds. In addition, these reports can be based on facilities’ estimates of their toxic releases rather than on actual measurements.

Implementing any of the funding options discussed above poses a variety of challenges, including defining the products or activities to be taxed and establishing a collection and enforcement framework, according to interviews we had with agency officials and other stakeholders. According to Internal Revenue Service (IRS) officials, implementing excise taxes on products requires the agency to develop clear and precise definitions of the products to be taxed, as authorized by Congress. These definitions determine whether taxpayers are required to pay excise taxes and how much tax they owe. In implementing excise taxes in the past, the IRS has developed these definitions after receiving comments from relevant industries. As part of this process, a decision also would need to be made regarding whether the tax would be levied on a per unit basis or a percentage of sales basis. The federal government collected $71.3 billion from federal excise taxes in 2007; many items are taxed on a per unit basis—a gallon of gasoline, for example—but some items are taxed as a percentage of sales, such as an airline ticket, which is taxed at 7.5 percent of the ticket price.
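All of the revenue estimates above follow the same back-of-the-envelope arithmetic: multiply a tax base by a rate (or per-unit charge), or divide the $10 billion target by the base to get the rate needed. The Python sketch below reproduces that arithmetic using the figures cited in this section; it is an illustration of the calculations rather than a revenue-scoring model, and small differences from the figures in the text reflect rounding.

```python
# Back-of-the-envelope revenue arithmetic for the funding options discussed above.
# Tax bases are the figures cited in the text; outputs are rounded illustrations only.

def revenue(tax_base: float, rate: float) -> float:
    """Revenue raised by applying a rate (or per-unit charge) to a tax base."""
    return tax_base * rate

def rate_needed(target: float, tax_base: float) -> float:
    """Rate (or per-unit charge) required to raise a revenue target."""
    return target / tax_base

TARGET = 10e9  # the $10 billion annual figure discussed in this section

# Ad valorem excise tax: pharmaceuticals tax base of about $156 billion.
print(f"Pharmaceutical tax rate needed for $10B: {rate_needed(TARGET, 156e9):.1%}")

# Per-unit excise tax: roughly 215 billion bottled and canned beverages sold in 2006.
print(f"1-cent-per-container beverage tax raises: ${revenue(215e9, 0.01) / 1e9:.2f}B")
print(f"Per-container charge needed for $10B: {rate_needed(TARGET, 215e9) * 100:.1f} cents")

# Corporate income surtax: about $1.4 trillion in corporate taxable income.
print(f"0.1 percent corporate surtax raises: ${revenue(1.4e12, 0.001) / 1e9:.1f}B")

# Volume-based water tax: 13.4 trillion gallons delivered from public supplies in 2000.
print(f"0.01-cent-per-gallon water tax raises: ${revenue(13.4e12, 0.0001) / 1e9:.2f}B")

# Flat household charge: about 86 million households with wastewater service.
print(f"$30-per-household charge raises: ${revenue(86e6, 30) / 1e9:.2f}B")
print(f"Per-household charge needed for $10B: ${rate_needed(TARGET, 86e6):.0f}")
```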
The larger the number of taxable products covered by an excise tax, the greater the challenge of defining these products, according to IRS officials. In addition, any exemptions to the excise tax would also need to be defined. According to IRS officials, a large number of exemptions could present additional implementation challenges because the agency would have to process applications from taxpayers seeking refunds for taxes paid on exempted products. IRS officials told us that the administrative costs associated with designing and implementing any new excise taxes could be substantial and this process could take more than a year to complete. In addition, once the taxable product(s) have been defined, IRS also would need to modify its excise tax collection and enforcement framework. Implementing new excise taxes would require the IRS to update the forms currently used to submit excise taxes and its computer systems to document these receipts, as well as training agency staff on administering the new excise taxes. Moreover, implementing new taxes would increase the auditing and enforcement responsibility of the IRS. In addition, to increase compliance the IRS conducts outreach to those who would be required to pay these excise taxes. All of these activities—making changes to forms and computer systems, training staff, and conducting outreach—would need to occur well in advance of the start of the tax filing season to eliminate possible confusion and could increase the agency’s administrative costs, according to IRS officials. In addition, the difficulty of collecting and enforcing excise taxes depends in part on the point at which the tax is collected and the number of taxpayers. According to IRS officials, collecting and enforcing an excise tax at the manufacturing level is preferable because it involves fewer taxpayers than a tax that is levied at the retail level.

According to IRS officials, implementing an additional tax on corporate income would require defining the types of corporations and the portions of their income that would be subject to this tax. For example, under Superfund, the CEIT was levied only on corporations that had income in excess of $2 million. In addition, while the current collection system for corporate income taxes could be used to collect this additional tax, this change would need to be communicated to both corporate taxpayers and IRS tax examiners to promote compliance.

Implementing a tax on water use also would pose challenges such as developing a collection system, deciding how to structure the tax, and determining the tax base or which users to tax. Collecting this tax could be difficult, because according to water and wastewater officials we spoke with, it would most likely involve relying on some of the billing systems in place for the nation’s existing 50,000 community water systems and over 16,000 publicly owned wastewater plants along with other local government entities. However, not all of these water and wastewater suppliers bill their customers based on the volume of water used. Instead, some charge a flat fee or have other types of rate structures. Some stakeholders said that a flat charge on households would be easier to administer, but that a volume-based charge on water use would be more equitable. In addition, decisions would need to be made regarding which users of the system—households, commercial, and industrial—would be subject to the tax.
Implementing an industrial discharge tax also could be difficult because there is no federal system currently in place to charge and collect such a tax. As a result, key steps, including defining the tax base—whether to tax discharge permits or actual discharge—determining a tax rate, and developing a collection and enforcement framework, would need to be completed before such a tax could be implemented. These efforts would likely be complicated by a lack of complete and accurate data on the number of permit holders and quantity of industrial discharge. Implementing such a tax would include the following specific challenges: Permit-based tax. Determining which of the two types of NPDES permits—individual or general—would be taxed and setting a tax rate could be difficult. Individual permits are typically issued for single facilities, such as wastewater treatment plants, while a single general permit can cover multiple facilities that are engaged in similar types of activities and located in a specific geographic area, such as construction sites. According to EPA officials, the types of effluent and levels of discharge covered by these two types of permits can vary significantly and charging a flat tax to all permit holders may not be equitable. In addition, because EPA currently does not collect any taxes or fees on NPDES permits, the agency would have to develop a basis for establishing a tax rate and put in place a collection and enforcement framework before a permit-based tax could be implemented. Discharge-based tax. Currently, EPA does not collect any taxes on industrial discharges, and to implement such a tax would require EPA to put in place a collection and enforcement framework. Developing such a framework could be difficult because EPA does not have complete data on the industrial discharges that are occurring or on the environmental and human health hazards posed by such discharges. For example, while the TRI has information on approximately 265 chemicals that are discharged to water, these data are based on annual reports submitted by industrial facilities. Moreover, EPA has limited national data on the discharge of conventional pollutants to water because many facilities that discharge these pollutants are not required to report this information to EPA. In addition, determining a basis for a tax rate could be difficult because of the potentially large number of chemicals and their varying characteristics. While EPA has developed toxic weighting factors that provide a relative measure of the toxicity for most of the TRI chemicals, EPA officials told us that there are inherent scientific difficulties in using existing toxicity weighting systems to compare toxicity among chemicals. Specifically, they told us that these systems may not adequately distinguish between cancer and non-cancer hazards and considering all such hazards together can be misleading. In addition, EPA has not developed toxic weighting factors for all chemicals in the TRI. EPA officials pointed out that these weighting factors were not developed for taxation purposes, and they expressed concern that using the TRI for this purpose could potentially discourage industries from reporting their full discharges to the TRI. Such an outcome would be a significant concern given that one of the TRI program’s primary goals is to increase the public’s access to the best available information on toxic chemical releases in their communities. 
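To illustrate why the toxic weighting factors and reporting gaps matter for a discharge-based tax, the Python sketch below computes a hypothetical toxicity-weighted tax bill from self-reported release data. The chemicals, weighting factors, release quantities, and tax rate are all invented for this example; as noted above, EPA's weighting factors were not developed for taxation purposes, so this is only a sketch of how such a tax might be structured, not a proposal.

```python
# Hypothetical toxicity-weighted discharge tax, for illustration only.
# Chemical names, weights, releases, and the rate are invented; EPA's toxic
# weighting factors were not designed for taxation (see discussion above).

TOXIC_WEIGHTS = {      # illustrative relative toxicity weights
    "chemical_a": 1.0,
    "chemical_b": 25.0,
    "chemical_c": 0.2,
}

def weighted_tax(releases_lbs: dict[str, float], rate_per_weighted_lb: float) -> float:
    """Tax owed = sum of (pounds released x toxicity weight) x tax rate."""
    weighted_pounds = sum(
        pounds * TOXIC_WEIGHTS.get(chem, 0.0)     # chemicals without a factor drop out,
        for chem, pounds in releases_lbs.items()  # mirroring the data-gap problem
    )
    return weighted_pounds * rate_per_weighted_lb

if __name__ == "__main__":
    # A facility's self-reported annual releases to water, in pounds (invented numbers).
    facility_report = {"chemical_a": 5_000, "chemical_b": 120, "chemical_d": 900}
    # chemical_d has no weighting factor, so it contributes nothing to the bill,
    # illustrating how incomplete toxicity data would undermine such a tax.
    print(f"Tax owed: ${weighted_tax(facility_report, rate_per_weighted_lb=0.50):,.2f}")
```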
Consideration of stakeholders’ and industry views is important in developing a new taxation system, because voluntary compliance with any tax is influenced by whether taxpayers view a tax as being transparent, credible, and logical. While a majority of stakeholders supported three of the eight funding options, we identified some stakeholders who had not yet taken a position on these options, making it difficult to gauge their level of support for these options. In addition, industry groups representing most of the product groups that we identified as potential funding options were generally opposed to levying excise taxes on these products. Furthermore, obtaining widespread stakeholder support may be difficult because many stakeholders do not perceive a strong connection between most of these funding options and wastewater infrastructure use. The proportion of stakeholders supporting excise taxes on the five product groups ranged from over a half to about a third. Specifically, over half of stakeholders responding to our questionnaire supported excise taxes on fertilizers and pesticides and flushable products, and about half supported excise taxes on beverages and pharmaceuticals. In contrast, only about a third of stakeholders supported an excise tax on water appliances and plumbing fixtures. More importantly, we identified some stakeholders who had not yet taken a position on any of the five excise tax options—they neither supported nor opposed these options or did not know or had no opinion on these options—making it unclear what their level of support would be if excise taxes on these product groups were proposed. Specifically, half of stakeholders responding to our questionnaire had not yet taken a position on taxing water appliances and plumbing fixtures, while about a third of stakeholders did not have a position on taxing beverages or pharmaceuticals. Table 3 shows the level of stakeholders’ support for excise taxes on each of the five product groups that we identified. Obtaining stakeholder support for some of these excise taxes may be difficult because stakeholders did not always see a strong connection between these products and wastewater infrastructure use. For example, about half of stakeholders did not see a strong connection between pharmaceuticals and water appliances and plumbing fixtures and wastewater infrastructure use. On the other hand, stakeholders saw a strong connection between fertilizers and pesticides and flushable products and wastewater infrastructure use. Taxing these two product groups to fund a clean water trust fund also garnered the greatest level of stakeholder support. Table 4 shows stakeholders’ views on the extent of the connection between wastewater infrastructure use and the five product groups. In addition, industry groups were consistently opposed to a tax on their specific product groups to support a clean water trust fund. In their view, their products did not contribute significantly to the deterioration of wastewater infrastructure and therefore should not be taxed. Stakeholder and industry reasons for their support or opposition to these excise taxes, along with the views of wastewater utility operators, are summarized in table 5. About a third of stakeholders responding to our questionnaire (6 of 19) opposed or strongly opposed this option. Another 7 stakeholders had not taken a position on this funding option, making it unclear what their level of support would be. 
Furthermore, of the eight funding options, stakeholders saw the least connection between this funding option and wastewater infrastructure use, with nearly two-thirds of stakeholders (11 of 18) responding that there was little or no connection. In fact, stakeholders' inability to see the connection was one of the reasons they cited for their opposition to this funding option. Other reasons that stakeholders provided for opposing this option were the current economic crisis and that corporations already pay taxes and fees to local systems for wastewater treatment services. Among the reasons that stakeholders gave for supporting this option were that the nation, and all industrial sectors, benefit from clean water, and that this tax would be spread across a number of different polluting industries. Stakeholder opposition to a water use tax was the strongest of the eight funding options we identified. Over half of stakeholders (11 of 21) that responded to our questionnaire opposed a water use tax to fund a clean water trust fund. Some of these opponents said that such a tax would infringe on the ability of local utilities to raise rates for their own needs. Drinking water industry officials said that many communities have adopted comprehensive asset management plans and raised their water rates to pay for infrastructure needs, and it would be unfair to tax all communities and then distribute money to those communities that have not managed their systems well. In addition, stakeholders we interviewed said that redistribution of tax revenue would be a concern with this option if communities contributed more to the trust fund than they received back in funding. They also told us that a water use tax could disproportionately affect low-income households because these households pay a larger portion of their income for their water bills. On the other hand, 5 stakeholders supported this funding option, and some said that rates are still relatively low in many parts of the country and that local ratepayers should pay for the costs of the infrastructure they use. Over a third of stakeholders (7 of 19) supported or strongly supported an industrial discharge tax, while another 7 stakeholders neither supported nor opposed this option. The most common reason that stakeholders gave for supporting this option was that industries should pay for the pollution they discharge. Among the reasons that stakeholders provided for opposing this option was that industrial facilities already pay for wastewater services. We provided a draft of this report to EPA and IRS for review and comment. Neither agency provided written comments to us. EPA provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of EPA, the Commissioner of IRS, and interested congressional committees. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
To determine stakeholders' views on the issues that need to be addressed in designing and establishing a clean water trust fund, we reviewed past legislative proposals and wastewater industry position papers on establishing a clean water trust fund. In addition, we interviewed over 50 different stakeholders with knowledge of a variety of wastewater infrastructure issues, including individuals and groups from the wastewater industry; industry associations; and federal, state, and local government, and obtained their views on establishing and designing a clean water trust fund. During this process, we identified other relevant stakeholders to speak to by asking interviewees to identify other knowledgeable stakeholders in this area that we should contact, a process known as the "snowball" approach. Based on the information obtained through these interviews and our review of reports, we developed and sent a questionnaire to 28 national organizations with expertise in one or more of the following areas: financing of wastewater projects, constructing and maintaining wastewater infrastructure, local and state wastewater infrastructure needs, and environmental protection. Prior to sending out this questionnaire, we pretested it with stakeholders and made changes based on their input. This questionnaire asked for their views on how a clean water trust fund should be administered, the types of activities it should fund, and how funding should be distributed. We received responses from 22 of these stakeholders. Of the 6 stakeholders that did not respond, 4 told us they could not come to a consensus on behalf of their organizations. For a list of the groups that responded to the questionnaire, see appendix III. We also reviewed information on the Clean Water State Revolving Fund (CWSRF) program and interviewed federal and state officials responsible for implementing this program to gain an understanding of how this program might interact with a clean water trust fund. We also visited three states—Arizona, Maryland, and Wisconsin—and the District of Columbia, where we interviewed state and local officials about their wastewater infrastructure needs and how a clean water trust fund could be designed to meet these needs. We selected these states because they were geographically dispersed, had different wastewater infrastructure needs, and used various approaches to finance wastewater projects. On these visits, we toured wastewater facilities in large and small cities and spoke with local and state officials about how they were financing wastewater projects. To identify and describe potential options for funding a clean water trust fund that could generate $10 billion annually, we reviewed past legislative proposals and position papers from wastewater industry groups that discussed specific funding options for such a fund. We also reviewed reports on how existing federal trust funds that support environmental and infrastructure projects are funded and conducted Internet searches to identify funding options that some states were using to finance wastewater projects. Finally, we interviewed stakeholders with knowledge of wastewater infrastructure issues, including those from the wastewater industry and federal, state, and local government, to identify other options that could be used to generate revenue for a clean water trust fund.
To estimate the revenue that these options could potentially generate, we used the most recent government data available to estimate the value of products or activities that could be subject to a federal tax—the tax base—and applied a range of tax rates to these bases, which were based on current or past taxation policies. For the five excise taxes we identified, we used U.S. Census Bureau (Census) data from the 2006 Annual Survey of Manufactures, which provides data on the value of products manufactured domestically by different industrial codes, known as North American Industry Classification System (NAICS) codes. We identified specific NAICS codes for the five groups of products that could be subject to an excise tax. For three of our excise taxes—beverages, fertilizers and pesticides, and pharmaceuticals—these products are captured in a discrete set of NAICS codes, according to Census officials. For the two other product groups—flushable products, and water appliances and plumbing fixtures—we reviewed prior reports to determine how these products were defined, analyzed these NAICS codes along with their descriptions, and worked with Census officials to ensure our list of NAICS codes was reasonable. To this value of products produced domestically, we added the value of products imported and subtracted the value of products that were exported to determine the tax base for these product groups. We made this calculation because, according to Internal Revenue Service (IRS) officials, federal excise taxes are generally levied on imports but not on exports. We then converted the values of these tax bases to 2009 constant dollars. Certain limitations exist with regard to our use of these data to estimate potential revenue from the funding options. Specifically, our use of NAICS codes for these groups of products may include a wider range of products than would be part of actual excise taxes on these products. In addition, due to data limitations, there are certain products that are not captured in our tax bases. For example, toilet paper is not included in our tax base for flushable products because this product is grouped under a NAICS code with other sanitary paper products, such as disposable diapers, that most likely would not impact wastewater infrastructure. To determine the reliability of these data, we reviewed documentation from Census, interviewed relevant officials, and conducted some basic logic testing of the data, and we determined the data were sufficiently reliable for our purposes. For our estimate of a per-container charge on bottled and canned beverages, we used Container Recycling Institute data on the number of packaged beverages sold in the United States in 2006. To determine the reliability of these data, we spoke with officials familiar with these data and reviewed relevant documentation on the data. We determined the data were sufficiently reliable for our purposes. For our estimate of the corporate income tax, we used data from the IRS 2006 Statistics of Income and identified the value of taxable income that corporations had in this year. The amount of income subject to tax at the corporate level includes taxable income less certain deductions, such as a corporation's net operating loss or other special deductions. To determine the reliability of these data, we reviewed documentation from IRS and interviewed relevant officials. We determined the data were sufficiently reliable for our purposes.
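The tax base arithmetic described above can be summarized in a brief sketch. The shipment, import, and export values and the deflator below are placeholders rather than actual Census or price-index figures; the sketch simply mirrors the calculation used in the report: domestic shipments plus imports, minus exports, converted to 2009 constant dollars.

# Illustrative tax base construction for one product group (hypothetical values).
domestic_shipments = 42.0e9  # 2006 value of domestic shipments for the selected NAICS codes
imports = 6.5e9              # value of imported products in the same group
exports = 4.0e9              # value of exported products (excluded because exports are generally not taxed)

tax_base_2006 = domestic_shipments + imports - exports

deflator_2009_over_2006 = 1.06  # illustrative ratio for converting 2006 dollars to 2009 dollars
tax_base_2009 = tax_base_2006 * deflator_2009_over_2006
print(f"tax base in 2009 constant dollars: ${tax_base_2009 / 1e9:.1f} billion")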
For our estimate of the water use tax, we used 1995 and 2000 data from the United States Geological Survey (USGS) on estimates of water delivered by public and private suppliers to domestic, commercial, and industrial users. After consulting with USGS officials, we estimated the water delivered for residential, commercial, and industrial uses in 2000 based on information available from 1995. Specifically, we used the 2000 estimate for total public supply water deliveries and the 1995 estimate of the proportion of total water deliveries going to domestic, commercial, and industrial users, because the 2000 USGS report included information on total water deliveries but did not include information on types of users. To determine the reliability of these data, we interviewed USGS officials and reviewed relevant documentation on the data. We determined the data were sufficiently reliable for our purposes. For our estimate of a flat charge on household wastewater bills, we used Environmental Protection Agency (EPA) data on the population served by publicly owned treatment works to estimate the number of households that receive wastewater services. To determine the reliability of these data, we spoke with EPA officials and reviewed relevant documentation on the data. We determined the data were sufficiently reliable for our purposes. For our estimate of an industrial discharge tax, we examined data from the National Pollutant Discharge Elimination System (NPDES) permit system and the 2006 Toxics Release Inventory (TRI). For the NPDES permit system, we determined there were not reliable national data on the total number of NPDES permits issued. For the TRI, we determined that these data were based on self-reported information from only certain facilities that discharged above a certain level. Moreover, these reports can be based on estimates rather than actual measurements. The TRI also does not contain data on discharges of conventional pollutants. Due to these data limitations, we determined that these data were not sufficiently reliable to make an estimate of the revenue that could be generated from a tax on industrial discharge. After identifying the taxable bases for these different funding options, we applied various tax rates to these bases based in part on existing or past taxation policies. Our review of existing federal excise taxes found that most excise taxes levied as a percentage of sales range from 3 percent to 12 percent, so we applied rates of 1 percent, 3 percent, 5 percent, and 10 percent to our tax bases. For the tax on corporate income, we used 0.1 percent because a 0.12 percent tax on corporate income had been used to fund Superfund. For the water use tax, we used existing and proposed water taxes as the basis for the tax rates we applied. For all of the funding options, we also calculated the tax rate that would be needed to generate $10 billion annually. The revenue estimates presented in our report are not official revenue estimates as would be prepared by the Joint Committee on Taxation, and they are subject to various limitations. For example, we did not model consumer or market responses to these funding options or the potential extent of noncompliance, nor did we estimate the cost of implementing and enforcing these options. As a result, our revenue estimates may be higher than actual receipts that would be generated from these funding options.
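Once a tax base is in hand, the revenue arithmetic is straightforward; the sketch below applies the 1, 3, 5, and 10 percent rates described above to a hypothetical base and also computes the rate a single option would need in order to generate $10 billion annually. Consistent with the limitations noted above, it does not model consumer responses, noncompliance, or collection costs.

# Illustrative revenue estimates for one excise tax option (hypothetical base).
tax_base = 47.2e9                   # taxable value of a product group in 2009 dollars (hypothetical)
rates = [0.01, 0.03, 0.05, 0.10]    # rates applied in the report's estimates

for rate in rates:
    print(f"{rate:.0%} rate -> ${tax_base * rate / 1e9:.1f} billion per year")

# rate this single option would need to generate $10 billion annually
required_rate = 10e9 / tax_base
print(f"rate needed for $10 billion annually: {required_rate:.1%}")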
Ultimately, the amount of revenue that any of these options would generate would depend heavily on the number of products that would be taxed, the tax rate used, and compliance with the tax. To identify the challenges associated with implementing these different funding options, we interviewed federal and state officials who might be involved in collecting and enforcing these taxes. At the federal level, we spoke with IRS officials who collect and enforce excise taxes and corporate income taxes. For the water use tax, we also spoke with representatives of wastewater and drinking water utilities to learn about how they collect fees from the users of their systems and how a federal tax on water might make use of these systems. We also spoke with officials who were already involved in taxing some of these products. At the federal level, we spoke with officials in the Alcohol and Tobacco Tax and Trade Bureau regarding the federal excise tax on alcoholic beverages, and we also spoke with EPA officials about the fees the agency levies on pesticides. On our state visits, we spoke with officials who had experience with implementing some of these funding options as well. To identify stakeholders' views of these funding options, we examined position papers that discussed these funding options. We also used our questionnaire to gauge stakeholder support for these options and to learn about their views on the connection between these options and wastewater infrastructure use. In addition, we spoke with industry groups that represented some of the products that could be targeted by excise taxes for their views. In particular, we spoke with groups representing many of the manufacturers in the following industries: beverages, fertilizers and pesticides, flushable products, pharmaceuticals, and water appliances and plumbing fixtures. We conducted our work from June 2008 to May 2009 in accordance with all sections of GAO's quality assurance framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. This appendix provides information on stakeholders' responses to our questionnaire about their views on the issues that need to be addressed in designing and establishing a trust fund, as well as their views on the potential funding options that could be used for this fund. A list of stakeholders that responded to the questionnaire is presented in appendix III. The following stakeholders responded to our questionnaire regarding the issues that need to be addressed in designing and establishing a national clean water trust fund as well as potential funding options that could be used for this fund. To estimate the tax base for products that may contribute to the wastewater stream, we added the value of products manufactured domestically and the value of products imported and subtracted the value of products exported. This appendix provides information on (1) the specific industrial classification codes we used to define product groups, (2) the value of products manufactured from the U.S.
Census Bureau's (Census) 2006 Annual Survey of Manufactures, and (3) the value of imports and exports from Census' Foreign Trade Division that we used to develop the tax bases for the five product groups discussed in this report. In addition to the individual named above, Sherry L. McDonald, Assistant Director; Janice Ceperich; Nancy Crothers; Cindy Gilbert; and Scott Heacock made significant contributions to this report. Also contributing to this report were George Bogart, Richard Eiserman, Carol Henn, Sarah Reyneveld, Anne Stevens, Jack Warner, and James Wozny.
The DPA is intended to facilitate the supply and timely delivery of products, materials, and services to military and civilian agencies in times of peace as well as in times of war. Since it was enacted in 1950, the DPA has been amended to broaden its application beyond military needs. Congress has expanded DPA's coverage to include crises resulting from natural disasters or "man-caused events" not amounting to an armed attack on the United States. The definition of "national defense" in the Act has been amended to include emergency preparedness activities conducted pursuant to Title VI of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) and critical infrastructure protection and restoration. In 2003, the DPA was reauthorized through September 30, 2008. Currently, only Titles I, III, and VII are in effect:

Title I authorizes the President to require priority performance on contracts or orders and allocate materials, services, and facilities as necessary or appropriate to promote the national defense or to maximize domestic energy supplies for national defense needs. The authority allows priority-rated contracts or orders to take preference over any other unrated contract or order if a contractor cannot meet all required delivery dates. The authority is delegated among various agencies, including DOD, USDA, and Commerce, with respect to different types of resources such as water, food and agriculture, and industrial resources. Currently, Commerce administers the only priorities and allocations system that is actively used—the Defense Priorities and Allocations System (DPAS)—which is used for industrial resources. Commerce has delegated its authority to use priority ratings for industrial resources to DOD, DOE, and DHS in support of approved national defense, energy, and homeland security programs.

Title III allows agencies to provide a variety of financial incentives to domestic firms to invest in production capabilities to ensure that the domestic industrial and technological base is capable of meeting the critical national security needs of the United States. It may be used when domestic sources are required and firms cannot, or will not, act on their own to meet a national defense production need. Title III financial incentives are designed to reduce the risks for domestic suppliers associated with the capitalization and investments required to establish, expand, or preserve production capabilities. Executive Order 12,919 delegates the authority to implement Title III actions to the Secretary of Defense and the heads of other federal agencies and designates the Secretary of Defense as the DPA Fund Manager. DOD's Office of Technology Transition provides top-level management, direction, and oversight of the DPA Title III program. The Air Force serves as the Executive Agent for DOD's Title III program and maintains a program office to execute the authority under the guidance of the Office of the Secretary of Defense.

Title VII provides for a range of authorities, which include giving private firms that participate in voluntary agreements for preparedness programs certain defenses from the antitrust laws and protecting contractors who honor priority-rated contracts from lawsuits brought by other customers. Title VII allows for establishing a National Defense Executive Reserve (NDER) composed of recognized experts from the private sector and government, which could be activated in the event of an emergency.
Title VII also provides for investigative authority to collect information on the U.S. industrial base, which has been used by Commerce to conduct surveys and prepare reports at the request of the armed services, Congress, and industry. DPA also requires the President to report annually to Congress on the effect of offsets—a range of incentives or conditions provided to foreign governments to purchase U.S. military goods and services—on U.S. defense preparedness, industrial competitiveness, employment, and trade. Additionally, DPA requires Commerce to prepare a report to the Congress on the cumulative effects of offsets on defense trade, with a focus on the U.S. defense subcontractor base. Defense offsets include coproduction arrangements and subcontracting, technology transfers, in-country procurements, marketing and financial assistance, and joint ventures. Foreign governments use offsets to reduce the financial effect of their defense purchases, obtain valuable technology and manufacturing know-how, support domestic employment, create or expand their defense industries, and make the use of their national funds for foreign purchases more politically palatable. Views on defense offsets range from beliefs that they are both positive and an unavoidable part of doing business overseas to beliefs that they negatively affect the U.S. industrial base. U.S. prime contractors have indicated that if they did not offer offsets, export sales would be reduced and the positive effects of those exports on the U.S. economy and defense industrial base would be lost. Critics charge that negative aspects of offset transactions limit or negate the economic and industrial benefits claimed to be associated with defense export sales. The effect of offsets on the U.S. economy has been a concern for many years, and Congress has on numerous occasions required some federal agencies to take steps to define and address offset issues. DPA, as amended, provided for an interagency team to consult with foreign nations on limiting the adverse effects of offsets in defense procurement—without damaging the U.S. economy or defense preparedness—and to provide an annual report on their consultations and meetings. Other steps have been taken to address offsets outside of the DPA, including establishing a national commission to report on the extent and nature of offsets. DOD is the primary user of the DPA, but other agencies can use Title I priorities and allocations authority for emergency support functions. Agencies other than DOD generally do not apply the authority in advance and would do so only after an issue affecting delivery had been identified, which could add delays to the delivery of critical products during emergencies. We found that a lack of developed policies or guidance also may limit agencies' ability to use the authority in emergencies. Other authorities in the DPA have had limited use. Specifically, Title III authority to expand production capabilities for industrial resources or critical technology essential to the national defense has been used almost exclusively for defense needs, and circumstances have not required use of some Title VII authorities, such as the National Defense Executive Reserve or voluntary agreements. While other agencies have used or have considered using Title I's priorities and allocations authority, DOD has been the primary user.
DOD places priority ratings as a proactive measure on almost all of its contracts for industrial resources, which number approximately 300,000 annually, to facilitate timely execution of the authority. The Title I authority allows rated contracts or orders to take preference over any other unrated contract or order if a contractor cannot meet all required delivery dates. DOD has used the authority in recent years to prioritize the delivery of material for body armor for the Army and Marine Corps and to ensure that the military's Counter-Improvised Explosive Device systems and the Mine Resistant Ambush Protected Vehicle program receive high industrial priority. DOD employs this approach to ensure that its use of the priorities and allocations authority is self-executing, which it reports should mitigate the risk of not having critical items to meet defense requirements. The U.S. Army Corps of Engineers (USACE), which executes the authority for water resources, also uses this approach to procure water through advance contracts for emergency response purposes, according to a USACE official. In contrast, officials from other agencies indicated that they would decide to place priority ratings on contracts or modifications on a case-by-case basis after a triggering event had identified an issue affecting delivery. DHS reported that, since 2003, it has authorized or endorsed to the Department of Commerce the use of priority ratings 15 times, including endorsements of other federal agencies' use of priority ratings in support of homeland security programs. DHS makes endorsements with respect to programs, not specific contracts. Over half of these have been in support of critical infrastructure protection and restoration requirements. For example, a railroad company used a priority-rated contract to procure switch equipment and generators to help restore rail service in the Gulf Coast region following Hurricane Katrina. DHS also endorsed the use of a priority rating for a Department of State continuity of operations facility, for which Commerce authorized a priority rating for a contract to procure a generator to provide emergency power. However, DHS officials told us that, unlike DOD, its contracts, including those placed for emergency preparedness purposes, do not automatically receive priority ratings. DOE, HHS, USDA, and DOT have had little or no experience using Title I priorities and allocations authority. The National Nuclear Security Administration, a separately organized agency within DOE, has applied priority ratings to contracts primarily in support of defense and atomic energy programs. Aside from these purposes, DOE has not encountered a need requiring the use of its priorities and allocations authority for energy resources in the past several years. However, DOE also reported that it has considered using the authority in response to a number of emergency preparedness and disaster response cases, such as the restoration of refinery services affected by fire and flooding in 2007. Upon consideration, DOE determined that use of the authority was not necessary. DOT has not used the Title I authority since the DPA was reauthorized in 2003, but has used it in the past for airport security and in support of DOD during the Gulf War. Appendix II describes DOT's current and past use.
While HHS and USDA officials said that they have not encountered circumstances to date that would require the use of priority-rated orders, HHS officials anticipated that they could use the authority to place priority ratings on contracts prior to an emergency for a selected number of health resources that would be needed in an emergency, such as masks, respirators, and antibiotics. In contrast, USDA and DOT officials indicated that they would place a rating on a contract once it was determined that the private sector could not otherwise respond to a need. Most of the agencies we reviewed play a key role in an emergency—under the National Response Framework—to execute contracts to procure needed goods and services in areas such as transportation, human services, and energy. We have previously recommended that DHS provide guidance on advance procurement practices and procedures for those federal agencies with roles and responsibilities under the National Response Plan. Our prior work identified a number of emergency response practices in the public and private sectors that provide insight into how the federal government can better manage its disaster-related procurements, including developing knowledge of contractor capabilities and prices, establishing vendor relationships prior to the disaster, and establishing a scalable operations plan to adjust the level of capacity to match the response with the need. DHS and Commerce officials recognized that while the priorities and allocations authority cannot be used for procurement of items that are commonly available in sufficient quantities, emergency situations can quickly affect the availability of items. Further, priority ratings can be placed on procurement documents that provide for items as needed but would not be considered rated until a specific delivery date was identified and received by the supplier. However, agencies have generally not considered placing priority ratings on contracts for critical emergency response items before an emergency occurs and, instead, would wait until there is an issue that affects delivery of needed goods and services. Agency officials acknowledged that there is a need to have policies and guidance in order to implement the Title I priorities and allocations authority, but the degree to which agencies have accomplished this varies. Currently, the Defense Priorities and Allocations System (DPAS) is the only system for implementing the authority in active operation; it is used primarily by Commerce, DOD, DOE, and DHS for industrial resources. Commerce has established a regulation governing DPAS, and DOD and DOE have internal policies and procedures for using the authority. DHS is in the process of establishing policies and procedures to fully implement its priorities and allocations authority. Despite these efforts, gaps remain. Currently, there is no system for using the priorities and allocations authority for food and agriculture, health, and civil transportation resources. USDA and HHS officials told us they are in the process of developing regulations for using Title I for food and agriculture and health resources, respectively, modeled on DPAS. DOT officials acknowledged that they do not have an established system for using the authority for civil transportation needs, but have internal protocols in place to contact Commerce and DHS should the need arise. DOT officials said they have not yet begun the process to develop regulations for a priorities and allocations system.
Table 1 provides a summary of the status of agencies’ priorities and allocations policies and guidance. DOD, DHS, and DOE have supplemented their policies and guidance with training and outreach efforts to increase awareness of the authority and its potential applications. DOD has developed online training on the use of its priorities and allocations authority, and DHS is currently developing a Web site and a training program for its personnel and other groups such as contractors and state and local government personnel. DOE is updating its energy emergency support function operations manual with references to the authority, and has incorporated information on use of the authority in its emergency responder training. Given the status of available policies and guidance for certain resources, additional time could be required to react to emergency situations as agencies determine the proper procedures for using the authority. In addition, agencies may have to rely on less-efficient means for using the authority. For example, a DOD official stated that the Defense Logistics Agency has been working with HHS to establish a Memorandum of Understanding to use priority ratings to procure auto-injector medical devices for the military, as DOD’s priorities and allocations authority does not apply to health resources. Further, DOT lacks a system for exercising the authority for civil transportation that could help facilitate more timely delivery of critical items and services and could avoid additional steps to identify the appropriate processes each time an emergency situation arises. DOD has generally been the exclusive user of Title III’s authority to stimulate investment and expand production capabilities and is currently the only agency with a program office prepared to readily use the authority. DOD has used the authority, for example, to modernize and preserve two domestic manufacturing sources for next-generation radiation-hardened microelectronics for space and missile systems and to reestablish a domestic production source for high-purity beryllium metal that was lost when the sole domestic production facility was shut down. It is also being used to establish a domestic source for lithium ion battery production and to expand production of lightweight, transparent armor for the military. Appendix III includes examples of DOD’s use of the Title III authority. DOE officials stated that they have worked with the DOD Title III Program Office on cooperative projects. For example, they noted that they actively managed a project to supply high temperature superconductors. Additionally, DOE and National Aeronautics and Space Administration (NASA) have contributed money in support of DOD- managed projects. Other agencies have considered using the authority for non-defense needs but pursued other alternatives. For example, DHS had committed funds toward a potential project on biological agents, but pulled back planned funding because DHS was pursuing an alternative project. Similarly, HHS considered using Title III authority to expand production of vaccines, but no project resulted. USDA officials stated that, based on the availability of suppliers for items they typically purchase, they did not see a need to use Title III. DOD officials noted that statutory limitations on the use of Title III authority present challenges to efficient use of the authority. 
For example, the requirement that Congress be notified of new projects via the annual budget cycle creates a waiting period of up to one year before new projects can be initiated and can hinder use of the authority to meet rapidly evolving defense industrial base needs. To address this and other challenges, DOD has proposed amendments to DPA. These include allowing for notification to Congress of new projects in writing throughout the year rather than through the budget cycle, as well as reducing the required waiting period for awarding contracts from 60 to 30 days and increasing the statutory limitation on actions under Title III from $50,000,000 to $200,000,000 before specific authorization in law is required. According to officials, past Title III projects were primarily initiated and funded through DOD based on the needs of particular programs or through information received from industry. However, as shown in figure 1, a growing number of projects have been initiated through congressionally directed funding. Some civilian agency officials identified other limitations in initiating new projects under Title III, such as a lack of institutional willingness to use the authority and a lack of available funds. For example, DOE officials stated that it would be difficult to defend, fund, and manage a project from a departmental standpoint. However, they added that DOE's involvement in current projects suggests that Title III may be used to enhance production capabilities for industrial resources needed for energy production and distribution. The Title VII authority to collect information on the U.S. industrial base has been used by Commerce almost exclusively to address capabilities of industries supplying DOD. Because DOD has a diverse supplier base, these assessments have covered a range of industries from biotechnology to textiles and apparel. While Commerce officials recognized that the DPA's definition of national defense has been expanded to include emergency preparedness and the protection of critical infrastructure, they stated that an assessment at the request of agencies other than DOD would require additional resources based on current and projected workloads. In general, agencies have policies and guidance on using Title VII's other authorities but have never had to employ them in an actual event. The Federal Emergency Management Agency (FEMA), under DHS, has interim guidance and is preparing a new regulation on forming and activating NDER units—reserves composed of government and industry experts—in the event of an emergency, yet has not activated its NDER and is currently assessing the need for it. While DOE and DOT no longer have active NDER units, which were associated with Cold War threats, a DOE official stated that the department is interested in continuing to work with DHS to restore its unit, while DOT officials expressed similar interest in re-establishing an NDER should a justifiable reason be established under existing crisis management programs and authorities. DOT officials stated that the department is positioned to use Title VII's authority to develop voluntary agreements and plans of action for preparedness programs and expansion of production capacity and supply that make defenses from antitrust laws available to participating industry representatives.
DOT currently has voluntary agreements with commercial tanker and maritime shipping industries to rapidly mobilize resources in support of defense needs, but noted that events have not triggered the activation of the established plans of action. DHS reported that it could take 21 to 50 days to establish a voluntary agreement following a disaster, which affects the usefulness of the authority in an emergency. HHS officials told us that another statute provides similar authority that the agency could implement more quickly for certain health-related purposes. Agencies have taken steps towards fulfilling their offset reporting requirements, but the information in these reports does not provide a basis for fully evaluating the effect of offsets on the U.S. economy or for taking steps to address them. In its annual reports to Congress, Commerce provides useful summaries of offsets issues, but the type of data collected from prime contractors limits their analysis. Efforts by an interagency team chaired by DOD to consult with other countries on limiting the harmful effects of offsets have resulted in a consensus with other nations that negative effects exist, but not yet in agreement on best practices to address them. Other related efforts to report on offsets have yet to be completed and are limited in their assessments of economic effects. The DPA requires Commerce to provide an annual report to Congress on the impacts of offsets on the defense preparedness, industrial competitiveness, employment, and trade of the United States. Commerce's annual reports provide a summary of total offset agreement and transaction activity entered into between U.S. defense contractors and foreign governments in connection with U.S. defense-related exports. Commerce's efforts to quantify the employment effects of offsets are based on limited data. For example, the employment analysis relies on aggregated defense aerospace data, which neither include other defense sectors nor delineate among subsectors of the aerospace industry. Further, the most recent annual report on offsets noted that its analysis does not include the potential effects of nearly $1 billion of technology transfer, training, and overseas investment offset transactions, representing nearly 24 percent of average annual offset transactions. The 2003 DPA reauthorization also requires Commerce to report on the impact of offsets on domestic prime contractors and, to the extent practicable, the first three lower-tier subcontractors. These reports are to address domestic employment, including any job losses, on an annual basis. The August 2004 report, produced in response to the 2003 DPA reauthorization, provided useful data on the scope of offset agreements and transactions during the preceding 5-year period, but the data collected in surveys of prime contractors and subcontractors limited the analysis of employment effects. To assess the effect of offsets on domestic employment, Commerce surveyed prime contractors and three tiers of subcontractors. While Commerce acknowledged that it could have requested documentation for all of the nearly 700 weapon systems and components contracts for the 5-year period (1998 through 2002), documentation was requested for only two weapon systems from each of the 13 U.S. prime contractors. Commerce cited sensitivity to not burdening contractors and a desire to be responsive to reporting time frames as the reasons.
The analysis was further limited by a less than 40 percent response rate to the survey of the three tiers of subcontractors. Moreover, this survey used subjective measurements, asking for subcontractors' perceptions of the influence of offsets on employment and asking respondents to rank offsets among a variety of factors as they related to increases or decreases in U.S. employment. We have previously stated that, in evaluating offsets and identifying their effects on the U.S. economy as a whole, it is difficult to isolate the effects of offsets from the numerous other factors affecting specific industry sectors. Despite such difficulties, Commerce officials stated that they could request more specific product data from prime contractors that would allow for more detailed analysis of the effect of offsets on the U.S. economy. Under DPA, the Secretary of Commerce is given authority to promulgate regulations to collect offset data from U.S. defense firms entering into contracts for the sale of defense articles or services to foreign countries or firms that are subject to offset agreements exceeding $5 million in value. The Secretary of Commerce delegated this authority to the Bureau of Industry and Security (BIS), which published its first offset regulations in 1994. The regulations, which have never been updated, require companies to annually report information such as the name or description of the weapon system, defense item, or service subject to the offset agreement; the name of the country of the purchasing entity; the approximate value of the export sale subject to offset; and the total dollar value of the offset agreement. The regulations also require prime contractors to report on the broad industry category, based on outdated four-digit Standard Industrial Classification (SIC) codes, in which offset transactions are fulfilled. Currently, 84 percent of the value of export contracts involving offsets submitted by prime contractors is for the aerospace industry, and there is no delineation among subsectors of the aerospace industry. According to Commerce officials, their analysis of the economic effect of offsets could be improved by requesting more detailed sector and product information based on updated six-digit North American Industry Classification System (NAICS) codes from prime contractors. As the NAICS has replaced the SIC, such improvements would allow Commerce to provide greater insight into the effects of offsets on specific subsectors of the economy and would more closely match employment data already used in their analysis. BIS is currently conducting a review of the data and methodology used to assemble its annual reports on offsets. BIS officials have stated that they will review additional data from sources such as the Bureau of Labor Statistics and Commerce's Bureau of Economic Analysis. Commerce officials anticipate that the outcome of this review will be reflected in their next annual report. However, changes to the regulation would not affect data collection for the next annual report. In the 2003 DPA reauthorization, Congress created an interagency team to consult with foreign nations on limiting the adverse effects of offsets in defense procurement—without damaging the U.S. economy, defense industrial base, defense production, or defense preparedness—and to prepare an annual report detailing the results of their foreign consultations.
In February 2007, the interagency team—chaired by DOD, as designated by the President—issued its third and final report, which identified concerns shared by the United States and foreign nations about the adverse effects of offsets. This report, developed in consultation with representatives from U.S. government agencies, U.S. industry, and foreign nations, provided findings, recommendations, and strategies for limiting these adverse effects. The interagency working group went on to engage in bilateral dialogue with Australia in May 2007 and multilateral dialogue with six other countries in November 2007 and reached consensus to pursue the possibility of developing a statement of best practices for limiting the adverse effects of offsets. However, participants identified challenges, including a lack of agreement on terminology and differences in views between national defense sectors and government agencies. While the interagency working group established a goal of producing a preliminary statement by the latter half of 2008, participating nations noted in the report that it will be difficult and time-consuming to do so. Additionally, the Defense Offsets Disclosure Act of 1999 established a national commission, requiring the President to submit a report to Congress addressing all aspects of the use of offsets in international defense trade within a year of its establishment. The commission, whose members included representatives from government, business, labor, and academia, produced an interim report in 2001 that described the extent and nature of defense-related offsets in both defense and commercial trade. It also described a variety of effects of offsets on the U.S. defense supplier base. For example, the commission reported that while offsets may facilitate defense export sales—which can help maintain the economic viability of certain U.S. firms—offsets can also supplant a significant amount of work and jobs that would go to U.S. firms if export sales occurred without offsets. The commission also reported that U.S. technology transfers through offsets often improved foreign firms' competitiveness, but rarely resulted in technology transfer back to the United States. The commission was to provide a final report with areas for additional study, including the effects of indirect and commercial offsets and the effects of offsets on industries other than aerospace, as well as concrete policy recommendations. Due to the 2001 change in presidential administrations, which resulted in vacancies in the five executive branch positions on the commission, the final report and recommendations were never produced, and no further activity by the commission occurred. Since the DPA was last reauthorized in 2003, there has been little use of its authorities for areas other than defense. Lessons learned from catastrophic events have emphasized the importance of ensuring that needed capabilities and contracts for key items are in place in advance of a disaster. Without an established system for considering and acting on requests to use priorities and allocations authority, additional time could be required to react to emergency situations as agencies determine the proper procedures for using the authority. Placing priority ratings on contracts only after a delivery problem has arisen could also limit agencies' ability to make timely use of the authority in an emergency.
Agencies’ efforts could be strengthened by placing priority ratings on contracts for critical emergency response items before an event occurs. DPA also requires Commerce to report on the potential impact of offsets on the U.S. economy, which has been a concern for many years. The lack of usable data in the Department of Commerce’s reports limits the government’s ability to gain knowledge on the economic effects of offsets and to take steps to address them. To ensure that the full range of Defense Production Act authorities can be used in an effective and timely manner, we recommend the Secretaries of Agriculture, Health and Human Services, and Transportation, in consultation with the Department of Commerce, develop and implement a system for using the priorities and allocations authority for food and agriculture resources, health resources, and civil transportation respectively. To maximize effective use of the priorities and allocations authority, we recommend the Secretaries of Agriculture, Energy, Health and Human Services, Homeland Security, and Transportation consider, in advance of an emergency, approving programs and placing priority ratings on contracts for items that are likely to be needed in an emergency. To position the Department of Commerce to respond to offset reporting requirements, we recommend the Secretary of Commerce update regulations to, for example, request more specific industry information from prime contractors that would improve the assessment of the economic effects of offsets. We provided a draft of this report to USDA, Commerce, DOD, DOE, HHS, DHS, and DOT for comment. In official comments, USDA generally concurred with our findings and recommendations. Other agencies did not officially comment on our recommendations, but provided technical comments that were incorporated as appropriate. In its technical comments HHS noted that it is beginning to develop a regulation to establish a framework for considering requests for priority ratings. In line with GAO’s recommendations, the regulation would allow for priority ratings for health resources to be approved in advance of an emergency situation. DOT noted in its technical comments that, based on its review of the draft, it will develop regulations for a priorities and allocations system. We are sending copies of this report to the Secretaries of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, and Transportation. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or calvaresibarra@gao.gov if you have any questions regarding this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were John Neumann, Assistant Director; Marie Ahearn; Julie Hadley; Lauren Heft; Kevin Heinz; Marcus Lloyd Oliver; and Karen Sloan. To determine the extent to which agencies use the authorities in the Defense Production Act of 1950 (DPA), we reviewed the current legislation and recent amendments. In defining our scope we referred to Section 889 of the National Defense Authorization Act for Fiscal Year 2008 and focused on use of authorities since the 2003 reauthorization to respond to defense, energy, domestic security, disaster response, and critical infrastructure protection and restoration requirements. 
We reviewed and analyzed applicable regulations, policies, and guidance from seven agencies that have been delegated authority to use the DPA by Executive Order or federal regulation or have exercised the authorities. These agencies included the Departments of Agriculture (USDA), Commerce, Defense (DOD), Energy (DOE), Health and Human Services (HHS), Homeland Security (DHS), and Transportation (DOT). At each of these agencies, we met with officials to discuss agency-specific DPA policies and guidance, recent use and implementation of the authorities, and challenges related to the authorities. Where available, we collected and reviewed documentation on circumstances in which agencies have used the DPA. In examining use of the Title I priorities and allocations authority, we met with several agencies to discuss experiences, policies, and guidance. We met with officials from the Department of Commerce, Bureau of Industry and Security, to examine and discuss the delegation of the authority as well as the regulations that guide several agencies' use. We also met with officials from the U.S. Army, Navy, and Air Force and DOD's Office of the Deputy Under Secretary of Defense (Industrial Policy) to review DOD's policies for use of the authority and specific policies at each service. These discussions addressed specific application of the authority as well as challenges in implementation. Further meetings were held with other agencies regarding experiences, policies, and guidance related to use of the Title I authority for specific types of items, including USDA's Farm Service Agency, to discuss food and agriculture resources; the U.S. Army Corps of Engineers, to discuss water resources; DOE's Office of Electricity Delivery and Energy Reliability, to discuss energy resources; HHS's Biomedical Advanced Research and Development Authority and Office of the Assistant Secretary for Preparedness and Response, to discuss health resources; and DOT's Office of Intelligence, Security and Emergency Response, to discuss civil transportation. In reviewing the use of the Title I authority by DHS, we reviewed documents from the Federal Emergency Management Agency (FEMA), including reports on its use of the authority, a consolidated report to Congress on use by it and other agencies, and endorsement and approval documents related to specific uses of the authority. Additional discussions were held with officials at each of the seven agencies on experiences and awareness of DPA authorities in Titles III and VII. We specifically met with the Air Force DPA Title III Program Office to obtain documents related to the management of the program and discuss efforts to coordinate with other agencies. We reviewed documents related to authorities in Title VII, including the National Defense Executive Reserve and voluntary agreements, and discussed both with each agency. Specific voluntary agreements were discussed with the Maritime Administration. We also reviewed industrial capability assessments from the Department of Commerce, which uses the Title VII authority to collect the information for these assessments. To identify the efforts of U.S. government agencies in assessing the economic effect of foreign offsets, we reviewed the DPA and other statutes to determine specific reporting requirements. To determine the extent to which the Department of Commerce has assessed the economic effects of offsets, we analyzed its annual offset reports issued since 2005 as well as a 2004 special report on the impact of offsets on the U.S. subcontractor base.
We also spoke with officials from the Commerce Department’s Bureau of Industry and Security to identify the methodology used in assessing the economic effect of offsets as well as additional efforts that could allow for a more detailed analysis in the future. To determine DOD’s response to the offsets reporting requirements contained in the DPA, which provides for an interagency team to consult with foreign nations on limiting the harmful effects of offsets in defense procurement, we reviewed and analyzed the interagency team’s annual reports since 2004. We also contacted DOD’s Office of International Cooperation to discuss experiences and challenges associated with the interagency team and to determine their future plans with respect to foreign consultations. Finally, we reviewed prior GAO reports to identify challenges associated with assessing the economic effect of offsets. We conducted this performance audit from January 2008 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Title I of the Defense Production Act of 1950, as amended, authorizes the President to require priority performance on contracts or orders and to allocate materials, services, and facilities to promote the national defense. Executive Order No. 12,919, as amended, delegates the President’s priorities and allocations authority for various resources to the following agency heads:
Secretary of Agriculture: food and agriculture resources;
Secretary of Energy: all forms of energy;
Secretary of Health and Human Services: health resources;
Secretary of Transportation: all forms of civil transportation;
Secretary of Defense: water resources; and
Secretary of Commerce: all other materials, services, and facilities, including construction materials, known as industrial resources.
The Department of Commerce (Commerce) administers the only priorities and allocations system that is actively used—the Defense Priorities and Allocations System (DPAS)—which is used for industrial resources. Commerce has delegated authority to use priority ratings on contracts for industrial resources to the Departments of Defense (DOD), Energy (DOE), and Homeland Security (DHS) for use in support of approved national defense, energy, and homeland security programs. Under the DPAS, agencies can assign a “DO” or a “DX” priority rating to orders. DO ratings are used for items critical to national defense, while a DX rating denotes the highest national defense urgency. Priority rated orders have preference over all unrated orders as needed to meet required delivery dates, and among rated orders, DX-rated orders have preference over DO-rated orders. While Commerce has delegated the ability to use DPAS authority to four agencies, it may also provide Special Priorities Assistance to authorize other government agencies, foreign governments, owners and operators of critical infrastructure, or companies to place priority ratings on contracts on a case-by-case basis, or to resolve any problems that may arise in the use of priorities and allocations authority. 
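To make the DO/DX precedence rules just described concrete, the short sketch below models how a supplier might sequence a mix of rated and unrated orders. It is a simplified illustration only; the class, field names, and sample orders are hypothetical and are not drawn from the DPAS regulation.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Relative precedence under the DPAS rules described above:
# DX-rated orders come before DO-rated orders, which come before unrated orders.
PRECEDENCE = {"DX": 0, "DO": 1, None: 2}

@dataclass
class Order:
    item: str
    rating: Optional[str]        # "DX", "DO", or None for an unrated order
    required_delivery: date

def sequence(orders: List[Order]) -> List[Order]:
    """Order the backlog as a supplier might work it: higher-precedence
    ratings first, earlier required delivery dates first within a rating."""
    return sorted(orders, key=lambda o: (PRECEDENCE[o.rating], o.required_delivery))

backlog = [
    Order("commercial fasteners", None, date(2008, 3, 1)),
    Order("body armor ballistic panels", "DO", date(2008, 4, 15)),
    Order("armor plate steel for MRAP vehicles", "DX", date(2008, 5, 1)),
]
for o in sequence(backlog):
    print(o.rating or "unrated", o.item, o.required_delivery)
```

In practice, rated orders do not simply displace all other work; they take preference only as needed to meet required delivery dates, which is why the delivery date is part of the sort key in this simplified model.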
Commerce reported that since late 2003, it has taken approximately 180 actions to provide Special Priorities Assistance, primarily to support foreign government requirements related to DOD-approved programs. With some exceptions, all DOD contracts for industrial resources receive a priority rating under DPAS—approximately 300,000 contracts annually. For example, DOD has used its authority in the past several years to prioritize the delivery of ballistic material used in body armor for the Army and Marine Corps. In addition, DOD has worked to manage DOD-wide demand for armor plate steel and helped steel firms manage schedules in order to prevent armor plate shortages resulting from a surge in production on the Mine Resistant Ambush Protected (MRAP) Vehicle Program. The U.S. Army Corps of Engineers, which executes DOD’s priorities and allocations authority for water resources, also uses rated orders to procure water in advance of or during emergency events. Since 2003, DHS reported that it has authorized or endorsed to the Department of Commerce the use of priority ratings for 15 programs, including approval of other federal agencies’ use of priority ratings:
restoration of rail service in the Gulf Coast region after Hurricane Katrina;
construction of an FBI facility in Northern Virginia;
construction of the Department of Justice’s Terrorist Screening Center;
procurement of perimeter security equipment for a major airport and upgrades to cargo seaport security;
procurement of encrypted radio equipment for use in U.S. Park Police;
acquisition of generator transfer switches and transformers for state;
upgrade of State Department domestic facility security;
procurement of a generator for the State Department’s Continuity of Operations facility;
construction of an emergency Federal Support Center;
procurement of equipment for a FEMA emergency facility;
DHS procurement of encrypted emergency communications equipment;
procurement of FBI night vision equipment;
FEMA’s Communications Support Infrastructure Program; and
the Geostationary Operational Environmental Satellite, R-Series Program of the National Oceanic and Atmospheric Administration.
DOE reported that it has used priority ratings primarily on contracts and orders supporting atomic energy or defense. Outside of this use, DOE has not encountered emergency conditions requiring the use of priorities and allocations authority to reduce interruptions in energy supplies since 2003. USDA officials reported that they have not made use of priorities and allocations authority, and that use of the authority would be needed only in very catastrophic circumstances. USDA is in the process of developing regulations to implement an Agriculture Priorities and Allocations System to support use of priority ratings to maintain agricultural operations during a national emergency. A memorandum of understanding relating to foods that have industrial uses and the domestic distribution of farm equipment sets the priorities and allocations jurisdiction and responsibilities of USDA and the Department of Commerce for defense mobilization in the event of a national security emergency. HHS officials reported that they have not encountered circumstances requiring use of the authority, but have identified some health resources for which HHS could potentially use the authority in the future. HHS is currently developing a priorities and allocations system regulation for health resources, modeled on DPAS. 
DOT officials reported that they have not routinely used priorities and allocations authority for civil transportation needs, explaining that the market has traditionally responded to civil transportation requirements without the need for priority-rated orders. For example, DOT reported that it consulted with Commerce and DHS on the possible use of DPAS authority during planning following the I-35W bridge collapse in Minnesota, but found there was no resource shortage that would have required using the authority. In 2002, DOT obtained a priority rating to support the procurement of approximately 1,800 Explosives Detection Systems (EDS) machines for use in U.S. airports. According to DOT officials, this rating was necessary for the Transportation Security Administration (TSA), then part of DOT, to meet a statutory obligation to install a specified number of EDS machines in U.S. airports by December 31, 2002. In addition, DOT has worked with DOD to secure priority ratings under DOD’s authority. During the first Gulf War, the FAA, working through DOD, sought use of DPAS to support activation of the Civil Reserve Air Fleet. This request was made after the Air Mobility Command determined that air carriers could provide more resources if they could get priority for parts. Additionally, a DOD priority rating was used to expedite one carrier’s airframe modifications to enable it to transport pallets used by DOD. Title III of the Defense Production Act of 1950 (DPA), as amended, allows agencies to provide financial incentives to domestic firms to invest in production capabilities to ensure that the domestic industrial and technological base is capable of meeting the critical national security needs of the United States. It is used when domestic sources are required and firms cannot, or will not, act on their own to meet a national defense production need. Title III financial incentives are designed to reduce the risks for domestic suppliers associated with the capitalization and investments required to establish, expand, or preserve production capabilities. The candidate projects are evaluated in terms of four criteria:
1. The industrial resource or critical technology item is essential to the national defense;
2. Without the Title III authority, United States industry cannot reasonably be expected to provide the capability for the needed industrial resource or critical technology item in a timely manner;
3. Title III incentives are the most cost-effective, expedient, and practical alternative methods for meeting the need involved; and
4. The combination of the U.S. national defense demand and foreseeable nondefense demand for the industrial resource or critical technology item is not less than the output of domestic industrial capability, as determined by the President, including the output to be established with the Title III incentives.
As shown in Table 2, Title III has been used to promote a variety of technologies with dedicated funds ranging from $88,000 to approximately $164 million.
Congress enacted the Defense Production Act of 1950 (DPA) to ensure the availability of industrial resources to meet defense needs. Amendments to the Act allow its use for energy supply, emergency preparedness, and critical infrastructure protection and require agencies to report on foreign offsets, which are incentives to foreign governments to purchase U.S. goods and services. Only Titles I, III, and VII remain in effect. 
In the National Defense Authorization Act for Fiscal Year 2008, Congress directed GAO to review recent agency efforts to implement the DPA. This report (1) examines the extent to which agencies use DPA authorities and (2) assesses agencies' response to reporting requirements on the economic impact of foreign offsets. GAO's work is based on a review of policies and guidance for the use of DPA authorities, instances in which agencies have exercised the authorities, and the analysis used in required reports on foreign offsets. The Department of Defense (DOD) routinely exercises the DPA Title I priorities and allocations authority, which allows rated contracts and orders to be delivered before others, to ensure the availability of defense resources. However, civilian agencies have generally not used the Title I authority and most differ from DOD in deciding when to apply it. For example, DOD places ratings on most of its contracts before critical defense items are needed. In contrast, agencies such as the Department of Homeland Security (DHS) generally request ratings after delivery needs are identified, potentially delaying critical items during emergencies. Also, agencies responsible for responding to domestic emergencies and procuring resources in the areas of food and agriculture, health resources, and civil transportation lack policies and guidance that could facilitate execution of the Title I authority and delivery of items needed in an emergency. While the Departments of Agriculture (USDA) and Health and Human Services are developing regulations to establish a framework for considering priority ratings, the Department of Transportation (DOT) has not yet begun to do so. Other DPA authorities have been used exclusively by DOD or have not been triggered by recent events. For example, DOD has generally been the sole user of the Title III authority for expansion of production capabilities, while events that would activate some Title VII authorities--such as the National Defense Executive Reserve and voluntary agreements--have not occurred. Agencies have taken steps towards fulfilling their offset reporting requirements to Congress, but data collected by the Department of Commerce limits the analysis of the economic effect of offsets. Commerce officials noted that a more detailed analysis could be provided if they requested more specific product data from prime contractors. Also, a DOD-chaired interagency team--required to report on its consultations with foreign nations on limiting the adverse effects of offsets--has reached consensus with other nations that adverse effects exist, but not yet on best practices to address them. Actions by the National Commission on Offsets have similarly been limited in the assessment of economic effects.
Rail transit is an important component of the nation’s transportation network, particularly in large metropolitan areas. Rail transit systems provide around 4.3 billion passenger trips annually. The five largest heavy rail systems carried 3.2 billion passengers in 2008, 90 percent of all heavy rail trips. The NYCT system surpassed all the other heavy rail systems by carrying almost 2.4 billion passengers—2.1 billion more than the next largest heavy rail system. Conversely, the five largest light rail systems are much smaller, collectively carrying 244 million passengers in 2008. The largest light rail system, operated by MBTA, carried 74 million passengers. Public transit is seen as an affordable mode of transportation and a means to alleviate roadway congestion and emissions. Increases in gasoline prices over the past decade also have resulted in higher ridership, which peaked in fall 2008. Although ridership declined in 2009 by about 4 percent, following the 2008 economic recession and a decrease in gasoline prices, transit ridership is expected to grow in years to come. Heavy and light rail transit systems have developed throughout the nation over the past 100 years. The oldest systems in cities such as Boston, New York, and Chicago, among others, were generally built by private companies which eventually went out of business, requiring the systems’ respective local governments to provide financial help to keep the systems operating. During the 1960s, Congress established a federal capital assistance program for mass transportation. With federal capital assistance, many other cities constructed rail transit systems, including heavy rail systems in Atlanta, San Francisco, and Washington, D.C. Heavy rail systems tend to be larger and carry many more passengers than light rail systems. While there are currently more than twice as many light rail systems as there are heavy rail systems, the heavy rail systems carry about seven times as many passengers and cover more than 50 percent more miles of track than light rail systems (see fig.1). The types of safety risks associated with each rail mode differ somewhat. For example, the higher volume of passengers, the higher speed of the trains, and the third rail on the track pose safety risks for heavy rail systems; the numerous interfaces between rail cars and vehicular traffic and pedestrians pose safety risks for light rail systems. Since the 1980s, newly constructed systems have been predominantly light rail systems. Rail transit systems are managed by public transit agencies accountable to their local government. However, rail transit agencies rely on a combination of local, state, and federal funds, in addition to system- generated revenues such as fares, to operate and maintain their systems. Some states and local governments provide a dedicated revenue source for transit, such as a percentage of the state or local sales tax, or issue bonds for public transportation. In 2008, about 57 percent of all funds for both operating expenses and capital investments were from local and state government. Other sources, such as farebox revenues, provided 26 percent. The federal government’s share was about 17 percent. Even though federal funding has predominantly been for capital investments, by 2008 local government replaced the federal government as the largest source of capital investment funds. 
However, in the past few years there have been decreases in the amounts of state and local funding available to transit agencies, especially for those agencies that depend on tax revenues, which have experienced decreases as a result of the general economic slowdown faced by the nation. As a result, many transit agencies have faced budget cutbacks. FTA uses many funding programs to support transit agencies. In particular, two FTA programs—the Urbanized Area Formula Program and the Fixed Guideway Modernization Program—provide funding that can be used by existing transit agencies in urbanized areas to modernize or improve their systems. Specifically, these funds can be used for purchasing and rehabilitating rail cars and preventive maintenance, among other things. In 2009, additional funds were made available through the American Recovery and Reinvestment Act (Recovery Act). Recovery Act funds are used primarily for capital projects, although some funds were made available for and have been used for operating expenses. In comparison with other modes of transportation, rail transit is relatively safe. For example, occupants of motor vehicles are more than 70 times more likely to die in accidents while traveling as are passengers of rail transit systems. However, several large rail transit agencies in recent years have had major accidents that resulted in fatalities, injuries, and significant property damage. NTSB has investigated a number of these accidents and has issued reports identifying the probable causes of and factors that contributed to them. Since 2004, NTSB has reported on eight rail transit accidents that, collectively, resulted in 13 fatalities, 297 injuries, and about $29 million in property damages. In five of these accident investigations, NTSB found the probable cause to involve employee errors, such as the failure of the train operator to comply with operating rules and of track inspectors to maintain an effective lookout for oncoming trains while working on the tracks. Of the remaining three accidents, NTSB found that problems with equipment were a probable cause of two accidents and that weaknesses in management of safety by the transit agency were a probable cause in all three accidents. In six of these investigations, NTSB reported that contributing factors involved deficiencies in safety management or oversight, such as weaknesses in transit agencies’ safety rules and procedures, lack of a safety culture within the transit agency, and lack of adequate oversight by the transit agency’s state safety oversight agency and FTA. See appendix I for further information on these accident investigations. Transit agencies are responsible for the operation, maintenance, and safety and security of their rail systems but are subject to a tiered state and federal safety oversight program. The Intermodal Surface Transportation Efficiency Act of 1991 mandated FTA to establish a State Safety Oversight Program for rail fixed guideway public transportation systems that are not subject to FRA regulation. Through this program, FTA monitors 27 state safety oversight agencies that oversee the safety of rail transit operations in 25 states, the District of Columbia, and Puerto Rico. While FTA has discretionary authority to investigate safety hazards at transit systems it funds, it does not have authority to directly oversee safety programs of rail transit agencies. 
FTA, however, does have the authority and responsibility for overseeing transit agencies’ workplace drug and alcohol testing programs. FTA also collects safety data, including data on types of accidents and causes, from the state safety oversight agencies and the transit agencies they oversee. Transit agencies provide safety data for FTA’s National Transit Database while the state safety oversight agencies provide safety data through annual reports to FTA. Under FTA regulations, state safety oversight agencies must develop a program standard that outlines transit agencies’ safety responsibilities. In particular, transit agencies are required to develop and implement safety programs that include, among other things, standards and processes for identifying safety concerns and hazards, and ensuring that they are addressed; a process to develop and ensure compliance with rules and procedures that have a safety impact; and a safety training and certification program for employees. Moreover, FTA requires state safety oversight agencies to perform safety audits of their transit agencies at least once every 3 years, investigate transit accidents, and ensure that deficiencies are corrected. FTA, however, does not fund state safety oversight agencies to carry out this work. Our earlier work found that many state safety oversight agencies lacked adequate staffing, employed varying practices, and applied FTA’s regulations differently. As noted earlier, FTA’s role in overseeing safety on rail transit systems is relatively limited, which is reflected in the number of staff that it employs to fill that role. FTA’s Office of Safety and Security has 15 to 17 staff members managing safety, security, and emergency management programs. They are supported by contractor staff. In December 2009, DOT proposed to Congress major changes in FTA’s role that would shift the balance of federal and state responsibilities for oversight of rail transit safety. DOT proposed the following: FTA, through legislation, would receive authority to establish and enforce minimum safety standards for rail transit systems not already regulated by FRA. A state may continue to have a state safety oversight program to oversee public transportation safety—by “opting in”—given that its program complies with the federal laws, regulations, and policies that FTA would implement if it receives expanded authority proposed in the legislation. DOT would provide federal assistance to states with FTA-approved state safety programs to enforce the federal minimum safety standards. Participating states could set more stringent safety standards if they chose to do so. In states that decided to “opt out” of participation or where FTA has found the program proposals inadequate, FTA would oversee compliance with and enforce federal safety regulations. Subsequently, during the 111th Congress, several bills including these changes were proposed. Instilling a safety culture agencywide is a challenge the largest transit agencies face that can impact their ability to ensure safe operations. The concept of safety culture can be defined in different ways and the level of safety culture in an organization can be difficult to measure. 
As we have previously reported, safety culture can include: organizational awareness of and commitment to the importance of safety, individual dedication and accountability for those engaged in any activity that has a bearing on safety in the workplace, and an environment in which employees can report safety events without fear of punishment. According to NTSB officials, in organizations with effective safety cultures, senior management demonstrates a commitment to safety and a concern for hazards that are shared by employees at all levels within the organization. Furthermore, such organizations have effective safety management systems that include appropriate safety rules and procedures, employee adherence to these rules and procedures, well- defined processes for identifying and addressing safety-related problems, and adequate safety training available for employees and management. FTA officials told us that it is difficult to define safety culture but noted that attributes of a strong safety culture include open communication about safety throughout the agency, nonpunitive safety reporting by employees, and the identification of safety trends based on agency- collected data. In addition, APTA officials told us that another attribute of safety culture is the accountability of individuals for how their actions and the actions of others affect safety. According to FTA, a strong safety culture can energize and motivate transit employees to improve safety performance. As we subsequently discuss, FTA currently has efforts underway that may more clearly communicate what a strong safety culture entails. All 12 of the rail transit experts we interviewed agreed that safety culture was important in helping transit agencies lower their accident rates. The experts we consulted offered several views about safety culture at large transit agencies. Seven experts noted that the extent of safety culture varies at large transit agencies across the country. Four experts stated that the extent of safety culture was generally low throughout the rail transit industry and needed to be improved. Some experts also noted that despite system differences, a major reason why certain systems have more or fewer incidents is the extent of safety culture present at the transit agency. One expert in particular said that all the other safety challenges transit agencies faced flow from safety culture issues. Some experts we interviewed identified the importance of training to help instill a safety culture at all levels of a transit agency. We have reported that training should support an agency’s goal of changing workplace culture to increase staff awareness of, commitment to, and involvement in safety. Thus, the challenge faced by the largest transit agencies in providing sufficient training for staff—discussed below—can increase the challenge of instilling a safety culture at those same agencies. FTA officials have identified the need to improve safety culture as a continuing problem for the transit industry as a whole, which requires changing behaviors and processes that have become engrained over decades of service. FTA has reported that, to get to the root of safety culture, transit agency management and employees need to understand the current state of their safety programs, how employees perceive management’s commitment to safety, how employees actively follow established safety rules and procedures and how they are held accountable for doing so, and how management monitors employees’ safety performance. 
FTA officials noted that limitations in transit agencies’ collection and analysis of safety data impede their ability to improve their safety culture, because these limitations affect their ability to identify and address safety hazards. Safety culture can have a significant impact on safety performance. In two of its reports on accidents since 2004, NTSB has noted that an inadequate safety culture contributed to the accidents. Probable causes in the accidents that the NTSB investigated included employee errors, such as failure to comply with operating rules, and inadequate safety management and oversight by transit agencies. Problems such as these may reflect a poor safety culture, as employees may not be motivated to follow operating rules and management may not be properly managing safety programs to ensure that hazards are identified and addressed. In its report on the 2008 accident on MBTA’s system that resulted in one fatality and eight injuries, NTSB found that the probable cause was the train operator’s failure to comply with a controlling signal as a result of an episode of micro-sleep, and noted an MBTA report of an internal audit stating that the success of any new safety plan was largely dependent on the safety culture that MBTA fostered within each agency department and work group. Additionally, NTSB cited this report as stating that MBTA management needed to define, understand, and integrate effective practices into day-to-day work activities to ensure that the safety of employees and passengers remained a top priority. In its report on CTA’s 2006 derailment that resulted in 152 injuries, NTSB found that ineffective management and oversight of its track inspection and maintenance program was a probable cause. Specific problems included ineffective supervisory oversight of track inspections, lack of complete inspection records and follow-up to ensure defects were corrected, and insufficient training and qualification requirements for track inspectors. NTSB found that these identified problems were all part of a deficient safety culture that allowed the agency’s track infrastructure to deteriorate to an unsafe condition. In its report on WMATA’s June 2009 collision that resulted in nine fatalities and 52 injuries, NTSB identified the lack of an effective safety culture as a contributing factor to the accident. According to NTSB, shortcomings in WMATA’s internal communications, recognition of hazards, assessment of risk from those hazards, and implementation of corrective actions were all evidence of an ineffective safety culture and were symptomatic of a general lack of importance assigned to safety management functions across the WMATA organization. NTSB made recommendations to WMATA to improve its safety culture. In response to NTSB’s recommendations to improve its safety culture, WMATA is taking a number of actions, including:
the development of procedures to ensure clear communication and distribution of safety-related information and the monthly review of data and trend analyses;
the establishment of a safety hotline and email for employees to report safety concerns;
an updated whistleblower policy to encourage employee participation and upper management review of identified safety concerns;
an amended mission statement to reflect the agency’s commitment to safety; and
a newly formed committee of WMATA’s Board of Directors to make recommendations monthly on assuring safety at WMATA. 
Some other transit agencies have also made efforts to increase the extent of safety culture present in their agencies. For example, officials from three transit agencies we spoke with stated that their transit agencies created and supported nonpunitive safety reporting programs such as whistleblower policies and anonymous tip hotlines to encourage employees to keep management aware of safety problems. One agency told us they have a close call reporting program. These programs can encourage employees to voluntarily and confidentially report close call incidents without fear of reprisal. We have previously reported that it is unlikely that employees would report safety events in organizations with punishment-oriented cultures in which employees are distrustful of management and each other. Blaming individuals for accidents not only fails to prevent accidents but also limits workers’ willingness to provide information about systemic problems. To promote reporting in such environments, systems can be designed with nonpunitive features to help alleviate employee concerns and encourage participation. In addition, some transit agencies we visited are reaching outside of the organization for support to further instill safety culture at their agencies. For example, officials at three transit agencies told us they had hired or planned to hire consultants to audit the system and make recommendations for improvements to increase the safety culture at all levels of the organization. According to APTA officials, the transit industry recognizes that labor organizations must be engaged in a visible partnership at all stages of safety culture development. In addition to instilling safety culture at transit agencies, maintaining an adequate level of skilled staff and ensuring that they receive needed safety training are also challenges the largest transit agencies face in ensuring safety. Staffing challenges involve recruiting and hiring qualified employees to fill positions with safety responsibilities—such as safety department staff, maintenance staff, track workers, and operation managers—and adequately planning for the loss of such staff through vacancies and retirements. For example, several transit agencies told us it has been difficult to hire maintenance employees with the necessary expertise and knowledge of both aging and new technology systems. Officials from two transit agencies noted the difficulty in hiring maintenance employees who have experience working with older electronic technology—some of which dates from the 1960s—and who are also knowledgeable of current computer technology. In addition, many transit agencies face an aging workforce and the potential for large numbers of upcoming retirements. For example, one transit agency we visited identified more than 50 percent of its staff as eligible for retirement within the next 5 years. FTA officials told us that staffing is a challenge facing transit agencies nationwide due to the large number of employees nearing retirement eligibility and the difficulty in retaining and replacing qualified employees. In addition, a recent APTA report identified that the transit industry has an experienced but aging workforce, with a significant number of potential retirements expected in the next 10 years. The staffing challenge has been further exacerbated for transit agencies by recent budget cutbacks as a result of flat or decreased funding from state and local governments. 
Officials at six of the seven transit agencies we visited stated that their staffing levels have been or will be cut, including some safety staff at three of these agencies. For example, at one transit agency we visited officials stated that, due to their current budget shortfall, staffing levels would be reduced, including in the safety department where 3 positions from the overall 93 positions were cut. In addition, at another transit agency we visited, one official cited staffing levels being stretched to the point where it is difficult to conduct the necessary rail car maintenance to keep the system running. Training challenges for large transit agencies have included difficulties in ensuring that staff receive needed safety-related training—such as training in track safety, fire and evacuation, risk assessment, and the inspection and maintenance of track and equipment—due to financial constraints as well as the limited availability of technical training. Some experts identified ensuring adequate levels and frequency of training as key challenges for large transit agencies. Some cited training cuts as being commonplace when budget cutbacks occur despite its importance and link to safety. All the transit agencies we visited identified a challenge in having employees participate in safety training either due to the inability of their agencies to pay for training or to cover employees’ positions while they attend training, or a combination of both. For example, some officials explained that if a train operator attends safety training another train operator must work an extra shift to cover for the operator attending training. The transit agency pays for overtime hours for the extra shift worked by the train operator. Officials at some of the transit agencies we visited told us that these additional costs for training can be prohibitive. A recent APTA report identified safety training as well as supervisory and leadership training as top training needs for the industry. The large transit agencies we visited have different types of training programs available for their staff. For example, one transit agency has a large in-house training program that provides safety training and certification for their staff. Each department within the agency tracks employee training schedules, participation, and goals. At another transit agency, officials explained that, while they do some of their training in- house, they rely to a great extent on on-the-job training. Officials from three transit agencies noted that the availability of apprenticeship programs and external technical training, such as training in how to inspect rail and signals, is limited. One transit agency official and one state safety oversight agency official mentioned that the transit industry often relies on on-the-job training. According to APTA officials, on-the-job training is a vital part of transit agencies’ training programs and can mitigate institutional knowledge loss as attrition occurs. However, they also noted that transit agencies often have not formalized their on-the-job training by documenting key elements to be covered and that this type of training is not carried out consistently among transit agencies. The transit agencies we visited also sent staff to training courses offered by DOT’s Transportation Safety Institute and FTA’s National Transit Institute. 
However, due to the high costs of traveling for training—including lodging and transportation costs—most of the transit agencies we visited cited difficulty in participating in such training opportunities. Transit agencies have attempted to find more cost effective ways of addressing this problem. For example, officials from three transit agencies told us they have offered to host DOT and FTA training at their agencies to reduce the travel costs associated with staff attending safety training courses. Employees who have not had adequate safety-related training may be more likely to commit errors that can cause accidents. For example, in a 2009 investigation on how NYCT inspectors identified and reported defects in subway platform edges—which caused three transit riders within 3 years to fall onto the tracks after defective boards broke under their weight—the transit system’s Office of the Inspector General identified the lack of training on accurately and consistently identifying safety hazards at platform edges as contributing to the accidents. The office recommended that NYCT provide intensive and continuing training for platform inspectors. In response, NYCT developed and implemented a training program in May 2009 on identifying platform edge defects for all station managers and supervisors. In addition, in five of the eight rail transit accident investigations conducted by the NTSB since 2004, employee errors, such as not following procedures, were identified as a probable cause of the accidents. According to one expert we interviewed, training can help prevent accidents by preventing employee complacency and inattention in regards to safety rules and procedures. Some experts noted that attention to safety becomes more, not less, important as employees gain experience, as system familiarization leads some workers to drop their focus on safety. NTSB officials cited the importance of periodic refresher training for employees to ensure that staff maintain the skill set needed to identify and resolve safety issues. Another benefit of adequate training is helping to prepare the transit workforce to handle pending retirements. Currently, no industry standards exist for what an adequate level of safety- related training should be for transit agency staff. According to APTA, the transit industry lacks a standard training curriculum for transit employees and, as a result, transit safety-related training at transit agencies lacks consistency and is not always of high quality. FTA officials have also identified a lack of consistent training throughout the transit industry. According to one expert we interviewed, because of the lack of consistent training standards, the management of individual transit agencies has to determine on its own what safety training is needed for agency employees. According to NTSB officials, without minimum training requirements, the level of training available at each transit agency will vary, which can result in differing safety outcomes for each agency. Achieving a state of good repair is a challenge the largest transit agencies face that can impact their ability to ensure the safety of their heavy and light rail systems. In general, state of good repair is a term that transit officials use to refer to the condition of transit assets—for example, rail tracks, elevated and underground structures, rail cars, signals, ties, and cables (see fig. 2). 
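To illustrate how the condition of such assets might be tracked, the sketch below shows one possible shape for a transit asset record and a simple check against expected useful life. The fields, categories, and thresholds are illustrative assumptions, not an FTA data standard or any agency's actual inventory format.

```python
from dataclasses import dataclass

@dataclass
class TransitAsset:
    asset_type: str          # e.g., "rail car", "track segment", "signal"
    location: str
    condition: str           # e.g., "poor", "marginal", "adequate", "good"
    age_years: int
    expected_life_years: int

    def near_or_past_useful_life(self, margin_years: int = 2) -> bool:
        """Flag assets within a few years of, or already beyond, their
        expected useful life (an illustrative threshold, not a standard)."""
        return self.age_years >= self.expected_life_years - margin_years

fleet = [
    TransitAsset("rail car", "Yard A", "marginal", age_years=34, expected_life_years=35),
    TransitAsset("signal", "Line 1, segment 12", "good", age_years=8, expected_life_years=30),
]
flagged = [a for a in fleet if a.near_or_past_useful_life()]
print(f"{len(flagged)} of {len(fleet)} assets are near or past their expected useful life")
```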
In a study of the seven largest rail transit systems completed in 2009, FTA determined that more than a third of these agencies’ assets were in poor or marginal condition, indicating that they were near or had already surpassed their expected useful life. At six of the large transit agencies we visited, according to FTA estimates, the proportion of rail transit assets considered to be in poor or marginal condition ranged from zero percent, at LA Metro’s relatively new system, to 41 percent, at the much older and larger NYCT system. Efforts to achieve a state of good repair include maintaining, improving, rehabilitating, and replacing assets. The delay of some of these efforts can affect safety. Officials at one transit agency identified potential safety risks that could arise from delayed repairs, including worn tracks that could contribute to derailments, failures with the signal system that could allow for collisions, and failures with the traction power cable that could cause fires in subway tunnels. However, according to FTA and transit agency officials, transit agencies prioritize funding for state of good repair efforts to ensure that repairs important for safety are not delayed. All the transit systems we visited reported taking measures to ensure that their systems are safe in planning their state of good repair efforts. For example, one transit agency has reduced cleaning and other maintenance not critical for system safety as it continues to fund safety improvements. According to officials from this transit agency, less critical system safety items, such as escalator and elevator maintenance, have been put on a prolonged maintenance schedule. However, officials at this transit agency also stated that the agency had reached a point where further budget cuts would cause deterioration in system safety. In another example, one transit agency we visited has delayed the approximately $500 million replacement of subway fans which would provide for better ventilation because the agency determined that this was not a high safety priority. Agencies have made efforts to maintain safe operation of their system despite delays in addressing identified state of good repair maintenance or replacement needs. For example, officials at one transit agency we visited told us that they have implemented “slow zones” where trains run at lower speeds to help ensure safe operating conditions on aging track. In some cases, unaddressed poor asset conditions have contributed to accidents. For example, in its investigation of a 2006 derailment on the CTA system that injured 152 people, NTSB found that rail track problems that should have placed the tracks out of service were not identified and repaired. NTSB found that the track problems were readily observable and should have been identified and corrected. According to FTA officials, the transit industry has been slow to adopt asset management practices that would allow transit agencies to efficiently manage state of good repair needs. Officials noted that reasons for this slowness include the cost of development and implementation of asset management practices as well as the diversity of assets across and within transit systems. Transit asset management is a strategic approach for transit agencies to manage their transit assets and plan appropriately for rehabilitation and replacement. Asset management practices can help agencies decide how best to prioritize their investments, which can help ensure that safety needs are addressed. 
Such practices include tracking assets and their conditions and using this information to conduct long- term capital planning. However, no common standards for asset management practices exist and transit agencies use varying methods for determining the condition of their assets. A recent FTA study found that the use of these asset management practices at large transit agencies varied widely. Another component of asset management is the compilation of asset inventories by transit agencies. FTA defines an asset inventory as a current and comprehensive listing of all major assets used in the delivery of transit services, compiling the attributes of asset type, location, condition, age, and history, among other things. According to FTA, while some of the nation’s larger transit systems, among others, have developed asset inventories specifically to assist with capital planning purposes, not all have done so and currently no industry standard or preferred method for retaining asset inventory data exists. Furthermore, not all large transit agencies conduct comprehensive assessments of their asset conditions on a regular basis. Investments that transit agencies have made in previous years on state of good repair efforts have not kept pace with asset deterioration. According to FTA’s 2009 study, an estimated $50 billion is needed to bring the seven largest rail transit systems into a state of good repair. FTA found that these agencies were investing $500 million less than the annual investment needed to prevent this state of good repair backlog from increasing. Based on FTA’s estimates, the proportion of these agencies’ assets exceeding their useful life would increase from 16 percent to more than 30 percent by 2028 if funding levels remain unchanged. The state of good repair backlog for six of the seven transit agencies that we visited varies, in part due to system characteristics such as age, size, and use of the system (see fig. 3). According to NTSB and FTA officials, having a large state of good repair backlog does not necessarily mean that a transit system is unsafe. NYCT has a considerably higher backlog in comparison with the other transit agencies we visited. For example, its backlog is more than five times that of CTA, the next largest backlog of the agencies we visited. The backlog for the five remaining transit agencies ranges from $5 million to about $5 billion. LA Metro’s state of good repair backlog is much smaller in comparison to the other transit agencies we visited in part due to the young age of its heavy and light rail systems. These backlogs can be much larger than these agencies’ capital budgets. For example, the state of good repair backlog for NYCT is $27.31 billion while its 5-year capital budget is $12.32 billion. According to a 2010 FTA study of the transit industry as a whole, state of good repair investment backlogs are higher for heavy rail than light rail, reflecting the relatively young age of light rail assets in comparison to heavy rail assets. Recent budget cutbacks and budgetary shortfalls have negatively impacted transit agencies’ ability to sufficiently invest to prevent the worsening of their state of good repair backlogs and asset conditions. All of the rail transit agencies we visited cited financial constraints as affecting their ability to achieve a state of good repair. FTA has various efforts underway that may help instill a more robust safety culture at transit agencies. 
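The backlog estimates cited above can be put in rough perspective with some simple arithmetic, sketched below. The calculation assumes, purely for illustration, that the backlog grows linearly by the amount annual investment falls short of the level needed to hold it steady; it is not FTA's estimation methodology.

```python
# Figures reuse the FTA and agency estimates cited above (billions of dollars);
# the linear growth assumption is an illustrative simplification.
backlog_2009 = 50.0        # seven largest rail systems combined
annual_shortfall = 0.5     # investing $500 million less per year than needed

def projected_backlog(years_ahead: int) -> float:
    return backlog_2009 + annual_shortfall * years_ahead

print(f"Illustrative 2028 backlog: about ${projected_backlog(2028 - 2009):.1f} billion")

# Comparing NYCT's reported backlog with its 5-year capital budget
nyct_backlog = 27.31
nyct_capital_budget_5yr = 12.32
print(f"NYCT backlog is roughly {nyct_backlog / nyct_capital_budget_5yr:.1f} times its 5-year capital budget")
```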
Through the Transit Cooperative Research Program, FTA has recently begun a study on safety culture at transit agencies. Given the difficulty of defining safety culture, this effort has the potential to more clearly communicate what a strong safety culture at transit agencies entails. The project will look at the culture of the working environment in which serious accidents occur, elements of an effective safety culture in a transit agency, and best practices for transit organizations to implement an effective safety culture. DOT’s draft Strategic Plan also notes the importance of encouraging DOT, government partners, safety advocates, and industry leaders to adopt a strong and consistent safety culture that does not accept the inevitability of fatalities on the nation’s transportation systems. According to FTA officials, their safety guidance, outreach, and training provided by the National Transit Institute and Transportation Safety Institute, have helped encourage transit agencies to discuss and examine institutional safety culture. An example of these efforts cited by FTA officials is a FTA-produced video, “A Knock at Your Door.” The video re- enacts fatal rail transit accidents to underscore the importance of safety procedures. FTA officials also mentioned that they have encouraged discussions about the importance of safety culture at roundtable meetings with transit agency management and other officials, teleconferences, and training classes. In addition, FTA has also sent letters to transit agencies following incidents to, among other things, bring incidents and safety culture trends to the attention of transit agency management. FTA officials were uncertain how much transit agencies use such guidance and outreach, as well as what impact these efforts have on safety. FTA has distributed nearly 500 copies of its safety video to rail transit agencies, state safety oversight agencies, and others. More information on current and planned efforts by FTA to address safety culture challenges at transit agencies is available in appendix III. Proposed legislation would give FTA the authority to set and enforce safety standards, which could also strengthen transit agencies’ safety culture through increased oversight, in addition to assistance. If passed, this legislation would result in FTA receiving authority to directly regulate rail transit safety and, in cooperation with the states, to oversee and enforce rail transit systems’ compliance with these regulations. We testified in December 2009 that these changes in oversight would bring FTA’s authority more in line with that of some other modal administrations within DOT, such as FRA. Additionally, the DOT Secretary has testified that with such authority, FTA would become more proactive in setting safety thresholds that would result in greater consistency and uniformity across transit systems in the United States. In our testimony, we noted that providing FTA and participating states with such authority could help ensure compliance with standards and improved safety practices, and might prevent some accidents as a result. However, we also noted that Congress may need to consider a number of issues in deciding whether and how to implement such legislation. These include how best to balance federal versus state responsibilities, how to ensure that FTA has adequate qualified staff to carry out such a program, and what level of resources to devote to the program. 
In addition to these efforts, FTA has recently formed the Transit Rail Advisory Committee for Safety. The committee is expected to provide information, advice, and recommendations—including recommendations for instilling a safety culture at transit agencies—to the Secretary of Transportation and the FTA Administrator on all matters relating to the safety of U.S. public transportation systems and activities. Members of the committee include representatives with expertise in safety, transit operations, or maintenance; representatives of stakeholder interests that would be affected by transit safety requirements; persons with policy experience, leadership, or organizational skills; and regional representatives. The committee held its first meeting on September 9–10, 2010, and established two workgroups, one tasked with researching safety planning models for transit agencies and the other with identifying the best model for organizing a state safety oversight agency organization. Both of these workgroups were tasked to produce recommendations based on their work in May 2011. The safety planning model workgroup could help strengthen safety culture through its work to determine the best safety management system principles for transit agencies of any size to enhance rail transit safety, including policy practices, stakeholder relationships, and any desired changes to current law or regulations. NTSB officials, transit agency officials, experts we met with, and others have proposed that FTA take additional steps to help transit agencies address safety culture challenges. These have included: Develop nonpunitive safety reporting programs. As previously discussed, nonpunitive systems can alleviate employee concerns and encourage participation in safety reporting. Nonpunitive systems can include voluntary, anonymous reports by employees that are reviewed by an independent, external entity. NTSB has recommended that FTA facilitate the development of nonpunitive safety reporting programs at all transit agencies that would collect safety reports and operations data from employees in all divisions. Safety department representatives from their operations, maintenance, and engineering departments and representatives from labor organizations would regularly review these reports and share the results of those reviews across all divisions of their agencies. FRA is piloting a voluntary confidential reporting program for workers in the railroad industry consistent with NTSB’s recommendation and the Federal Aviation Administration has established such a program for air carrier employees, air traffic controllers, and others. FTA officials told us that identifying operating errors in a nonpunitive way is important and that they have begun research through the Transit Cooperative Research Program to examine ways to improve compliance with safety rules at transit agencies, including the use of nonpunitive reporting models. FTA plans to report on the results of this work by late 2011. Increase efforts to encourage a strong safety culture. In addition, APTA and some transit agency officials have called on FTA to do more to develop and share information on establishing a strong safety culture at transit agencies. One expert we met with noted that establishing and enforcing regulations will not necessarily bring about an improvement in safety culture in the rail transit industry. 
APTA officials and officials at one large transit agency noted that FRA pilot projects aimed at addressing accidents caused by human error and identifying ways to better manage safety have helped encourage a strong safety culture in the freight railroad industry and that FTA could foster positive changes in safety culture in the rail transit industry through such methods. While FTA has various efforts underway to instill safety culture at transit agencies, these do not include pilot projects to evaluate or test safety culture concepts and ideas. FTA has provided some assistance to help transit agencies address staffing challenges, but its safety-related assistance has focused primarily on providing training. FTA has reported that it has a compelling interest in transit workforce development given its large investment in and oversight of transit. FTA has supported research on transit workforce challenges— including recruitment and retirement issues—through its Transit Cooperative Research Program. FTA’s Southern California Regional Transit Training Consortium has worked to establish a model mentor/internship program that can be used by transit agencies of any size. These programs run in conjunction with local community colleges, where a primary objective is to introduce students to transit work, particularly maintenance and other support. Ultimately, this program allows transit agencies to hire from a greater pool of transit-trained interns. FTA’s fiscal year 2011 budget request also described a proposed effort to design programs to help transit agencies build and develop a workforce with sufficient skills to fill transit jobs of the future. These efforts can help transit agencies recruit and hire qualified employees and address staffing challenges involving an aging workforce. To help address transit agency safety training challenges, FTA has provided funding to support a variety of training classes. Through programs managed by the National Transit Institute and the Transportation Safety Institute, FTA has supported training for transit agency employees. Both of these organizations offer safety classes attended by transit agency employees, as well as by state safety oversight agency staff. To avoid duplication, the National Transit Institute focuses on training for frontline employees, such as track workers and operators, while the Transportation Safety Institute provides classes for supervisory and management personnel. Classes have included current rail system safety principles and online fatigue awareness. In fiscal year 2010, the National Transit Institute and the Transportation Safety Institute held 220 training sessions related to safety and more than 6,700 transit agency staff took part in this training. FTA has also provided specialized training aimed at transit agencies that have experienced recent safety incidents. For example, FTA recently concluded training on rail incident investigation and system safety for WMATA staff. In all, FTA has delivered seven courses to assist WMATA staff in receiving critical safety training. In another example, through the Transit Technology Career Ladder Partnership Program, FTA has funded partnerships in four states aimed at training transit employees to become proficient in safety practices and procedures. Currently, FTA is drafting a 5-year safety and security strategic plan for training. The plan will cover safety technical training for staff working at FTA, state safety oversight agencies, and transit agencies. 
While one aim of the plan will be to prepare FTA and state staff to handle new responsibilities should legislation be enacted that would change their oversight role for rail transit safety, FTA also intends to use the plan to identify improvements needed in the training it provides to transit agencies. Potential improvements include re-evaluating the levels and types of training that FTA supports. FTA officials estimated the training plan would be completed in May 2011. Officials also told us that they are collaborating with officials at APTA, state safety oversight agencies, and FRA to obtain their views on how to better provide training to transit agencies. In its fiscal year 2011 budget request, FTA has proposed additional resources to provide training for transit agencies, state safety oversight agencies, and FTA officials. More information on current and planned efforts by FTA to address staffing and training challenges at transit agencies is available in appendix III. A legislative proposal, as well as some APTA officials and others, identified additional efforts that, if adopted, might improve transit agencies’ abilities to address their staffing and training challenges. These include: Formulate a national approach to staffing and training. In 2009, the House of Representatives Committee on Transportation and Infrastructure issued draft legislation to reauthorize surface transportation programs that would require FTA to form a national council to identify skill gaps in transit agency maintenance departments, develop programs to address the recruitment and retention of transit employees, and make recommendations to FTA and transit agencies on how to increase apprenticeship programs, among other things. Furthermore, this proposed legislation as well as APTA and the Transportation Learning Center called for a national curriculum or certification program that would establish some level of training standardization for transit agency employees. APTA and transit agency officials have noted that potential benefits include achieving a level of consistency in safety training across the country as well as minimum thresholds for transit agency staff. FTA has created curriculum development guidelines to help transit agencies establish their own training curricula. Due in part to differences in transit agencies’ operating environments and system technologies, FTA officials reported that in developing their upcoming safety and security strategic plan for training, they may examine whether setting standards for a national training curriculum would be appropriate. Increase technical training. NTSB officials and some of the experts and transit agency officials we met with stated that FTA should increase the technical components of the training for transit agency employees that it supports. Transit agency officials reported that training provided by the National Transit Institute and Transportation Safety Institute includes valuable safety information, but overall the training provided is introductory and does not cover enough technical aspects of safety. According to NTSB officials, transit agency safety staff need periodic, refresher training to continue to learn and more technical training to adequately understand and perform their job. Technical aspects could include the overall mechanics and engineering involved in rail transit operations, as well as how problems with equipment can lead to unsafe conditions. 
Some state safety oversight and transit agency officials we met with said that available technical training is limited and that FTA could create a training curriculum that other organizations, such as local community colleges, could use to teach safety-related classes. Similarly, APTA has reported the need to develop core curricula to be used at universities and community colleges and to enhance partnerships between transit agencies and higher education in order to provide additional training and educational opportunities for current and future transit workers. Increase federal support for training. In a past report, the Transportation Learning Center has noted that, of the billions of dollars the federal government provides to transit agencies annually, little is invested in human capital—that is, the people, knowledge, and skills necessary to provide reliable and safe service. In response, the center has recommended that federal funding provide support for transit agencies’ workforce training. In addition, officials at APTA and transit agencies, as well as some experts we met with, favored increasing federal support to cover training and related travel costs for transit agency employees. FTA has provided funding to state safety oversight agency staff to cover such costs to attend training offered by the National Transit Institute and the Transportation Safety Institute, but this support generally has not been extended to transit agency staff. FTA officials reported that they support training offered around the country and that demand is high. Transit agencies also have the option of hosting training to reduce travel and other costs. FTA’s assistance to transit agencies to help achieve a state of good repair—and therefore help ensure safe operations—has primarily consisted of providing grant funding, although FTA has also conducted studies and is taking steps to provide more guidance to agencies on asset management. The two major FTA grant programs transit agencies have used to help achieve a state of good repair are the Fixed Guideway Modernization Program and the Urbanized Area Formula Program. In fiscal year 2010, these FTA grants provided nearly $6 billion for transit agencies’ capital projects and related planning activities. This support has helped transit agencies maintain system facilities such as stations and other equipment. Funding also has assisted transit agencies in rehabilitating or purchasing rail vehicles and modernizing track and other infrastructure to improve operations. Besides supporting achieving a state of good repair, FTA’s grant funding programs can support other safety- related improvements, such as upgrading signal and communications systems. In its fiscal year 2011 budget request, FTA has proposed increasing assistance to transit agencies through a new $2.9 billion state of good repair program for bus and rail systems. This program would, for the first time, provide funding to transit agencies that exclusively focus on achieving a state of good repair. Besides providing funds, another activity FTA has recently engaged in involves helping transit agencies improve their asset management practices in order to enhance their ability to achieve a state of good repair and ensure safety. As previously discussed, FTA officials reported that the transit industry has been slow to adopt asset management practices that would allow efficient management of state of good repair and some related safety needs. 
As a result, transit agencies may have limited knowledge of asset conditions and how to best use scarce resources to ensure an efficient and safe operation. In DOT’s fiscal year 2010 appropriation, $5 million was made available to FTA to develop standards for asset management plans, provide assistance to grant recipients engaged in the development or implementation of an asset management system, improve data collection, and conduct a pilot program designed to identify best practices for asset management. FTA has begun to undertake these efforts. It has reviewed national and international asset management practices and concluded that major opportunities for improvements exist in the United States. FTA is also currently soliciting for projects with transit agencies of various modes and sizes to demonstrate different aspects of good asset management practices. According to FTA officials, improved asset management by transit agencies will include better approaches for prioritizing rehabilitation and replacement projects and will therefore allow agencies to better ensure safety. Other FTA technical assistance in this area includes the development of capital planning tools and asset inventory guidelines, research on integrating maintenance management with capital planning, training and guidance to educate transit agency staff on asset management, and enhanced asset data collection. As previously discussed, while no common standards exist for asset management, it can include tracking asset condition and use, as well as planning appropriately for rehabilitation and replacement. The National Surface Transportation Policy and Revenue Study Commission has reported that, to achieve a state of good repair, local governments, states, and other entities must develop, fund, and implement an asset management system to ensure the maximum effectiveness of federal capital support. We have previously reported that in some surface transportation programs, including transit programs, agencies often do not employ the best tools and approaches to ensure effective investment decisions, an area where asset management can help. See appendix III for other current and planned efforts by FTA to help transit agencies address state of good repair challenges. Legislative proposals, one FTA study, and several organizations we met with have identified additional efforts that, if adopted, might hold transit agencies accountable for improving the management of their assets and therefore better ensure safety. These included: Linking grant funding to the establishment of asset management systems. Congress has considered legislation that would direct DOT to establish and implement a national transit asset management system. This legislation would direct FTA to define a state of good repair and for the first time require transit agencies that receive federal funding to establish asset management systems. This would help transit agencies to prioritize which assets to maintain, rehabilitate, and replace to help ensure safe operating conditions. Separately, a report by the Senate Committee on Appropriations directs FTA to issue a notice of proposed rulemaking by September 30, 2011, to implement asset management standards requiring transit agencies that receive FTA funds to develop capital asset inventories and condition assessments. FTA officials told us that they have no plans to develop such a rulemaking at this time, but would do so if required by statute. 
FTA is to report to Congress in June 2011 on its investigations into asset management. We have previously identified principles that could help drive re-examination of federal surface transportation programs, including ensuring accountability for results by entities receiving federal funds and using the best tools and approaches, such as grant eligibility requirements, to emphasize return on targeted federal investment.

Increasing available transit agency asset data. Another option that FTA has put forward for it and Congress to consider involves establishing a system that ensures regular reporting on transit agencies' capital assets, with a consistent structure and level of detail for this reporting. FTA officials noted that they already collect transit vehicle data from agencies, but that they need more information to effectively report on transit agency assets. FTA is considering expanding transit agency reporting requirements to include data on local agency asset inventory, holdings, and conditions. FTA has reported that these data would support better national needs assessments and transit asset condition monitoring than is currently possible. Also, this would encourage transit agencies to develop and maintain their own asset inventory and condition monitoring systems.

Besides the additional efforts outlined above, there are other proposals that would make more grant funding available to large transit agencies. FTA and transit agency officials reported that transit agencies maintaining older systems have received a smaller percentage of available federal funding as the number of transit systems competing for the same amount of funding has increased. For example, in its 2009 study on the state of good repair in the transit industry, FTA reported that the seven largest rail transit systems, which carry 80 percent of the nation's rail transit riders and maintain 50 to 75 percent of the nation's rail transit infrastructure, have received 23 percent of the total federal funding eligible for rail state of good repair investment. In other words, while total federal support for transit infrastructure has increased, these agencies' share of the funding available for achieving a state of good repair has declined. To address this, FTA included an option in its 2009 study for it and Congress to consider modifying existing funding formulas to factor in system age, among other things. Congress is also considering new ways to potentially fund transit and other surface transportation projects, including the formation of a National Infrastructure Bank.
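To make the asset inventory and condition reporting discussed above more concrete, the following is a minimal sketch in Python of how an agency might represent asset records and rank a rehabilitation backlog. The asset names, condition scores, dollar figures, and the 2.5 threshold are hypothetical illustrations, not FTA data, requirements, or any agency's actual inventory.

    # Illustrative only: a minimal asset inventory with condition ratings,
    # used to flag assets below an assumed state-of-good-repair threshold
    # and rank them for rehabilitation or replacement.
    SGR_THRESHOLD = 2.5  # assumed cutoff on a 1 (poor) to 5 (excellent) scale

    assets = [
        {"asset": "Interlocking signals, Line A", "condition": 1.8, "replacement_cost": 12_500_000},
        {"asset": "Traction power substation 7", "condition": 3.9, "replacement_cost": 6_000_000},
        {"asset": "Rail cars, series 2000", "condition": 2.2, "replacement_cost": 48_000_000},
    ]

    # Flag assets in poor or marginal condition and rank them, worst first,
    # so scarce rehabilitation and replacement funds go to the riskiest assets.
    backlog = sorted(
        (a for a in assets if a["condition"] < SGR_THRESHOLD),
        key=lambda a: a["condition"],
    )

    for a in backlog:
        print(f"{a['asset']}: condition {a['condition']}, replacement cost ${a['replacement_cost']:,}")

A fuller system would track many more attributes (age, usage, inspection history), but even this simple ranking shows the kind of prioritization that consistent inventory and condition data would support.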
Additionally, DOT is considering options for improving transportation safety, including rail transit safety. Therefore, the proposal that DOT eventually puts forward may address some or all of the safety challenges that we cite. Furthermore, FTA's 5-year safety and security training plan, when it is completed, may include improvements that help address the training challenges that transit agencies face.

As FTA undertakes efforts to help transit agencies address their safety culture, staffing and training, and state of good repair challenges, setting performance goals and measures can help it target these efforts and track results. Performance goals can help organizations clearly identify the results they expect to achieve, prioritize their efforts, and make the best use of available resources. Performance measures can help organizations track the extent to which they are achieving intended results. In the case of FTA, such prioritization is essential, given the relatively small number of staff it has devoted to safety and state of good repair efforts. For example, while FTA has requested 30 additional staff in fiscal year 2011 in anticipation of receiving authority to strengthen its safety oversight role, it currently has 15 to 17 full-time employees working in its Office of Safety and Security, as well as staff from other FTA offices working on state of good repair efforts. The ability to prioritize efforts and track progress will become even more important in the event that Congress enacts legislation that would give FTA greater oversight authority over transit agencies and expand its transit safety responsibilities. Furthermore, as FTA is faced with proposals to assume even more responsibility for transit safety in the future—through, for example, setting asset management or training curriculum standards for transit agencies—it is even more essential that it clearly identify the specific results it is trying to achieve, target its efforts, and track progress toward achieving those results. We have identified a number of leading practices agencies can implement to help set or enhance performance goals and measures. While FTA has created plans and other tools to help guide and manage its safety efforts, it has not fully adopted these practices. The next sections discuss these leading practices and the extent to which FTA has followed them.

We have found that successful organizations try to link performance goals and measures to strategic goals and that, in developing these goals and measures, such organizations generally focus on the results that they expect their programs to achieve. DOT has identified an overall strategic safety goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities, and FTA has identified measures in its fiscal year 2011 budget request related to that goal. In its Annual Performance Plan for fiscal year 2011, FTA identified a general safety goal of further defining its leadership role in the area of surface transportation safety as well as some desired outcomes of its safety efforts, such as increased public confidence in the safety of public transportation and improved safety culture at transit agencies nationwide. It also identified some strategies for achieving this goal and these outcomes, such as establishing the Transit Rail Advisory Committee for Safety and assessing safety and security training.
However, FTA has not identified specific performance goals that make clear the direct results its safety activities are trying to achieve and related measures that would enable the agency to track and demonstrate its progress in achieving those results. Without such specific goals and measures, it is not clear how FTA’s safety activities contribute toward DOT’s overall strategic goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities. In addition, in its fiscal year 2011 budget request FTA included the goal of improving the rail transit industry’s focus on safety vulnerabilities. FTA also identified some activities associated with this safety goal, such as submitting legislation to Congress. However, FTA did not clearly articulate the expected results associated with this goal and activities. Nor did FTA explain how such results would be measured and how they relate to DOT’s strategic goals. Linking FTA’s performance goals to departmental goals can provide a clear, direct understanding of how the achievement of annual goals will lead to the achievement of the agency’s strategic goals. We have previously reported that a clear relationship should exist between an agency’s annual performance goals and long-term strategic goals and mission. FTA officials told us that it can be difficult to set performance goals and measures for the agency’s safety efforts due to its limited authority over safety in the transit industry. In past work, we have reported that developing goals and measures for outcomes that are the result of phenomena outside of federal government control is a common challenge faced by many federal agencies. However, despite this challenge, measuring program results and reinforcing their connection to achieving long-term strategic goals can create a greater focus on results, help hold agencies and their staff accountable for the performance of their programs, and assist Congress in its oversight of agencies and their budgets. Performance goals and measures that successfully address important and varied aspects of program performance are key aspects of a results orientation. While FTA has identified various activities aimed at improving rail transit safety, it has not established clear results-oriented goals and measures that address key dimensions of the performance of its various efforts related to safety, such as its training and state of good repair programs. FTA could address important dimensions of program performance in different ways. For example, the agency could set goals and measures to address identified safety challenges, such as those identified in this report, or to capture results of its various safety-related efforts, such as its training programs or asset management initiatives. Alternatively, performance goals and measures could relate to the causes behind certain types of transit accidents, such as setting a goal of reducing the number of accidents where human error is a probable cause in a given year. Without goals related to various dimensions of program performance, FTA has not identified the intended results of its various safety-related efforts. Limited use of performance measures by FTA makes it difficult to determine the impact of these efforts on safety. 
While FTA has identified overall measures of transit safety—the number of transit injuries and fatalities per 100 million passenger-miles traveled—its annual performance plan lacks quantifiable, numerical targets related to specific goals, against which to measure the performance of its efforts. FTA’s fiscal year 2011 budget request did include a performance measure to track the percentage of federal formula funding that transit agencies used for replacement versus new capital purchases by the end of fiscal year 2011 and related this measure to its goal of improving the rail industry’s focus on safety vulnerabilities. However, this measure captures only one of the types of results FTA might expect to achieve from its various safety efforts. In the past, FTA safety planning documents have linked specific FTA performance goals and measures with DOT’s overall strategic safety goals; however, FTA is no longer using these documents. For example, FTA’s 2006 Rail Transit Safety Action Plan included safety goals and measures, such as reducing total derailments per 100 million passenger miles, major collisions per 100 million passenger trips, and total safety incidents per 10 million passenger trips. These goals and measures are clearly linked to DOT’s overall strategic goal of working toward the elimination of transportation-related injuries and fatalities, including rail transit injuries and fatalities. The plan also included a number of supporting priorities, such as reducing the impact of fatigue on transit workers, and how the agency planned to achieve them. The plan also included performance measures and target goals for FTA’s state safety oversight program, such as the number of dedicated state personnel and necessary levels of training and certification. FTA officials reported that the goals and measures captured in this and other past planning documents were no longer in use because of changes in safety environments. At present, FTA has no active strategic plan, and FTA officials estimated the new strategic plan would be completed in late 2011. Other agencies are presently making use of practices to enhance performance goals and measures for safety activities. For example, FRA has created a set of performance goals and measures that address important dimensions of program performance. In its proposed fiscal year 2011 budget, FRA included specific safety goals to reduce the rate of train accidents caused by various factors, including human errors and track defects. These goals are numeric, with a targeted accident rate per every million train miles. Collecting such accident data equips FRA with a clear way to measure whether or not those safety goals are met. FRA’s budget request has also linked FRA’s performance goals and measures with DOT’s strategic goals. Another DOT agency, the Federal Motor Carrier Safety Administration, has a broad range of goals and related performance measures that it uses to provide direction to—and track the progress of— its enforcement programs, including measures of the impact of its enforcement programs on the level of compliance with safety regulations and on the frequency of crashes, injuries, and fatalities. The agency’s end goal—to reduce crashes, injuries, and fatalities through its reviews—aligns with and contributes to DOT’s overall strategic safety goals. While these leading practices are useful, problems with FTA’s rail transit safety data could hamper the agency’s ability to measure its safety performance. 
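The rate-based measures discussed above are simple ratios of incident counts to exposure, which is one reason the reliability of the underlying counts matters. The sketch below, in Python, illustrates how such a measure could be computed and compared against a numeric target; the counts, mileage, and target are invented for the example and are not FTA or DOT figures.

    # Illustrative only: computing a rate-based safety measure and comparing
    # it with a hypothetical numeric target.
    injuries_and_fatalities = 42           # assumed annual count
    passenger_miles = 3_600_000_000        # assumed annual passenger-miles

    rate_per_100m = injuries_and_fatalities / (passenger_miles / 100_000_000)
    target_per_100m = 1.0                  # hypothetical performance target

    print(f"Rate: {rate_per_100m:.2f} injuries and fatalities per 100 million passenger-miles")
    print("Target met" if rate_per_100m <= target_per_100m else "Target not met")

Because the numerator comes directly from accident and incident reporting, errors or duplicates in that reporting flow straight into the measured rate.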
We have found that data contained in FTA’s State Safety Oversight Rail Accident Database—which is compiled from data provided by state safety oversight agencies and transit agencies—are unreliable. Specifically, we found unverified entries, duplicative entries, data discrepancies, and insufficient internal controls. Without reliable data, it is difficult for FTA to measure performance based on goals to be captured in annual performance plans or agency strategic plans. Baseline and trend data also provide context for drawing conclusions about whether performance goals are reasonable and appropriate. Establishing procedures that ensure reliable data is an important internal control necessary to validly measure performance based on numerical targets. Additionally, decision makers can use such data to gauge how a program’s anticipated performance level compares with past performance. FTA officials have acknowledged the important role that data play in making decisions regarding how to address challenges to rail transit safety. FTA has implemented changes to the data collection process to address some of the data problems we identified and plans to take additional actions to validate and correct discrepancies contained in its State Safety Oversight Rail Accident Database, but these plans do not identify specific efforts to establish procedures that would improve data reporting in the future. To ensure the accuracy and reliability of the State Safety Oversight Rail Accident Database, we have recommended that FTA develop and implement appropriate internal controls to ensure that data entered are accurate and incorporate an appropriate method for reviewing and reconciling data from state safety oversight agencies and other sources. Without clear, specific, and varied performance goals and related measures linked to DOT’s strategic goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities, the intended results of FTA’s safety efforts are unclear. Furthermore, the absence of clear goals and measures to guide and track progress limits FTA’s ability to make informed decisions about its safety strategy and its accountability for its safety performance. Finally, without reliable data, FTA cannot establish useful performance measures, making it difficult to determine whether safety programs are accomplishing their intended purpose and whether the resources dedicated to program efforts should be increased, used in other ways, or applied elsewhere. Rail transit systems will remain vital components of the nation’s transportation infrastructure and will need to continue to provide safe service for the millions of commuters that rely on them daily. Through its assistance efforts, FTA has worked with transit agencies to foster a safer operating environment for these passengers. Planned, new assistance efforts by FTA, as well as legislative proposals to enhance FTA’s regulatory authority over transit safety, have the potential to further enhance safety on rail transit systems. Some additional proposals concerning new steps FTA could take to address safety challenges facing transit agencies also have the potential to improve rail transit safety. For example, while FTA is already working to instill safety culture at transit agencies, creating pilot projects to examine new approaches for instilling a strong safety culture at these transit agencies may have merit. 
Setting standards for a national training curriculum for transit employees may also ensure that a minimum threshold of training is achieved across the transit industry, if such standards could account for differences in transit agencies’ environments and technologies. Asset management shows promise in both helping transit agencies and protecting federal investment. Similarly, holding agencies that receive federal funds accountable for using asset management practices could help ensure that federal funds aimed at addressing this problem are effectively used. DOT is uniquely positioned to examine various proposals to discern any worthwhile options for implementation going forward, given available resources and other competing priorities, and to propose in its draft surface transportation reauthorization legislation any options deemed worthwhile. We are not recommending at this time that DOT take actions on proposals for improving rail transit safety, as the department is considering various options for improving transportation safety, including rail transit safety, in developing its reauthorization proposal. As FTA helps transit agencies ensure safety, setting clear performance goals and related measures for its safety efforts, based on leading practices, will be vital to improve FTA’s ability to set priorities and determine progress—both in overseeing transit agencies and in helping them maintain safety on their systems. Setting clear performance goals will help FTA to communicate a direction for its safety efforts and establish benchmarks for performance. Tracking progress through performance measures will help FTA in planning its future efforts and will help hold the agency accountable for achieving results. However, FTA must take further actions to improve the reliability of its safety data before it can track its safety performance based on new measures and goals. To ensure that FTA targets its resources effectively as it increases its safety efforts and is able to track the results of these efforts, we recommend that the Secretary of Transportation direct the FTA Administrator to use leading practices as FTA develops its plans for fiscal year 2011 and in the future. In particular, the Administrator should create a set of clear and specific performance goals and measures that (1) are aligned with the department’s strategic safety goals and identify the intended results of FTA’s various safety efforts and (2) address important dimensions of program performance. We provided a draft of this report to DOT and NTSB for their review and comment. Both provided technical comments and clarifications, which we incorporated into the report as appropriate. DOT agreed to consider our recommendation. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Chair of the National Transportation Safety Board. In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834, or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Of the 48 rail accident investigations that the National Transportation Safety Board (NTSB) has reported on since 2004, 7 were on heavy rail transit systems operated by the Chicago Transit Authority (CTA) and the Washington Metropolitan Area Transit Authority (WMATA), and 1 was on the light rail transit system operated by the Massachusetts Bay Transportation Authority (MBTA). As shown in table 2, these accidents collectively resulted in 13 fatalities, hundreds of injuries, and millions of dollars in property damage. In its reports, NTSB identified the probable causes of accidents as well as factors that contributed to these accidents. In five of these eight accident investigations, NTSB found the probable cause to involve employee errors, such as the failure of the train operator to comply with operating rules and of track inspectors to maintain an effective lookout for trains. Of the remaining three accidents, NTSB found that problems with equipment were a probable cause of two accidents and that weaknesses in management of safety by the transit agency, such as its management of maintenance and of equipment quality controls, were a probable cause of all three accidents. For six of these eight accidents, contributing factors identified involved deficiencies in safety management or oversight, including weaknesses in transit agencies' safety rules and procedures and in their processes for ensuring employees' adherence to these rules and procedures, lack of a safety culture within the transit agency, and lack of adequate oversight by the transit agency's state safety oversight agency and the Federal Transit Administration (FTA). In one accident report, NTSB found as a contributing factor the lack of safety equipment or technologies, such as a positive train control system that can prevent trains from colliding. In addition, as shown in table 2, NTSB has ongoing investigations of six accidents that occurred on heavy and light rail transit systems.

To determine the challenges that the largest rail transit systems face in ensuring safety, we conducted site visits, examined documents, conducted interviews, and consulted relevant literature. We obtained documents from and interviewed officials at five large heavy rail transit systems and three large light rail transit systems operated by seven transit agencies. The five heavy rail systems are those operated by the Metropolitan Transportation Authority New York City Transit (NYCT), WMATA, CTA, MBTA, and Bay Area Rapid Transit (BART). The three light rail systems are operated by MBTA, the San Francisco Municipal Transportation Agency (SF Muni), and the Los Angeles County Metropolitan Transportation Authority (LA Metro). We obtained budget documents, accident and audit reports, corrective action plans, and staffing and training information, among other information and documentation, from each system. Also, we interviewed representatives from these transit agencies and their respective state safety oversight agencies about the transit agencies' challenges. We also analyzed published NTSB investigations of accidents on heavy and light rail transit systems since 2004 to help us determine the causes of and factors contributing to rail transit accidents in recent years. We used data from FTA's National Transit Database (NTD) to select these eight transit systems.
The NTD data we used for our selection criteria were (1) annual ridership, as measured by unlinked passenger trips and passenger miles, (2) the number of rail transit vehicles in revenue service operations, and (3) total track mileage. To determine whether these NTD data were reliable for our purposes, we interviewed FTA officials who are knowledgeable about the database and assessed the accuracy of these data elements. We determined that these specific data elements were sufficiently reliable to be used as selection criteria.

To determine the extent to which FTA's assistance addresses the safety challenges faced by the largest transit agencies, we reviewed FTA documents on funding, state of good repair initiatives, technical assistance programs, and guidance and outreach related to rail transit safety. We also obtained information on transit safety training from the National Transit Institute and the Transportation Safety Institute. We interviewed officials from FTA and NTSB and representatives of the American Public Transportation Association (APTA). We asked officials from the transit systems we visited and their respective state safety oversight agencies for their assessment of FTA's assistance efforts. We reviewed applicable federal regulations, laws, and legislative proposals. In addition, we consulted our prior work on performance management and rail transit issues. We further contracted with the National Academies' Transportation Research Board to identify rail transit safety experts from the transit industry, academia, labor unions, and the rail consulting community. We interviewed 12 experts on the challenges that large rail transit agencies face in ensuring safety, the factors that contribute to rail transit accidents, and potential ways that FTA could improve its safety assistance efforts (see table 3). We also interviewed officials from NTSB and representatives of APTA on these topics.

In addition, as part of this review, we assessed FTA's safety data to determine whether they were sufficiently reliable for us to use to report on trends in rail transit accidents as well as causes of those accidents. During that assessment, we identified inaccuracies, discrepancies, and duplicative entries, and determined that these data were not sufficiently reliable for these purposes and decided to conduct a separate review of the data's reliability. We are issuing a report on our findings and recommendations based on this review.

Appendix III: DOT Safety-Related Assistance Efforts That Address Transit Agencies' Safety Culture, Staffing, and Training Challenges

The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $8.4 billion to fund public transportation throughout the country. Recovery Act funds have primarily supported grants for capital projects at transit agencies, although some funds have been used for operating expenses. As of August 25, 2010, approximately $190 million had been obligated for use as operating expenses.

In addition to the contact named above, Judy Guilliams-Tapia, Assistant Director; Catherine Bombico; Matthew Cail; Martha Chow; Antoine Clark; Colin Fallon; Kathleen Gilhooly; Brandon Haller; Hannah Laufe; Grant Mallie; Anna Maria Ortiz; and Kelly Rubin made significant contributions to this report.

Although transit service is generally safe, recent high-profile accidents on several large rail transit systems, notably the June 2009 collision in Washington, D.C., that resulted in nine fatalities and 52 injuries, have raised concerns.
The Federal Transit Administration (FTA) oversees state agencies that directly oversee rail transit agencies' safety practices. FTA also provides assistance to transit agencies, such as funding and training, to enhance safety. GAO was asked to determine (1) the challenges the largest rail transit systems face in ensuring safety and (2) the extent to which assistance provided by FTA addresses these challenges. GAO visited eight large rail transit systems and their respective state oversight agencies, reviewed pertinent documents, and interviewed rail transit safety experts and officials from FTA and the National Transportation Safety Board (NTSB). The largest rail transit agencies face several challenges in trying to ensure safety on their systems. First, according to some experts we interviewed, the level of safety culture--awareness of and organizational commitment to the importance of safety--varies across the transit industry and is low in some agencies. NTSB found that the lack of a safety culture contributed to the June 2009 fatal transit accident in Washington, D.C. Second, with many employees nearing retirement age, large transit agencies have found it difficult to recruit and hire qualified staff. It is also challenging for them to ensure that employees receive needed safety training because of financial constraints and the limited availability of technical training. Training helps ensure safe operations; NTSB has identified employee errors, such as not following procedures, as a probable cause in some significant rail transit accidents. Third, more than a third of the largest agencies' assets are in poor or marginal condition. While agencies have prioritized investments to ensure safety, delays in repairing some assets, such as signal systems, can pose safety risks. The transit industry has been slow to adopt asset management practices that can help agencies set investment priorities and better ensure safety. FTA has provided various types of assistance to transit agencies to help them address these challenges, including researching how to instill a strong safety culture at transit agencies, supporting a variety of safety-related training classes for transit agency staff, and providing funding to help agencies achieve a state of good repair. The Department of Transportation (DOT) has proposed legislation that would give FTA the authority to set and enforce rail transit safety standards, which could help improve safety culture in the industry. FTA is also planning improvements to its training program and the development of asset management guidance for transit agencies, among other things. Some legislative proposals, studies, experts, and agency officials have identified further steps that FTA could take to address transit agencies' safety challenges, such as requiring transit agencies to implement asset management practices. Some of these suggested further steps may have the potential, if implemented, to enhance rail transit safety. DOT is currently developing a legislative proposal for reauthorizing surface transportation programs and may include new rail transit safety initiatives in this proposal. In addition, clear and specific performance goals and measures could help FTA target its efforts to improve transit safety and track results. GAO has identified leading practices to establish such performance goals and measures, but FTA has not fully adopted these practices. 
For example, FTA has not identified specific performance goals that make clear the direct results its safety activities are trying to achieve and related measures that would enable the agency to track and demonstrate its progress in achieving those results. Without such specific goals and measures, it is not clear how FTA's safety activities contribute toward DOT's strategic goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities. Furthermore, problems with FTA's rail transit safety data could hamper the agency's ability to track its performance. GAO is making recommendations for improving these data in a separate report (GAO-11-217R). To guide and track the performance of FTA's rail transit safety efforts, DOT should direct FTA to use leading practices to set clear and specific goals and measures for these efforts. DOT and NTSB reviewed a draft of this report and provided technical comments and clarifications, which we incorporated as appropriate. DOT agreed to consider the recommendation. |
To identify which isotopes are produced, sold, or distributed either by the Isotope Program or NNSA and how the two agencies make isotopes available for commercial and research applications, we reviewed the DOE Isotope Program’s information on available isotopes, isotope sales data, and information on NNSA’s isotopes. We also visited Oak Ridge National Laboratory in Tennessee, where the Isotope Program’s business office and the program’s inventory of stable isotopes are located, to view production facilities and interview officials about isotope production and sales. We interviewed officials at the national laboratories that produce isotopes for the Isotope Program: Brookhaven National Laboratory in New York, Idaho National Laboratory in Idaho, Los Alamos National Laboratory in New Mexico, Oak Ridge National Laboratory in Tennessee, and Pacific Northwest National Laboratory in Washington State. We also interviewed headquarters officials with the Isotope Program and NNSA about isotope production and how the two entities work together. To determine what steps the Isotope Program takes to provide isotopes for commercial and research applications, we reviewed the Isotope Program’s production schedules, pricing policy, and documents related to how the program gathers information on customers’ needs. We also interviewed representatives from commercial companies and researchers who purchase isotopes from the Isotope Program. We interviewed officials from the National Isotope Development Center; the Isotope Program; and Brookhaven, Los Alamos, and Oak Ridge National Laboratories because officials at these locations are involved in producing and selling isotopes to customers. To determine the extent to which DOE is assessing risks facing the Isotope Program, we reviewed reports from the Nuclear Science Advisory Committee’s isotope subcommittee and the report from the isotope workshop DOE held in 2008. We also reviewed the strategic plans, risk assessment plans, and related documents from Brookhaven, Los Alamos, and Oak Ridge National Laboratories because the Isotope Program is the steward of isotope production at these sites. We reviewed and compared lists of high-priority isotopes that were prepared by the Isotope Program, the Nuclear Science Advisory Committee’s isotope subcommittee, the National Institutes of Health, and stakeholders at the 2008 isotope workshop. We also interviewed officials from the Isotope Program and Brookhaven, Los Alamos, and Oak Ridge National Laboratories to learn about risk assessment planning at each site and for the Isotope Program. In addition, we compared actions the Isotope Program is taking to assess risks with federal standards for internal control. We conducted this performance audit from June 2011 to May 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Isotope production and distribution have been part of DOE’s mission since at least 1954, when the Atomic Energy Act of 1954 specified the role of the U.S. government in isotope distribution. DOE’s Isotope Program fills this role by providing isotopes to support the national and international need for a reliable supply for use in medicine, industry, and research. 
The Isotope Program provides both radioactive isotopes, called radioisotopes, and stable isotopes, which are not radioactive. In addition, the Isotope Program provides a range of isotope-related services to customers worldwide. For example, the program may lease some stable isotopes and also provides irradiation and isotope-processing services for research and commercial applications.

DOE transferred the Isotope Program from the department's Office of Nuclear Energy to its Office of Science in 2009, at which time DOE restructured the program. The program currently consists of four DOE headquarters employees who oversee operations and set policy, plus the National Isotope Development Center, which is a virtual organization consisting of DOE contract employees located at Los Alamos National Laboratory and Oak Ridge National Laboratory. National Isotope Development Center employees carry out day-to-day operations of the Isotope Program, which include interacting with the isotope user community through a variety of outreach activities, monitoring short-term and long-term isotope demand, coordinating isotope production across DOE's isotope production facilities, and distributing isotopes. The National Isotope Development Center includes DOE contract employees at the Isotope Business Office, located at Oak Ridge National Laboratory, who manage business operations involved in the production, sale, and distribution of isotopes. In addition, officials from the National Isotope Development Center and DOE headquarters coordinate with many federal programs, including the National Institutes of Health, to identify current and future isotope needs.

The Isotope Program produces most of its radioisotopes at three DOE production sites: the linear particle accelerators at Brookhaven National Laboratory in New York and Los Alamos National Laboratory in New Mexico, and the nuclear reactor at Oak Ridge National Laboratory in Tennessee. The program also produces a small number of radioisotopes at the Pacific Northwest National Laboratory in Washington State and at Idaho National Laboratory. The DOE facilities associated with the Isotope Program are recognized as uniquely capable of producing radioisotopes. Although the Isotope Program uses these DOE sites to produce radioisotopes, the program does not manage all the sites' operations. Rather, the Isotope Program shares the use of these sites with other missions, which consist of a diverse combination of DOE activities related to nuclear science, materials research, or defense. The production sites are therefore not always available to the Isotope Program, and at times the program may not control the timing and duration of isotope production.

The Isotope Program relies on appropriations and revenues from isotope sales for funding its operations. Both yearly appropriations and sales revenues are deposited into a revolving fund from which the program draws funds to operate its facilities, produce isotopes, pay employees' salaries, and fund research, among other activities. Funds remain available to the program in the revolving fund, which allows the program to carry over balances from year to year, giving the program budgeting flexibility. Table 1 shows the Isotope Program's revolving fund balances, annual appropriations, annual sales revenues, and obligations to operate the program for fiscal years 2009 through 2011.
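As a rough illustration of the revolving fund mechanics described above, the sketch below applies the basic carryover arithmetic in Python. All dollar amounts are invented for the example and are not the program's actual figures from table 1.

    # Illustrative only: revolving fund carryover arithmetic with invented figures.
    beginning_balance = 14_000_000   # balance carried into the fiscal year
    appropriation = 19_000_000       # annual appropriation deposited into the fund
    sales_revenue = 26_000_000       # isotope sales revenue deposited into the fund
    obligations = 52_000_000         # drawn to operate facilities, produce isotopes, pay salaries

    ending_balance = beginning_balance + appropriation + sales_revenue - obligations
    print(f"Balance carried into the next fiscal year: ${ending_balance:,}")  # $7,000,000

Because the ending balance carries over, a shortfall in sales revenue in one year can be absorbed by the fund rather than forcing an immediate cut in operations.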
The Isotope Program’s annual spending on research and development is generally aimed at developing new or more efficient isotope production techniques. The Isotope Program sold isotopes or provided isotope-related services to more than 100 customers in fiscal year 2011, both in the United States and internationally, with 6 of those customers accounting for more than 80 percent of all sales revenue in fiscal year 2011. More than 95 percent of the Isotope Program’s annual revenue came from the sale of eight different isotopes in fiscal year 2011; these eight isotopes generated almost $26 million in revenue (see table 2). DOE’s Isotope Program produces or makes available for sale and distribution over 300 different isotopes for research and commercial applications. NNSA generates or provides additional isotopes that are transferred to other federal agencies or sold by the Isotope Program (see app. I). The program may produce or make available to customers more than 300 different isotopes, but fewer than that number are sold in a given year. In fiscal year 2011, for example, the program sold less than 170 distinct isotopes. The isotopes sold by the Isotope Program can be categorized as (1) radioisotopes currently produced by the Isotope Program at DOE production sites; (2) stable isotopes from the Isotope Program’s inventory, which are no longer produced in the United States; and (3) isotopes generated or provided by NNSA as by-products of its nuclear weapons program (see table 3). The Isotope Program is responsible for the production and sale of 55 radioisotopes produced at five DOE laboratories—Brookhaven, Los Alamos, Oak Ridge, Pacific Northwest, and Idaho National Laboratories. In any given year, the Isotope Program does not produce all 55 radioisotopes; rather, it produces and sells those for which customer demand exists and is unmet by supply from commercial sources. At times, the Isotope Program may choose to begin or stop producing a given isotope depending on whether commercial entities are meeting demand, whether an isotope’s market price is so high that it inhibits research, or whether DOE has the facilities necessary to produce the isotope, among other considerations. For example, in 2009 the Isotope Program reestablished production of californium-252, which is used in a variety of applications, including oil exploration and medical applications, because of customer demand. Californium-252 was previously produced by the Isotope Program in partnership with NNSA and sold through the Isotope Program. When NNSA no longer needed californium-252 for its mission, it stopped supporting its production in 2007, according to an Isotope Program official. The Isotope Program worked with a coalition of commercial customers to continue producing californium-252 to meet the needs of the coalition and researchers. Isotope Program officials indicated, however, that a change like this in the program’s production portfolio does not happen often. In addition to the radioisotopes it produces, the Isotope Program also maintains an inventory of 243 stable isotopes that it sells to customers. These stable isotopes were produced by DOE until the late 1990s at DOE facilities that are no longer in use, and since these isotopes are stable, they can remain in storage almost indefinitely. Because stable isotopes are no longer produced, supplies of some of them have been exhausted, and supplies of others are dwindling. 
Specifically, according to current Isotope Program data, nine stable isotopes that were in the program’s inventory are no longer available, and six have less than 10 years’ supply at current rates of use (see table 4). According to program officials, the Isotope Program occasionally purchases quantities of some stable isotopes from foreign sources, such as Russia, in an effort to maintain the program’s supply. Isotope Program officials explained that the program buys stable isotopes from foreign sources and then resells them to domestic customers because the Isotope Program can take steps to ensure isotope quality and offer other services that foreign suppliers are unwilling to provide, such as leasing some stable isotopes for research or other applications. Given dwindling supplies in DOE’s inventory and increasing reliance on foreign sources, whose supplies for some isotopes are also dwindling, the Nuclear Science Advisory Committee recommended in 2009 that the Isotope Program reestablish capability to produce stable isotopes in the United States. The Isotope Program is funding several projects in response to this recommendation, including the development of stable isotope production at Oak Ridge National Laboratory, in part, using funds it received in fiscal year 2009 from the American Recovery and Reinvestment Act. Isotope Program officials stated that the project is expected to be completed in 2014. The Isotope Program sells an additional 10 isotopes that are provided by NNSA. The program does not control the supply of these isotopes but coordinates with NNSA to sell and distribute them. Isotope Program officials coordinate with NNSA’s Office of Nuclear Materials Integration, which was created in 2008 to work across DOE to, among other things, make NNSA’s isotopes and other materials available to government entities. For example, NNSA has a stockpile of lithium-6, some of which it provides to the Isotope Program to sell; lithium-6 is used in research and security equipment to detect neutrons given off by other nuclear materials. The Isotope Program also coordinates with NNSA to produce isotopes that the Isotope Program does not have the capability to produce, such as americium-241, which is used in smoke detectors and medical diagnostic devices. To provide isotopes for commercial and research applications, the Isotope Program takes steps to determine the demand for isotopes, coordinate production across production sites, and set prices for isotopes, but the program is not using thorough assessments to establish prices for commercial isotopes. The Isotope Program has flexibility to set prices at market levels for isotopes sold for commercial applications but instead, for most isotopes where the program is the only domestic supplier, sets prices at the level necessary to recover its cost to produce them. In setting prices for commercial isotopes, however, the Isotope Program is not assessing the value of the isotope to the customer or prices of alternatives, as permitted under its pricing policy. As a result, the Isotope Program may be forgoing revenue that could be used to further its mission and address unmet needs. To ensure the availability of isotopes for research and commercial applications, the Isotope Program annually determines demand, coordinates production across its production sites, and sets prices for selling isotopes. 
To determine annual demand, Isotope Program officials said they start with a general sense of demand based on historical sales data and frequent interaction with customers, through which they learn about changes in isotope needs. According to program officials, the Isotope Program asks customers to provide information on expected demand for the next year and as far as 5 years into the future, although some customers said such estimates are difficult to make. The Isotope Program also takes customers' orders for isotopes throughout the year via e-mail, telephone, or the program's website. These orders, for radioisotopes and stable isotopes, are received by the Isotope Program's business office. To determine annual demand for strontium-82, for example, Isotope Program officials ask customers how much strontium-82 they need for the coming year, and each customer commits to a certain amount for that year. These customers then provide updates throughout the year to clarify actual strontium-82 quantities and delivery dates.

Orders for stable isotopes are received and processed throughout the year by the Isotope Program, but producing radioisotopes to meet demand requires considerable planning, according to program officials. When the Isotope Program receives an order for a stable isotope, such as calcium-48, it can be filled from the existing inventory of stable isotopes. In contrast, orders for radioisotopes are taken throughout the year and used to plan production during the Isotope Program's annual production planning meeting. The outcome of the meeting is a production schedule for the production sites, which identifies radioisotopes needed for the coming year. The production schedule outlines the projected dates when each isotope will be produced and which site will produce it, but the exact schedule depends on a variety of factors. Specifically, because the Isotope Program generally does not control the operation of reactors or accelerators, it uses the facilities at the same time as other DOE programs, thus limiting the Isotope Program's capability to produce isotopes, according to program officials. For instance, according to program officials, the accelerator at Los Alamos National Laboratory typically operates from July through December, and the accelerator at Brookhaven National Laboratory typically operates from January through June. In addition, because many radioisotopes decay rapidly after production, they need to be delivered in a timely manner, and officials must consider customers' desired delivery times when determining the production schedule. For instance, strontium-82 has a half-life of 26 days and, according to one customer, must arrive predictably to be used for its intended purpose. Other isotopes have even shorter life spans and need to be delivered on a precise day before they decay too much to be useful. An Isotope Program official told us that the production schedule is adjusted throughout the year as customers' demands change, as new isotopes are ordered, as facilities experience unanticipated shutdowns, or for other reasons. During our discussions with several Isotope Program customers, we found that they were generally satisfied with the timeliness of isotope delivery.
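To illustrate why delivery timing matters for short-lived radioisotopes, the sketch below applies the standard radioactive decay relationship, N(t)/N0 = (1/2)^(t/T), to the 26-day half-life of strontium-82 noted above. The transit delays shown are hypothetical examples, not actual shipping times.

    # Illustrative only: fraction of a radioisotope's activity remaining after a
    # shipping delay, using N(t)/N0 = 0.5 ** (t / half_life). The 26-day value is
    # the strontium-82 half-life noted above; the delays are hypothetical.
    HALF_LIFE_DAYS = 26.0

    def fraction_remaining(days_elapsed: float) -> float:
        return 0.5 ** (days_elapsed / HALF_LIFE_DAYS)

    for delay_days in (2, 7, 14):
        print(f"{delay_days:>2} days in transit: {fraction_remaining(delay_days):.0%} of activity remains")

Even a 2-week delay leaves only about two-thirds of the original activity of strontium-82, and isotopes with shorter half-lives lose useful activity far faster, which is why delivery dates must be planned around production runs.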
According to program officials, direct costs include labor costs and costs for chemical processing, among others; indirect costs include facility maintenance costs and other infrastructure costs. These officials said that the Isotope Program uses cost data from the production sites to determine the sales price for each isotope and prices isotopes differently depending on whether the intended use is for research or commercial applications. For research applications, isotope prices are set to recover only direct costs. In addition, according to program officials, research isotopes are priced by unit, instead of batch, so researchers can buy small quantities of isotopes and not have to pay for production of an entire batch. (A batch of isotopes is the amount produced by an entire production cycle; researchers may require a smaller quantity of an isotope than what is produced in a batch.) Research customers thus pay only the direct costs charged by the Isotope Program, with indirect costs covered by the program's yearly appropriation. Program officials told us that the intent of this subsidy is to promote independent research on uses of isotopes by making them more affordable to the research community. Overall, the result is that some research isotopes are priced significantly lower (from about 9 percent to 75 percent less, in some cases) than the same isotope used for commercial applications. For isotopes used in commercial applications, prices are generally set to recover, at a minimum, the full cost of isotope production, including both direct costs and indirect costs. For orders of large quantities of commercial isotopes, the per-unit cost of production is lower, so the Isotope Program can provide volume discounts. In addition, according to program officials, the Isotope Program adds a nominal fee to isotopes sold commercially, which amounts to approximately 10 percent in additional costs for commercial isotopes: 6 percent for administrative costs to process orders and 4 percent as a contingency charge to cover unanticipated events. A recent unanticipated event, for example, occurred in fiscal year 2011. According to a program official, orders for strontium-82, which had accounted for more than a third of the program's sales revenue in 2010, decreased significantly and unexpectedly as the result of a recall of the cardiac imaging device that was the main application for strontium-82. According to program officials, the Isotope Program's sales revenue declined by over $5 million from July 2011 through January 2012 as a result, and program officials said they had to draw from the revolving fund to maintain operations. For stable isotopes sold from the program's existing inventory, Isotope Program officials told us that prices are based on historical production costs adjusted annually for inflation, rather than on current replacement costs; the prices are the same regardless of whether they are used for research or commercial applications. Officials told us that they do not base the prices of stable isotopes on current replacement costs because DOE does not have the capability to produce these stable isotopes. Isotope Program officials told us that market studies were in the early stages of being carried out in preparation for reestablishing the capability to produce stable isotopes in 2014; these studies are intended to help the program determine which stable isotopes to produce and in what quantities.
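To make the cost-recovery arithmetic described above concrete, the sketch below applies the structure program officials described: research prices recover only direct costs, while commercial prices recover direct plus indirect costs and carry the roughly 10 percent surcharge (6 percent administrative, 4 percent contingency). This is our own illustration, not an Isotope Program tool; only the percentages come from the officials' description, and the dollar figures for direct and indirect costs are hypothetical.

```python
# Minimal sketch of the pricing structure described by program officials.
# The cost figures below are hypothetical; only the percentages come from
# the description above.

ADMIN_FEE = 0.06        # administrative surcharge on commercial sales
CONTINGENCY_FEE = 0.04  # contingency surcharge on commercial sales

def research_price(direct_cost: float) -> float:
    """Research sales recover only direct costs; indirect costs are
    covered by the program's yearly appropriation."""
    return direct_cost

def commercial_price(direct_cost: float, indirect_cost: float) -> float:
    """Commercial sales recover full cost plus the ~10 percent surcharge."""
    full_cost = direct_cost + indirect_cost
    return full_cost * (1 + ADMIN_FEE + CONTINGENCY_FEE)

# Hypothetical batch: $40,000 direct and $20,000 indirect production cost.
print(f"research price:   ${research_price(40_000):,.0f}")             # $40,000
print(f"commercial price: ${commercial_price(40_000, 20_000):,.0f}")   # $66,000
```

With these hypothetical figures, the research price comes out roughly 40 percent below the commercial price for the same material, a simplified illustration of why research prices can run well below commercial prices for the same isotope.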
The Isotope Program generally charges full cost recovery for commercial isotopes, but the program has not fully assessed the pricing of most of the commercial isotopes it sells, as required by its current policy, such as assessing the value of the isotopes to the customer or prices of similar isotopes. As a result, the program may be discouraging others from producing isotopes and, at the same time, forgoing sales revenue that could further support its mission to deliver needed isotopes, maintain isotope production infrastructure, and support research, in addition to addressing unmet needs. The Atomic Energy Act of 1954 states that the federal government should be reasonably compensated for isotopes it sells and that isotope prices should not discourage commercial isotope producers from entering the market. Aside from these constraints, the Isotope Program has broad authority in setting isotope prices. To this end, the Isotope Program established a pricing policy in 1990 that provides latitude for establishing prices at full cost recovery or at market prices that are higher or lower than full cost recovery, but also states that when a market price already exists that is higher than full cost recovery, the market price should be used. The policy also states that prices should be assessed annually and that additional factors may be considered when establishing prices, including the number of suppliers, demand, competitors' prices, and the value of the isotope to the customer. This policy appears to be consistent with guidance from the Office of Management and Budget on the sale of government goods and services, which suggests that sales should be self-sustaining and based on market prices. In cases where no market currently exists, such as many of the commercial isotopes produced and sold by the Isotope Program, the Office of Management and Budget's guidance states that prices can be set by taking into account the prevailing prices for goods that are the same as or substantially similar to those provided by the government and then adjusting the supply made available, prices of the goods, or both so that there will be neither a shortage nor a surplus. According to program officials, at present the Isotope Program has set the price above full cost recovery for helium-3 and two other isotopes. These three isotopes are priced above full cost recovery because, according to officials, market prices exist that are greater than the full cost of production, and setting the prices lower would distort their market prices. Program officials offered two main reasons why the Isotope Program instead sets prices at full cost recovery for most of the commercial isotopes it sells. First, officials told us they believe many customers are sensitive to prices and already consider prices for isotopes to be too high. Isotope Program officials said that some potential customers are already unwilling or unable to pay current prices for many isotopes and that some existing customers have suggested that any price increases would make isotopes unaffordable and force them to seek other isotope sources. Second, Isotope Program officials stated that the program's role is not to maximize revenue from isotope sales but to make isotopes widely available. Isotope Program officials told us that, consistent with the program's mission and the Atomic Energy Act, the Isotope Program strives to supply isotopes at reasonable prices to encourage their use.
For most of the isotopes it produces and sells, however, program officials told us that in instances where the Isotope Program is the only domestic supplier, the program has not formally determined the value of isotopes to commercial customers or prices of alternatives. Program officials told us that they gain a sense of customers’ value for isotopes through various interactions with these customers, although they did not provide a formal analysis as described in the pricing policy. According to documents provided by the Isotope Program, the program has also collected limited market information for a small number of isotopes, but these studies are outdated or do not consider pricing. For example, a market study provided by the Isotope Program that was conducted in 2002 projects the future demand and potential revenues for 25 different radioisotopes used in medicine over the next 5 to 10 years, but that study is now outdated. Additionally, according to one program official, the market study to be conducted for the Isotope Program’s isotopes beginning in 2012 is to provide information on which isotopes are in greatest demand so officials will know which stable isotopes to produce, although the study will not address isotope prices. Without formally assessing the value of isotopes to commercial customers or the prices of alternatives for isotopes where the Isotope Program is the only domestic supplier, the Isotope Program does not know if its full cost recovery prices for isotopes are in fact discouraging others from producing isotopes, discouraging commercial entities and researchers from developing alternatives, and/or encouraging overconsumption. If assessments of customers’ value for isotopes and the prices of potential alternatives show that prices can be increased above full cost recovery for some commercial isotopes, the additional revenue could be used to further the Isotope Program’s mission and address unmet needs. For example, revenues could be used to fund research for the development of new or more efficient production capabilities for additional isotopes. Also, the Nuclear Science Advisory Committee recommended in its report on opportunities and priorities for ensuring a robust national isotope program that the Isotope Program invest in a facility dedicated to producing radioisotopes. Such a facility, according to the advisory committee, is the most cost-effective option to position the Isotope Program to ensure continuous access to many of the needed radioactive isotopes. Program officials told us they were developing a new pricing policy, but because the policy is in draft form and subject to change, we were unable to determine, among other things, whether the new policy would provide direction on how commercial isotope prices are to reflect the value of the isotope to the customer, the prices of alternatives, or both. The Isotope Program has begun taking some actions to identify and mitigate risks to achieving its mission of producing isotopes, such as the risk of relying on sales of a small number of commercial isotopes for a large percentage of its revenues, but without first establishing clear, consistent program objectives, the program’s risk assessment efforts are not comprehensive. The Isotope Program is taking some actions to assess risks to achieving its mission, including identifying high-priority isotopes and using its revolving fund to mitigate risks from unforeseen events. 
Risk assessment first involves, according to federal standards for internal control, identifying and analyzing risks associated with achieving a program’s objectives and then determining how to manage such risks. Our analysis shows that the Isotope Program currently assesses risks through several methods. First, Isotope Program officials, National Isotope Development Center staff, and production site managers identify risks to providing isotopes by monitoring long-term changes in demand within the isotope community that could affect isotope supply. Unlike determining demand for annual production planning, these monitoring activities focus on changes that could influence isotope supply and demand in the longer term, such as new products that could eventually increase demand for a specific isotope, according to Isotope Program documents. According to program officials, long-term monitoring activities help them stay abreast of changes in the isotope community that may warrant adjustments to the program’s product portfolio. In addition, program officials told us that these activities play a role in long-range program planning, as well as informing decisions regarding research and development. Some monitoring activities are performed on a continuous basis, such as discussing new developments in isotope uses and production capacity with foreign isotope suppliers, while others occur once or a few times a year, such as attending industry conferences to collect information about new commercial products that use isotopes. To manage risks created by changes in demand, according to Isotope Program officials, the program gathers additional information on the issue and may convene workgroups that bring together isotope community stakeholders to discuss trends for one or several isotopes. For example, the program organized a working group in 2008 with representatives from the National Institutes of Health to explore supply and demand for medical research isotopes. It also convened a workshop of federal stakeholders in January 2012 to discuss isotope priorities, supply, and demand among federal entities. The Isotope Program also assesses risks to the program by identifying high-priority isotopes—those at risk of supply problems, either because the isotopes are already in short supply or are important to users. Five lists of high-priority isotopes have been created by isotope stakeholders, and Isotope Program officials said that they use the lists to set program priorities. The following describes each of the lists and the entity that created them: The 2008 workshop of isotope community stakeholders created an unranked list of more than 47 isotopes considered to be in short supply or unavailable from DOE for research and applications. In 2009, the National Institutes of Health isotope working group developed a list of important medical research isotopes that are not commercially available; the list was updated and ranked in order of priority in January 2012. In 2009, the Nuclear Science Advisory Committee’s isotope subcommittee produced a list of isotopes important for medical and scientific research purposes and prioritized them according to the importance of the research opportunities. In 2011, the National Isotope Development Center listed stable isotopes in priority order according to the importance of the isotopes in research and commercial applications. 
In 2011, the National Isotope Development Center listed specific isotopes called nuclear materials and heavy elements and prioritized them on the basis of importance of the isotopes in research and commercial applications. Program officials told us they use the high-priority lists to establish program priorities, such as determining what research and development initiatives to undertake. For example, according to program officials, for some of the listed isotopes, the program has reached out to universities to research new production methods. In addition, the Nuclear Science Advisory Committee's isotope subcommittee's list serves as a criterion for awarding research and development grants; research projects for isotopes on the list receive higher priority for funding than projects for isotopes not on this list. Four of the lists rank the isotopes in order of priority, and one does not; the prioritized lists rank isotopes according to different criteria. For example, the National Institutes of Health prioritized isotopes on the basis of their importance to medical research, while the National Isotope Development Center prioritized isotopes on the basis of their importance to research and commercial applications. In total, 104 different isotopes appear on the five lists (about 18 percent of the total number of isotopes currently available from the program). Although a few isotopes are found on more than one list, most isotopes are found on only a single list. The Isotope Program mitigates risks by using the flexibility of its revolving fund to help manage unexpected events, such as losses in revenues. The Isotope Program is authorized to carry over revenues and yearly appropriations in its revolving fund from fiscal year to fiscal year. The law authorizing the revolving fund provides the program broad discretion for managing the fund, stating that appropriations and revenues deposited into the fund are to be used for "activities related to the production, distribution, and sale of isotopes and related services." The program uses this flexibility in managing the revolving fund to prepare for and mitigate unexpected events. To this end, of the 10 percent fee the program adds to the price of isotopes sold to commercial customers, it deposits 4 percent into the revolving fund to cover unanticipated events. For example, the program drew on the fund to maintain operations in 2011 and 2012 in the face of a significant, unexpected decline in revenue from the sale of strontium-82. The Isotope Program also assesses and manages risks at its three primary isotope production facilities. The isotope production facilities at Oak Ridge and Los Alamos National Laboratories in 2011 developed plans that describe processes for identifying and managing risks at the sites that could be detrimental to isotope production. The plans identify which site functions, such as infrastructure, chemical processing, and shipping processes, should be assessed for risks and describe how site officials are to determine the likelihood and consequence of any identified risks. In conjunction with the plans, Oak Ridge and Los Alamos National Laboratories created spreadsheets for tracking risks, called risk registers, that list each identified risk, its likelihood, consequence, and mitigation strategy, among other things. Many of the identified risks focus on equipment failure or malfunction, such as risks that components of a processing facility shut down unexpectedly. Other risks are related to management and regulatory issues.
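The risk registers described above list each identified risk along with its likelihood, consequence, and mitigation strategy. As a rough illustration of how such a register can be used to rank site risks, the sketch below scores each entry as likelihood times consequence; the specific risks, the 1-to-5 scales, and the scores are hypothetical and are not drawn from the laboratories' actual registers.

```python
# Illustrative risk register (hypothetical entries, 1-5 scales),
# ranked by a simple likelihood x consequence score.

risks = [
    {"risk": "Processing hot cell component fails unexpectedly",
     "likelihood": 3, "consequence": 5, "mitigation": "Stock spare components"},
    {"risk": "Shipping container certification lapses",
     "likelihood": 2, "consequence": 4, "mitigation": "Track renewal deadlines"},
    {"risk": "Key operator retires without a trained replacement",
     "likelihood": 4, "consequence": 3, "mitigation": "Cross-train staff"},
]

# Highest-scoring risks surface first, which is one way a site could decide
# where to direct mitigation investments.
for entry in sorted(risks, key=lambda r: r["likelihood"] * r["consequence"],
                    reverse=True):
    score = entry["likelihood"] * entry["consequence"]
    print(f"score {score:>2}: {entry['risk']} -> {entry['mitigation']}")
```

A register kept in this form also makes it straightforward to roll individual site entries up into a program-wide view, something the site-specific plans and spreadsheets discussed here do not yet provide.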
Brookhaven National Laboratory has developed a similar risk-tracking spreadsheet that focuses exclusively on risks to production equipment. According to one program official, the risk management plans and spreadsheets help the program set priorities for investments that will help manage risks. For example, on the basis of Los Alamos National Laboratory’s risk register, the program decided to modify its facilities to reduce radiation risks. These risk management plans and risk registers are specific to the three production sites and do not identify risks to the entire Isotope Program. The plans also describe a similar process for identifying and managing opportunities. The third production site at Brookhaven National Laboratory has not developed a similar risk and opportunity assessment and management plan. The Isotope Program is taking risk assessment actions without first establishing clear, consistent objectives; that is, it does not identify and mitigate risks to achieving program objectives in a comprehensive way. One of the federal standards for internal control—risk assessment— states that a precondition to risk assessment is the establishment of clear, consistent objectives. Long-term goals and objectives describe how the program will implement its mission, when actions will be taken, and what resources are needed to reach these goals. Once objectives have been set, the program then identifies risks that could keep it from efficiently and effectively achieving those objectives at all levels. After risks have been identified, they are to be analyzed for their possible effect and decisions made on how to manage the identified risks. DOE’s Isotope Program has not established clear, consistent objectives to serve as a basis for risk assessment. Isotope Program officials told us the program is relying on two reports from the Nuclear Science Advisory Committee’s isotope subcommittee to guide its decisions and that these two reports provide adequate guidance. Together, these reports recommend 15 different long-term actions for the program but do not provide clear objectives for the program or a description of how those objectives are to be achieved. For example, one report recommends that the program construct and operate an electromagnetic isotope separator facility for stable and long-lived radioisotopes but does not describe how this recommendation is to be achieved. The report also does not provide criteria for measuring progress toward meeting this or other recommendations. Isotope Program officials told us, however, that the program is undertaking a new strategic planning process in 2012 to develop a 5-year strategic plan. Without clearly defined objectives that lay out what the program is to accomplish, the Isotope Program cannot be assured that its current risk assessment and mitigation efforts focus on the most significant issues that could impede achievement of its mission. For example, the program does not have objectives that could provide direction about which of the five high-priority isotope lists warrants the most attention. Instead, program officials reported that they take all the lists into account when making production and research decisions. They could not tell us if one list of isotopes is a higher priority than the others. Furthermore, without clear objectives, program officials cannot determine how important one isotope on a list is relative to isotopes on the other lists because they are prioritized using different criteria, or they are not prioritized. 
For example, thallium-203 is ranked as the most important isotope on the National Isotope Development Center's list of stable isotopes; actinium-225, astatine-211, and lead-212 are identified as the most important isotopes in medicine, pharmaceuticals, and biology in the report of the Nuclear Science Advisory Committee's isotope subcommittee; and californium-252 and radium-225 are identified as the most important isotopes for physical science and engineering in this same report. Without consolidating the multiple lists of high-priority isotopes, however, it is unclear which isotopes have greater priority than others. Thus, program managers may not be focusing limited resources on the most important isotopes. Furthermore, because the program does not have clear objectives, it cannot be assured that it is assessing and mitigating risks from all relevant external and internal sources. In particular, the program has not assessed risks associated with relying on a small handful of isotopes for a large percentage of annual revenue. This issue is important in the context of the unexpected decline in strontium-82 orders that occurred in 2011, which resulted in a large reduction in expected revenue. The program likely could not have anticipated this loss, but comprehensive risk assessment efforts might have identified the risk of relying on strontium-82 and a few other isotopes for a large amount of revenue. Without identifying all relevant risks, the program also cannot determine how to manage such risks. When the strontium-82 orders declined, the program was able to rely on its revolving fund to make up for unexpected revenue loss, but it may not always be able to do so. Isotope Program officials told us there is no guiding document for how the revolving fund should be spent or managed. Without guidance on how to manage the revolving fund in a way that helps mitigate risk, the program cannot be assured that it will be able to continue using the fund to both advance program missions and mitigate risks. For example, if the program unexpectedly loses revenue for several years in a row, the revolving fund may not provide sufficient reserves to maintain program operations. Managing the production and sale of over 300 different isotopes for various research, commercial, industrial, and medical applications is a daunting task. With a wide variety of customers, whose needs may change over time, it is difficult for the Isotope Program to determine demand, plan production, and project revenue streams to avoid shortages of important isotopes or interruptions in the revenues that help to sustain the program. The Isotope Program is taking several actions to assess demand and plan production. In addition, the Isotope Program has clearly defined under what circumstances it will charge reduced prices for research isotopes. The program has not, however, defined what factors it will consider when it sets prices for isotopes sold commercially, including defining under what circumstances it will set prices for such isotopes at or above full cost recovery. Without transparency in decisions on pricing, it is unclear if Isotope Program officials are setting prices consistently.
Moreover, in the absence of established market prices and without current information on the value customers place on isotopes and prices of similar products, the Isotope Program cannot ensure that the prices it sets are appropriate and thus may be forgoing revenues that could be used to further its mission and ensure the program's long-term viability. As the Isotope Program moves forward with its process to establish a 5-year strategic plan, creating clear goals and objectives is the first step in being able to identify and manage risks to achieving the program's mission. Identifying high-priority isotopes that may need additional oversight is a good step toward managing risks, but without consolidating those lists and prioritizing them, program managers may not direct limited resources toward the most important isotopes. Finally, when the Isotope Program's revenues from strontium-82 unexpectedly stopped, program officials were fortunate to have the revolving fund to mitigate the unexpected loss in revenue and maintain operations without disrupting supplies of other isotopes. Without clear guidance on when and how to use the revolving fund to mitigate future unexpected losses in revenue, the program cannot ensure that it will have sufficient funds to maintain operations, or for other activities, such as funding research and other projects that help the Isotope Program achieve its mission. We are making four recommendations to the Secretary of Energy designed to improve the Isotope Program's transparency in setting prices and efficiency in managing isotopes. Specifically, we recommend that the Secretary of Energy direct the Isotope Program to take the following four actions: Clearly define the factors to be considered when the program sets prices for isotopes sold commercially, including defining under what circumstances it will set prices at or above full cost recovery. This should include assessing, when appropriate, current information on the value of isotopes to customers and the prices of similar products. In conjunction with strategic planning efforts already under way, create clear goals and objectives to serve as a basis for risk assessment, identify risks to achieving its goals and objectives, and determine what actions to take to manage the risks. Consolidate the lists of high-priority isotopes so the program can ensure that its resources are focused on the most important isotopes. Establish clear guidance for managing the revolving fund to ensure that the fund is sufficient to use as a contingency for unexpected losses in revenue. We provided a draft of this report to DOE for review and comment. In its written response, reproduced in appendix II, DOE explained that our recommendations will generally be addressed through the Isotope Program's current efforts to update its pricing policy and develop a strategic plan. DOE took exception, however, to our characterization of how the Isotope Program sets prices for commercial isotopes. Specifically, according to DOE's letter, the Isotope Program does consider "value of isotopes to customers" when setting prices for commercial isotopes. Nevertheless, none of the documents provided by the Isotope Program during our review show that the program conducted a current, formal analysis of what customers are willing to pay for commercial isotopes. Our report points out that program officials gain a sense of the value customers place on commercial isotopes through informal interactions with the customers themselves.
Such interactions, in our view, do not provide a rigorous approach to determining a customer's value for commercial isotopes, as customers generally strive to obtain needed materials, including isotopes, at the lowest possible cost. We are encouraged to see that, according to DOE's comments, the Isotope Program's updated pricing policy is to identify which factors are to be considered in setting prices, including formal analysis of the value of commercial isotopes to customers. In its comments, DOE expressed concern that our report suggests maximizing revenue and pricing commercial isotopes to increase revenue. DOE explained that the Isotope Program generally sets prices to fulfill the mandate established by the Atomic Energy Act of 1954 to provide isotopes at prices that do not discourage their use. Our report does not emphasize maximizing revenue or setting prices solely to increase revenue. It does point out that the Isotope Program has not performed the formal market analyses required by its own pricing policy. DOE further stated that the Isotope Program considers several factors when determining prices for commercial isotopes, including a "bottom-up activity-based costing for isotope production," and it has initiated two market studies that will provide input into the assessment of market prices. Comprehensive market studies would determine the prices customers are willing to pay for isotopes and prices of alternatives, among other factors, and would thus determine if the Isotope Program's prices for commercial isotopes are set at the appropriate level. Such analyses would also show whether the full-cost-recovery price, which is used for all but three of the commercially sold isotopes, is resulting in unintended, but avoidable, consequences. General economic considerations suggest that setting prices of isotopes at artificially low levels could have unintended consequences such as discouraging other entities from producing isotopes, discouraging commercial entities and researchers from developing alternatives, and encouraging overconsumption. Furthermore, our report points out that the Atomic Energy Act of 1954 states that isotope prices should not discourage commercial isotope producers from entering the market. With regard to our recommendations, DOE's letter indicates that three of our four recommendations are being addressed through the Isotope Program's present efforts to update its pricing policy and develop a comprehensive strategic plan and risk assessment. With regard to our fourth recommendation, to consolidate and prioritize isotopes from the lists of high-priority isotopes, DOE stated that it "will need to assess the value added of doing an overall prioritization." DOE further states that even though an isotope may be a high priority for the isotope community, there is no guarantee that an entity is capable of producing it. In our view, this situation highlights the need for our recommendation. The Isotope Program has done outreach with the isotope community to identify the most important isotopes and has created a peer-review process that considers isotopes on the various high-priority lists as one of its factors in selecting projects for funding. This process alone, however, cannot ensure that the program's resources are accurately focused on the most-needed isotopes. Therefore, we believe it is up to the Isotope Program to consolidate the lists of high-priority isotopes and develop criteria to determine on which isotopes resources are to be focused.
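One way to consolidate the five high-priority lists despite their different ranking criteria is to normalize each isotope's position within its own list and then combine the normalized ranks, so that isotopes appearing on several lists accumulate a higher combined score. The sketch below is only one possible approach under assumptions we have made: the example lists are abbreviated to the top-ranked isotopes named above, the scoring rule is ours, and any consolidation method the Isotope Program adopts would need criteria the program itself agrees on.

```python
# Illustrative consolidation of separately ranked priority lists.
# List contents are abbreviated examples, not the actual five lists,
# and the normalized-rank scoring rule is an assumption for illustration.

lists = {
    "National Isotope Development Center (stable isotopes)":
        ["thallium-203", "calcium-48", "lithium-6"],
    "Nuclear Science Advisory Committee (medicine/pharma/biology)":
        ["actinium-225", "astatine-211", "lead-212"],
    "Nuclear Science Advisory Committee (physical science/engineering)":
        ["californium-252", "radium-225"],
}

scores = {}
for ranked in lists.values():
    n = len(ranked)
    for position, isotope in enumerate(ranked):
        # Normalized score: 1.0 for the top isotope, decreasing down the list.
        score = (n - position) / n
        scores[isotope] = scores.get(isotope, 0.0) + score

consolidated = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for isotope, combined in consolidated:
    print(f"{isotope:<16} combined score {combined:.2f}")
```

Isotopes that appear on more than one list would accumulate a score from each, pushing shared priorities toward the top; an unranked list could be handled by giving each of its entries the same fixed score.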
Finally, DOE’s letter stated that we mischaracterized NNSA’s mission, which does not include providing isotopes to stakeholders. We clarified this statement and have made changes throughout the report as needed. DOE also provided technical comments that we incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretary of Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This table identifies the isotopes provided at the time of this report for sale by the Department of Energy’s (DOE) Isotope Development and Production for Research and Applications program (Isotope Program). According to Isotope Program officials, the availability of these isotopes may change and some isotopes may be provided in different chemical forms. For example, bromine-79 is available as sodium bromide but also as potassium bromide, silver bromide, and ammonium bromide. The table also shows how the Isotope Program classifies each isotope—as a radioisotope or stable isotope—and if an isotope is provided by the National Nuclear Security Administration (NNSA) and sold by the Isotope Program. In addition to the individual named above, Ned H. Woodward, Assistant Director; Wyatt R. Hundrup; Katherine Killebrew; and Michael Krafve made key contributions to this report. Eric Bachhuber, Ellen W. Chu, R. Scott Fletcher, Cindy Gilbert, Jonathan Kucskar, Mehrzad Nadji, and Timothy M. Persons also made important contributions. | DOE is the only domestic supplier for many of the over 300 different isotopes it sells that are critical to medical, commercial, research, and national security applications. Previous shortages of some isotopes, such as helium-3, an isotope used to detect radiation at seaports and border crossings, highlight the importance of managing supplies of and demand for critical isotopes. Prior reports by GAO and others highlighted risks and challenges faced by the Isotope Program, such as assessing demand for certain isotopes. GAO was asked to determine (1) which isotopes are produced, sold, or distributed either by the Isotope Program or NNSA and how the two agencies make isotopes available for commercial and research applications; (2) what steps the Isotope Program takes to provide isotopes for commercial and research applications; and (3) the extent to which DOE is assessing and mitigating risks facing the Isotope Program. GAO reviewed DOE and NNSA documents, visited Oak Ridge National Laboratory, and interviewed cognizant agency officials. The Department of Energys (DOE) Isotope Development and Production for Research and Applications program (Isotope Program) provides over 300 different isotopes for commercial and research applications. 
The Isotope Program is responsible for 243 stable isotopes that are no longer produced in the United States but are sold from the program's existing inventory and for 55 radioactive isotopes, called radioisotopes, that the program is able to produce at DOE facilities. An additional 10 isotopes sold by the Isotope Program are provided by the National Nuclear Security Administration (NNSA), a separate agency within DOE, as by-products of its nuclear weapons program. The Isotope Program may be forgoing revenue that could further its mission because of the manner in which it sets prices for commercial isotopes. The Isotope Program determines demand, coordinates production, and sets prices for commercial isotopes. To set prices for radioisotopes, the program considers the full cost of production, including direct costs (e.g., labor costs) and indirect costs (e.g., infrastructure costs). For research applications, isotope prices are set to recover direct costs to reduce prices and encourage research. For commercial applications, prices are set at full cost recovery (of both direct and indirect costs) or at an isotope's market price when a market price higher than full cost recovery already exists. The program, however, has not fully assessed the pricing of most of these isotopes, as required by its 1990 pricing policy. This policy provides latitude for setting prices and states that prices should be assessed annually. Factors that may be considered when establishing prices include the value of an isotope to the customer, demand, and the number of suppliers. The program, however, has not assessed the value of isotopes to customers or defined what factors it will consider when it sets prices for commercial isotopes, including defining under what circumstances it will set prices at or above full cost recovery. As a result, the program does not know if its full-cost-recovery prices are set at appropriate levels so as not to distort the market, and it may be forgoing revenue that could further support its mission. The Isotope Program has begun taking some actions to identify and manage risks to achieving its mission of producing isotopes, but because it has not established clear, consistent program objectives, the program's risk assessment efforts are not comprehensive. Actions the Isotope Program is taking include, among other things, identifying high-priority isotopes and using its revolving fund to mitigate risks from unforeseen events. For example, the Isotope Program has identified five lists of high-priority isotopes (those at risk of supply problems because they are already in short supply or are important to users). Isotope Program officials reported using these lists to set program priorities. The Isotope Program is taking these actions, however, without first establishing clear, consistent objectives. The federal standards for internal control state that a precondition to risk assessment is the establishment of clear objectives. Without clearly defined objectives, the program cannot be assured that it is assessing risks from all sources or that its efforts are focusing on the most significant risks to achieving its mission. Furthermore, without consolidating the multiple high-priority lists, Isotope Program managers may not be directing limited resources to the most important isotopes.
GAO recommends, among other actions, that DOE's Isotope Program define what factors it considers when setting isotope prices, create clear objectives as a basis for risk assessment, and consolidate the lists of high-priority isotopes. DOE stated that it will address GAO's recommendations through the Isotope Program's current efforts to update its pricing policy and develop a strategic plan.
LSC relies heavily on its Office of Compliance and Enforcement (OCE) and its Office of Program Performance (OPP) to carry out activities related to grant awards, grantee program effectiveness, and grantee compliance responsibilities. According to LSC officials, LSC established OCE in 1997 and OPP in 1999 to (1) help ensure compliance with requirements of the LSC Act, and (2) evaluate, fund, monitor, and oversee grantee programs, including quality of services provided. Figure 1 shows staffing levels for OPP and OCE and LSC overall between 1999 and 2009. As shown in figure 2, the Directors of OPP and OCE report to the Vice President for Programs and Compliance, who reports to the LSC President. LSC's President reports to an LSC board composed of 11 members. In April 2010, the 11-member board was undergoing transition, with 1 board member continuing, 6 of the remaining 10 being sworn in during April, 2 board members yet to be named, and 2 others awaiting Senate confirmation. According to the LSC Vice President of Programs and Compliance's goals and objectives document (LSC workplan), the Vice President for Programs and Compliance is responsible for coordinating OPP and OCE; implementing efforts to improve LSC's oversight of grantees; assessing LSC component directors' staffing allocations and assignments; conducting quarterly joint staff meetings and training sessions; and overseeing LSC's internal quality agenda, including providing staff training. In accordance with the LSC workplan, the Vice President for Programs and Compliance also oversees LSC's grantee compliance and program functions, with emphasis on intra-office coordination, improved grantee guidance, and improved grantee follow-up activities by OCE and OPP. According to LSC's policy and the 2009 OPP Procedures Manual, OPP's responsibilities include designing and administering LSC's process for awarding competitive grants, and developing and implementing strategies to improve grantee program quality. In carrying out its responsibilities, OPP is to issue requests for proposals, guide grant applicants through the application process, and evaluate applications against performance criteria. According to the 2008 Roles and Responsibilities of LSC Offices Responsible for Grantee Oversight, OCE is charged with reviewing grantees' compliance with the LSC Act and implementing regulations, responding to inquiries and written complaints concerning grantees received from members of the public or Congress, and providing follow up on the referrals of findings from LSC's Office of Inspector General. In carrying out its responsibilities, OCE is to conduct grantee case service reports and case management system site visits; review grantee compliance with the LSC accounting manual and fiscal-related regulations; review the audited financial statements of grantees; and initiate questioned-cost proceedings as necessary. To increase compliance, OCE is also responsible for issuing corrective action notices to grantees and for following up on corrective action plans through conducting interviews, reviewing grantee corrective action plans, and performing follow-up reviews. Figure 3 presents an overview of LSC grant award process responsibilities as prescribed by LSC's policies and procedures. In addition, the Office of Legal Affairs (OLA) has some responsibilities with respect to LSC's grantee oversight.
Specifically, according to the Roles and Responsibilities of LSC Offices Responsible for Grantee Oversight, OLA, headed by a Vice President of Legal Affairs who reports to LSC's President, is responsible for providing legal services for LSC, such as interpreting statutory and regulatory authorities applicable to LSC grantees and approving contracts prior to award. OPP, OCE, and other operating units seek legal counsel and information from OLA on application of relevant laws and regulations, as well as legal issues arising from oversight and enforcement activities. LSC controls over reviewing and awarding grants are intended to help ensure the fair and equitable consideration of applicants. Recently, LSC has taken action intended to improve controls in this area. For example, LSC enhanced documentation of its grant application evaluation process through its 2010 Reader Guide. In addition, the LSC Grants system contains detailed application evaluation questions based on the LSC Performance Criteria, and LSC has developed training materials and provided training to OPP personnel on the application evaluation process. However, at the time of our review, we found LSC's controls over reviewing grantee applications and awarding grants were deficient in the following areas: documenting grant award decisions, carrying out and documenting management review of grant applications, and using automated grantee data available in the LSC Grants system. These deficiencies increase the risk that LSC may not be considering all relevant information in a consistent manner, limit LSC's ability to explain the results of award decisions, and have resulted in incomplete and inaccurate information in the grantee application evaluations in the LSC Grants system. LSC's grant application evaluation process and basis for the resulting decisions were not clearly documented, including key management discussions in the decision-making process. According to the Standards for Internal Control in the Federal Government, all significant events should be clearly documented and readily available for examination. We found LSC procedures did not require, nor did the staff maintain, a comprehensive record documenting (1) the extent to which management held discussions and considered all available, relevant information in the grant funding decision-making process for each applicant, and (2) that a complete record of the deliberative process (i.e., inputs, discussions, and decisions made) was maintained for the steps leading up to a grant application being funded or denied by LSC. Instead, LSC uses presentation notebooks that draw on multiple data sources, including grant applicant information, and that are prepared for OPP staff funding recommendation presentations to OPP management and later for presentations to LSC management and the LSC President. Final grant award decisions are summarized in a chart initialed by responsible staff, LSC management, and the President, and individual grant award letters are certified by the LSC President. LSC's procedures provided for documenting summaries of grantee application data. Specifically, LSC procedures required a one-page applicant overview and a two-page program summary for each applicant. OPP staff prepare the one-page applicant overview to document (1) information (such as poverty levels) about the applicant's service area, (2) an overall score based on the reviewer's evaluation, and (3) whether there are any special grant conditions, such as those due to prior grantee problems, including noncompliance with LSC regulations.
OPP staff also prepare a two-page program summary that is to document their assessment of the grantee considering past performance as well as information in the application related to the following four performance areas: (1) effectiveness in identifying the most pressing civil legal needs of low- income people in the service area and targeting resources to address those needs, (2) effectiveness in engaging and serving the low-income population throughout the service area, (3) effectiveness of legal representation and other program activities intended to benefit the low-income population in the service area, and (4) effectiveness of governance, leadership, and administration. According to the Vice President for Programs and Compliance, while not explicitly required to do so by current LSC procedures, LSC officials also develop and use other data and analyses in addition to these two summary documents. Specifically, LSC staff prepare other relevant information and record the information in notebooks, such as the results of prior site visits. LSC staff use these notebooks to facilitate discussions with management about prospective grantee awards. However, the extent to which this other relevant information influenced award decisions was not documented. During a part of our review, we were not able to determine the extent to which the information in any of the notebooks we obtained was used or how it was considered in the funding decisions. LSC managers held a series of meetings where funding and award decisions were discussed. Following these meetings, LSC staff prepared a funding decision chart that was initialed by the Director of OPP, Vice President for Programs and Compliance, and the LSC President to document the final funding decisions. This chart, however, does not document how the managers’ consideration of various elements or relative risks contributed to the final decisions. Therefore, this lack of documentation of the factors considered in making these decisions increases the risks that grantee application evaluation and funding decisions may not consider all key, relevant information and makes it difficult to describe the basis for decisions later. LSC has no requirement for carrying out and documenting OPP Director managerial review and approval of competitive grant evaluations or renewals by the OPP primary staff reviewers. According to the Standards for Internal Control in the Federal Government, control activities, such as conducting and documenting reviews, are an integral part of an entity’s stewardship of government resources and achieving effective results. Existing LSC guidance, such as the 2010 Reader Guide, provides that each application be reviewed against specific elements (derived from the LSC Performance Criteria and the ABA Standards for the Provision of Civil Legal Aid). The Guide is used in conjunction with an automated evaluation form in LSC Grants that reviewers use to record their assessments of each grant application. However, the guidance does not provide specific steps to carry out or document management review of the application evaluation in the LSC grants system. Consequently, the OPP grant application evaluations we reviewed lacked any evidence in LSC Grants that the OPP Director had reviewed them. The OPP Director did not sign any of the evaluation forms we reviewed in the LSC grants system, a key internal control activity. 
Specifically, we selected a probability sample of 80 grantees from a population of 140; the sample encompassed 57 renewal applications and 23 competitive grant applications. We found that none of the 80 (100 percent) grant files contained any documentation demonstrating that managers had reviewed and approved the OPP staff's evaluation of the application. This lack of documented management review impairs LSC's ability to identify gaps or incompatible data in the applications or evaluations prior to making the grant award. We found instances where an effective OPP manager's review should have identified and corrected evaluation errors. For example, we identified 14 grant applications where the reviewer incorrectly identified projected expenses for the grant as matching the projected expenditures in another section of the application. LSC Grants is a computer-based application intended to assist LSC in data collection and review of applications submitted in response to an LSC Request for Proposal. However, because LSC's Grants system lacked basic automated controls to ensure integrity over information in the system related to its grants application evaluation process, the system's full capabilities were not utilized. The Standards for Internal Control in the Federal Government provide that entities should have application controls designed to help ensure the completeness and accuracy of transactions. Specifically, we found the data in LSC Grants were erroneous and inconsistent because the system did not have edit checks preventing the OPP staff reader from entering incomplete or incompatible data. The lack of complete and reliable grantee applicant evaluation data in LSC Grants required LSC management to rely instead on inefficient, manual compilation and review of grantee application evaluation data in making decisions about whether to approve and fund a grantee. Our review found that 7 of the 57 renewal grantees' files (12 percent) had input fields that were blank where required information should have been included. Similarly, for 3 of the 23 competitive grantees (13 percent), essential grantee evaluation data were not filled out. We also found numerous instances, for both renewal grantees (15 out of 57, or 26 percent) and competitive grantees (6 out of 23, or 26 percent), where grantees entered data in different parts of the grant application that were inconsistent. In addition, we found one grantee where the grant was to be funded with restrictions on the length of the grant term. However, the field where the reason for this restriction was to be recorded was left blank by OPP staff. According to LSC, the evaluation process relies on both a qualitative and substantive analysis of an applicant's proposal narrative to assess its capacity to provide high-quality legal services. OPP staff's judgment inherent in the substantive evaluation cannot be flagged or assessed by information validation fields. Nonetheless, LSC acknowledged that the consistency and accuracy of information within the application can be addressed. LSC management also informed us that it is reviewing the LSC Grants system for improvements. LSC's external auditor's 2008 report identified similar issues concerning inconsistent documentation of grantee evaluations. The auditor noted incomplete data in the grants system, used prior to LSC Grants, for 12 out of 32 grantee evaluations.
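The blank fields and incompatible entries described above are the kinds of problems that basic edit checks are designed to catch before an evaluation is saved. The sketch below shows, in general terms, what such checks might look like; the field names and the cross-check between projected expenses and the budget section are hypothetical illustrations based on the errors noted in our review, not LSC Grants' actual data model.

```python
# Illustrative edit checks for a grant application evaluation record.
# Field names and rules are hypothetical, based on the errors noted above.

def validate_evaluation(record: dict) -> list:
    """Return a list of problems that should block saving the evaluation."""
    problems = []

    # Completeness check: required scoring and narrative fields must be filled.
    for field in ("overall_score", "service_area"):
        if not record.get(field):
            problems.append(f"required field '{field}' is blank")

    # Conditional check: a restricted grant term must include a recorded reason.
    if record.get("grant_term_restricted") and not record.get("restriction_reason"):
        problems.append("grant term is restricted but no reason was entered")

    # Consistency check: projected expenses should match the figure reported
    # in the budget section of the application.
    if record.get("projected_expenses") != record.get("budget_projected_expenditures"):
        problems.append("projected expenses do not match the budget section")

    return problems

# Hypothetical record exhibiting all three kinds of errors noted in our review.
sample = {"overall_score": None, "service_area": "PA-1",
          "grant_term_restricted": True, "restriction_reason": "",
          "projected_expenses": 950_000, "budget_projected_expenditures": 925_000}
print(validate_evaluation(sample))
```

Checks of this kind would not substitute for the qualitative judgment OPP staff apply to an applicant's narrative, which LSC correctly notes cannot be validated automatically, but they would flag incomplete or inconsistent entries before management relies on the data.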
The auditor recommended that the Office of Program Performance establish procedures to ensure that evaluation forms are properly completed before grant awards are made. While LSC recognized the importance of grantee site visits and had established overall policies and reasonable risk-based criteria to be used for such visits, it had not yet established detailed procedures on (1) conducting and documenting site visit selection, (2) timely completion of site visit reports, and (3) timely resolution of site visit recommendations and corrective actions. Control weaknesses hampered effective grantee site visits. These control weaknesses hinder LSC's ability to effectively oversee its grantees' compliance with LSC regulations and limit its ability to ensure that grantees are visited according to their relative risk levels and that any compliance issues are identified and resolved in a timely manner. We observed good site visit planning techniques and interview execution in Philadelphia, Pennsylvania, and Indianapolis, Indiana. We also noted that LSC has an overall goal that provides for grantee site visits at least once every 3 years; however, LSC did not have procedures detailing how identified risk factors are to be used in a risk-based determination of which grantees should receive site visits by either OPP or OCE personnel. According to the Standards for Internal Control in the Federal Government, management's internal control assessment should consider identified risks and their possible effect. By not formally documenting specific procedures on how risk assessment criteria are to be used in decisions about which sites to visit, LSC does not have adequate assurance that grantees with the greatest risk of noncompliance receive priority attention and oversight. In a prior GAO report, we recommended that LSC develop and implement an approach for selecting grantees for internal control and compliance reviews that is founded on risk-based criteria, uses information and results from oversight and audit activities, and is consistently applied. Although LSC has identified risk factors to consider, as of April 2010 it did not yet have procedures for how each risk factor is to be applied or considered when determining which grantee sites to visit. OPP officials told us that their program liaisons make recommendations for visits, which are reviewed by the three OPP regional teams (North, South, and West). Then OPP meets as a group to discuss the teams' recommendations and make preliminary recommendations for the next year's visits. The OPP director and deputy director meet with the OPP regional teams when those recommendations are made and with all of OPP program staff to make final recommendations. After consultations with OCE, OPP's recommendations are sent to the Vice President for Program Performance and Compliance. The deputy director and director approve the final list when they send it to the Vice President for Program Performance and Compliance for approval. However, we found no documentation demonstrating whether regional teams appropriately applied the risk factors, nor whether risk assessment results were summarized consistently in making the final recommendations for site visits. As shown in table 1, our review of all OCE site visit reports on grantee compliance, completed between October 2007 and July 2009, showed that 15 of 22 exceeded the 120-day goal set for reporting on grantee compliance.
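Once fieldwork completion and report issuance dates are recorded, monitoring performance against the 120-day reporting goal (set out in OCE's procedures, discussed below) is a simple elapsed-days calculation, as the sketch below illustrates. The grantee names and dates are hypothetical; only the 120-day threshold comes from OCE's procedures.

```python
# Illustrative timeliness tracking against the 120-day reporting goal.
# Grantees and dates are hypothetical examples.
from datetime import date

GOAL_DAYS = 120

site_visits = [
    {"grantee": "Grantee A", "fieldwork_end": date(2009, 1, 15),
     "report_issued": date(2009, 4, 30)},
    {"grantee": "Grantee B", "fieldwork_end": date(2009, 2, 1),
     "report_issued": date(2009, 9, 10)},
]

for visit in site_visits:
    elapsed = (visit["report_issued"] - visit["fieldwork_end"]).days
    status = "on time" if elapsed <= GOAL_DAYS else f"{elapsed - GOAL_DAYS} days late"
    print(f'{visit["grantee"]}: report issued after {elapsed} days ({status})')
```

Routinely producing this kind of aging list for pending reports, and for any related OLA opinion requests, would give managers early warning of reports at risk of exceeding the goal rather than identifying delays after the fact.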
OCE’s Procedures Manual provides that OCE’s grantee compliance site visit final reports are to be issued within 120 days of each site visit trip’s completion. According to LSC, the OCE Procedures Manual was updated in April 2008 to establish a time frame of 120 days for completing site visits. Overall, our analysis showed that the average length of time required to complete the OCE site visit reports was about 150 days. Delays in formally communicating grantee site visit findings to grantees can delay grantees’ resolution of any internal control weaknesses (for example, if the grantees did not inquire about prospective income during client intake) and legal noncompliance issues identified during the site visits. Information on any continuing or serious internal control or compliance issues can be critical in making grantee funding decisions. According to LSC, there are informal means by which LSC informs grantees of preliminary findings. For example, OCE staff generally provides grantees with technical assistance in correcting compliance matters during site visits to facilitate immediate correction. LSC officials further stated that at the end of the visit staff hold an exit conference to advise the grantee of the preliminary findings and discuss how to make the necessary corrections. The LSC financial statement auditor also reported in 2010 that its review of OCE site visit reports found that 2009 grantee site visit reports were not issued on time, based on OCE’s 120 day goal. For example, the auditor reported that one out of the nine reports they sampled was issued 289 calendar days after the completion of fieldwork. One of the keys to completing timely OCE site visit reports within prescribed time frames is obtaining timely OLA opinions on LSC regulations. An LSC Director told us that site visit reports are held pending receipt of any requests to OLA for a legal opinion related to a possible noncompliance issue. However, LSC did not have specific procedures defining expected time frames and for overseeing OCE receipt of OLA opinions within such time frames. As of January 28, 2010, OLA had issued 47 opinions since January 2004. The average time elapsed from the date of the request for an OLA opinion and the issuance of the opinion was approximately 200 days. Of those 47 opinions, over 50 percent (25) took longer than 120 days to issue, with an average delivery time of approximately 334 days. As of January 28, 2010, two opinions had been outstanding for 721 and 603 days, respectively, and two other reports were not complete due to a pending legal opinion on prospective income, which was issued 465 days after being requested. While our review found indications that cognizant LSC components share visit reports, LSC did not require and document its process for tracking and assessing actions in response to site visit recommendations and corrective actions. According to the Standards for Internal Control in the Federal Government, an entity’s internal control activities should include monitoring control improvement efforts. It further provides that such controls should assess the quality of performance over time and ensure the findings of audits and other reviews are promptly resolved. Over time, the trend of the number and types of findings, recommendations, and corrective actions, if analyzed and used appropriately, should provide information that could assist LSC management in determining and addressing any issues concerning the quality of grantee program performance and compliance. 
Consequently, the absence of required, documented procedures for tracking OPP and OCE recommendations and corrective actions reduces LSC’s assurance that site visit results are monitored for necessary corrective action and appropriately shared among cognizant LSC component organizations. According to an OIG manager and the OPP and OCE Directors, OPP and OCE share information on site visit recommendations through the LSC intranet, where site visit reports are posted. Although not required by LSC procedures, according to an LSC Director, OCE submitted site visit reports on grantee compliance, including recommendations and needed corrective actions, to OPP staff responsible for grant awards and monitoring of grantee program performance. According to LSC’s President, OPP staff are in regular contact with grantee executive directors and other program management, and program engagement visits are often used as a vehicle for following up on recommendations. The OCE Director told us that OPP staff provided program quality information obtained through their review of site visit reports to OCE for consideration in grantee compliance reviews. Although staff may share information about site visits, an LSC official who is responsible for monitoring program performance told us that LSC does not consider or track whether recommendations are open or closed, but rather provides the recommendations as possible best practices for grantees to consider implementing as their programs develop. Similarly, an LSC Director told us that the site visit report recommendations are not tracked for remediation purposes or for trending and analysis by LSC because these recommendations are considered best practices that may or may not be implemented. The Vice President for Program Performance and Compliance said that OPP prioritizes the recommendations included in its reports and only includes what OPP believes to be the most important recommendations. By undertaking the effort to make recommendations and prioritizing them to highlight important areas, but not tracking their completion and analyzing the results, LSC is missing an opportunity to assess the extent of progress made and leverage the value of these recommendations. LSC’s performance measures were not aligned with its core activities, nor were they linked to the specific offices responsible for making grant awards and monitoring grantee program performance and grantee compliance. Further, LSC did not have procedures in place to periodically reassess measures to ensure they are current. According to GAO’s Executive Guide: Effectively Implementing the Government Performance and Results Act, as a best practice, entities should assess performance to ensure that programs meet intended goals, assess the efficiency of processes, and promote continuous improvement. The guide further provides that performance measures should be linked directly to the organizational components that have responsibility for making programs work and that routinely revisiting and updating an entity’s performance measures would help ensure they are relevant in providing feedback about whether the entity is achieving its current objectives. Performance measures that are not linked to the responsible office hinder accountability for program results, including the extent to which LSC organizational components contribute toward LSC’s mission and where improvements are needed, and limit transparency and accountability to LSC’s Board on any organizational performance issues.
LSC issued a Strategic Directions plan in 2006 laying out LSC’s performance measures. However, the plan’s performance measures did not cover the full range of LSC’s key responsibilities for grant awards and for monitoring grantee program performance and grantee compliance. For example, the plan did not include metrics to measure performance in two core LSC activities related to these responsibilities: (1) identifying and targeting LSC’s own resources to address the most pressing civil legal needs of low-income individuals in the nation, and (2) ensuring that grantees use the funding they receive to serve the low-income population throughout the nation. In addition, not all measures in LSC’s strategic plan were linked to specific LSC components. For example, LSC did not link scores on competitive grant evaluations with either OPP’s or OCE’s performance, even though these offices have responsibility for grantee program quality and compliance oversight. Similarly, LSC did not link the performance measure on the number of technical assistance and training sessions conducted by LSC to OPP, even though OPP has organizational responsibility for such technical assistance. Further, we found LSC did not have procedures providing for periodic reassessment of key metrics to ensure they reflect up-to-date LSC mission priorities and objectives. According to the Chief Administrative Officer, LSC has recognized that its existing performance measures should be revised and periodically reassessed to ensure they are up-to-date and has begun actions in this regard. For example, since 2006, management has been developing a performance measure to obtain current information on “timeliness and degree of resolution of OCE corrective action notices.” LSC reviewed the results of a number of follow-up visits to confirm grantee resolution of OCE corrective action notices. The review found that the existing measure, which used the corrective action notices as an indicator of timeliness of resolution, was insufficient; LSC determined that, without site visit verification that the original site visit findings had been resolved, the performance measure could not be reported on. LSC’s employee handbook provides overall policy direction over its human capital practices. However, we found existing procedures were flawed in several key respects concerning assessing staffing needs, evaluating performance, and providing appropriate internal control training. Specifically, LSC did not (1) systematically assess short- and long-term workload and staffing needs in relation to the corporation’s strategic goals and objectives, (2) provide required performance reviews for OPP staff in 3 of the 6 years we reviewed and for OCE staff in 2 of the 6 years we evaluated, or (3) provide formal training for current and incoming staff on internal controls. Standards for Internal Control in the Federal Government provides that all personnel are to possess and maintain a level of competence enabling them to effectively accomplish their assigned duties. In addition, Human Capital Principles for Effective Strategic Workforce Planning provides that effective staffing assessments should include short- and long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals.
Strategic human capital practices are key to ensuring that an entity (1) has the staff capabilities needed to meet short- and long-term goals, (2) can effectively address performance problems, and (3) has staff who are trained in internal controls and related sound management practices. Our review found that LSC did not have procedures for assessing staffing needs. According to the Chief Administrative Officer and Director of Human Resources, LSC does not use mission priorities to establish staffing needs. Instead, the Vice President for Programs and Compliance said OPP and OCE consider workload needs and required staffing levels when preparing their budgets. According to the LSC employee handbook, LSC’s policy is that employee performance is to be evaluated annually at the beginning of the calendar year by the supervisor of record, based on job performance in the prior year. OPP staff stated that it is through the annual employee performance evaluation process that training needs are identified. However, LSC did not have procedures for ensuring that employee performance reviews were conducted and training needs addressed. For calendar years (CY) 2003 and 2005, OPP and OCE personnel did not receive annual performance evaluations, and for CY 2008 OPP personnel did not receive performance evaluations. The Director of Human Resources stated that LSC did not follow its employee performance evaluation policy in 2003 and 2005 because of concerns about the appraisal process; as a result, LSC’s President suspended the appraisal process for those years. In 2008, according to the OPP director, OPP personnel did not receive appraisals because of a concern that evaluations would have to be done by a combination of people, none of whom had complete responsibility for overseeing the work throughout the year. Without employee performance appraisals for all of its staff, LSC has limited its opportunities to encourage high performance, identify training needs, and communicate with staff. Although LSC had policies requiring approval and funds availability determination before issuing contracts for its grant activities and programs, it had not established specific funds tracking procedures to ensure that necessary approvals were obtained and funds were available before awarding contracts. Lacking effective contract approval and fund availability controls, LSC is at increased risk of improper contract awards and undetected budget shortfalls. LSC’s Administrative Manual requires approvals from OLA, the Comptroller, and, if the contract is over $10,500, the President, before contract award. However, our review found that LSC did not obtain documented approvals from OLA, the Comptroller, and the LSC President, a critical accountability control, for any of the nine contracts over $10,500 issued in fiscal years 2008 and 2009. Our review of the nine contracts that exceeded the $10,500 presidential approval threshold revealed that LSC lacked any documentation showing that the required Contract Approval Form was completed before the contracts were awarded. The LSC Chief Administrative Officer (CAO) told us that verbal approvals were given by the President for five of the contracts. Of the remaining four contracts, one had the LSC President’s approval on the contract itself (but not the Contract Approval Form), while the remaining three contracts did not have any evidence of approvals.
The LSC Administrative Manual, issued in February 2005, requires review and approval of all contracts before award by (1) office directors, to ensure that the contracts are within budgetary limitations; (2) OLA, for legal assurance; (3) the Comptroller, to ensure the requirements of the Administrative Manual were followed and to start a purchase order; and (4) if over $10,500, the LSC President. In accordance with the LSC Administrative Manual, a Contract Approval Form, which shows all approvals by designee signature, must be used to meet documentation requirements and be retained for all contracts awarded. Two contracts that did not follow LSC’s approval process resulted in an unplanned budgetary adjustment for fiscal year 2009. Specifically, we found two Office of Information Technology (OIT) contracts supporting grants management and administration that were not properly authorized and for which fund availability was not determined prior to contract award, which resulted in an LSC budget shortfall of over $70,000 in fiscal year 2009. According to the Director of OIT, after verbal approval by the LSC CAO, these contracts were executed by the Director of OIT without taking any action to determine that sufficient monies were available to fund the contracts and without obtaining the required prior approval of OLA, the Comptroller, and the LSC President. LSC’s Comptroller informed the Board of Directors, President, and Inspector General of OIT’s overspending and asked for and received a $70,000 internal budgetary adjustment on August 31, 2009, to transfer budgeted funds from LSC’s capital expenditures account to the consulting budget. Consistent with our findings, the LSC financial statement auditor reported in its January 2010 Report of Deficiencies in Internal Control Over Financial Reporting and Other Matters for 2009 that the Contract Approval Forms were not used as required by the LSC Administrative Manual and that there was no evidence of approval by OLA. The auditor recommended in January 2010 that LSC implement procedures to ensure that policies for contract awards are followed. LSC recently revised its Administrative Manual, effective October 1, 2009, to include a Contract Approval Form, with a provision that the LSC President approve all contracts over $10,500. Further, the LSC CAO stated that training was provided for all administrative staff on the proper procedures to follow for processing contracts. Such training should help ensure that a Contract Approval Form accompanies all LSC contracts and that OLA and the Comptroller both review and document approval of all contracts and sign off on the Contract Approval Form before contract execution. However, the training may be of limited value unless LSC also establishes specific, detailed procedures on the steps required to ensure that all necessary approvals and fund availability certifications are carried out and documented. Effective governance, accountability, and internal control are key to maintaining public trust and credibility. As such, identifying and implementing effective internal controls will assist LSC in ensuring that the federal funds it receives are being used efficiently and effectively. LSC has taken actions to improve its governance and accountability practices by implementing or partially implementing all 17 of the recommendations from our August 2007 and December 2007 reports.
Progress has continued since our prior testimony in October 2009: LSC has implemented two additional recommendations and continues to take action on the remaining recommendations. However, several key recommendations related to LSC’s grantee oversight responsibilities remain to be fully implemented. The control deficiencies we identified, along with the continuing nature of several related deficiencies first identified nearly 3 years ago, are indicative of weaknesses in LSC’s overall control environment. A weak control environment limits LSC’s ability to effectively manage its grant award and grantee performance oversight responsibilities. As such, it will be important for the LSC President and Board of Directors to continue to set a “tone at the top” supportive of establishing and maintaining effective internal control not only by managers but also by personnel throughout the entity’s program operations. In this regard, LSC would benefit from an entitywide internal control assessment, including whether the risks associated with grantee selection are effectively considered, whether past recommendations and corrective actions are properly tracked, and whether effective controls are in place over performance measurement, performance evaluation, and contract awards. LSC could also strengthen its overall control environment by providing training to staff throughout the entity on how internal controls, when functioning as intended, are integral to the achievement of the entity’s mission objectives. In the near term, it will be important for LSC leadership to direct immediate action to address the continuing weaknesses, as well as those identified in our current review. For the long term, LSC will need to focus on sustaining its commitment to an effective overall system of internal control in order to establish a solid basis for effectively accomplishing its core mission of enabling grantees to provide legal services to individuals who otherwise could not afford such services. In order to improve key control processes over grant awards and monitoring of grantee program performance and grantee compliance, we recommend that the President of LSC and the Vice President for Programs and Compliance take the following 17 actions:
Grant Application Processing and Award
- Develop and implement procedures to provide a complete record of all data used, discussions held, and decisions made on grant applications.
- Develop and implement procedures to carry out and document management’s review and approval of grant evaluation and award decisions.
- Conduct and document a risk-based assessment of the adequacy of internal control over the grant evaluation, award, and monitoring process from the point that the Request for Proposal is created through award and grantee selection.
- Conduct and document a cost-benefit assessment of improving the effectiveness of application controls in LSC Grants such that the system’s information capabilities could be utilized to a greater extent in the grantee application evaluation and decision-making process.
- Develop and implement procedures to ensure that grantee site visit selection risk criteria are consistently used and to provide for summarizing results by grantee.
- Establish and implement procedures to monitor OCE grantee site visit report completion against the 120-day time frame provided in the OCE Procedures Manual.
- Execute a study to determine an appropriate standard time frame for OLA opinions to be developed and issued.
- Develop and implement procedures to monitor completion of OLA opinions related to OCE site visits against the target time frame for issuing opinions.
- Develop and implement procedures to provide a centralized tracking system for LSC’s recommendations to grantees identified during grantee site visits and the status of grantees’ corrective actions.
- Develop and implement procedures to link performance measures (1) to specific offices and their core functions and activities and (2) to LSC’s strategic goals and objectives.
- Develop and implement procedures for periodically assessing performance measures to ensure they are up-to-date.
- Develop and implement procedures to provide for assessing all LSC component staffing needs in relation to LSC’s strategic plan and strategic human capital plan.
- Develop and implement a mechanism to ensure that all LSC staff receive annual performance assessments.
- Develop and implement a process to monitor contract approvals to ensure that all proposed contracts are properly approved before award.
- Develop and implement procedures for contracts at or above established policy thresholds to ensure the LSC President provides written approval in accordance with policy before contract award.
- Develop and implement procedures to ensure budget funds are available for all contract proposals before contracts are awarded.
- Develop and implement procedures for providing and periodically updating training for LSC management and staff on applicable internal controls necessary to effectively carry out LSC’s grant award and grantee performance oversight responsibilities.
- Establish a mechanism to monitor progress in taking corrective actions to address recommendations related to improving LSC’s grant award, evaluation, and monitoring processes.
We provided copies of the draft report to LSC’s management for comment prior to finalizing the report. We received a written comment letter from LSC’s President on behalf of LSC’s management (see appendix III). In its written comments, LSC agreed with our findings and recommendations and identified specific actions it has taken and plans to take to implement these recommendations. LSC also provided technical comments, which we considered and incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies of the report to other appropriate congressional committees and the president of LSC. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions or would like to discuss this report, please contact me at (202) 512-9095 or by e-mail at raglands@gao.gov. Major contributors to this report are listed in appendix IV. Our reporting objectives were to determine the extent to which the Legal Services Corporation (LSC) properly implemented key internal controls in awarding grants and overseeing grantee program performance; measured its performance in awarding grants and overseeing grantees; evaluated staffing needs for grant awards management and grantee performance oversight; and followed appropriate budget execution processes for awarding contracts related to grant awards and grantee performance oversight.
To address the first two objectives, we interviewed current members of LSC’s management and staff, staff in LSC’s Office of Inspector General (OIG), and the audit firm employed by the OIG to obtain information on the functions and processes of LSC’s grant awards and monitoring of grantee program performance and grantee compliance. We also reviewed LSC documentation on internal control activities related to the awarding of grants and oversight of grantee programs, including policy manuals, audit reports, and management reports. In addition, we selected a probability sample of 80 out of 140 grantees and reviewed related grant applications and application evaluations (for the 2009 funding year), and compared evaluation results with instructions in LSC Grants, a computer-based grants application system. Results based on probability samples are subject to sampling error. The sample we drew for our review is only one of a large number of samples we might have drawn. Because different samples could have provided different estimates, we express our confidence in the precision of our particular sample results as a 95 percent confidence interval. This is the interval that would contain the actual population values for 95 percent of the samples we could have drawn. All survey estimates in this report are presented along with their margins of error. We analyzed the document setting out LSC-wide and component-specific goals and performance measures and compared this to federal guidance on performance measurement. We also observed LSC site visits at two grantees in Philadelphia and Indianapolis. To obtain information on LSC controls for assessing staffing needs for its grants functions, we interviewed LSC management and reviewed policies and procedures for evaluating staffing needs, training, and professional development, and reviewed relevant literature. We compared LSC’s staffing needs assessment processes to federal best practices in workforce planning principles. To obtain information on controls over contract approval and budget execution, we reviewed LSC’s administrative policy and procedure manual and consolidated operating budget guidance, documented budget execution requirements, and tested contracts for proper approval. For each of our objectives, we compared the information obtained with federal best practices in internal control in GAO’s Standards for Internal Control in the Federal Government. We conducted our work in Washington, D.C.; Indianapolis, Indiana; and Philadelphia, Pennsylvania, from March 2009 to May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient appropriate evidence to provide a reasonable basis for our findings and conclusions. We believe the evidence obtained provided a reasonable basis for our audit findings and conclusions. Our August 2007 report recommendations to improve and modernize the governance processes and structure of LSC, along with our views on the status of LSC’s efforts to implement these recommendations (as of March 2010), are summarized in table 2. LSC data, which we obtained and analyzed as part of our follow-up work conducted between May 2009 and March 2010, showed that the board had fully implemented five of the eight recommendations, and had taken some action on the remaining three recommendations. 
Our August 2007 report recommendations to improve and modernize key management processes at LSC, along with the status of LSC’s efforts to implement those recommendations (as of March 2010), are summarized in table 3. Our December 2007 report recommendations to improve LSC’s internal control and oversight of grantees, along with our views on the status of LSC’s efforts to implement those recommendations (as of March 2010), are summarized in table 4. In addition to the contact named above, Kimberley A. McGatlin, Assistant Director; Lisa Crye; Patrick Frey; Cole Haase; Bernice M. Lemaire; Mitch Owings; Melanie Swift; and Carrie Wehrly made key contributions to this report. F. Abe Dymond, Lauren S. Fassler, and Justin Fisher provided technical assistance. | The Legal Services Corporation (LSC) was created as a private, nonprofit corporation to support legal assistance for low-income individuals on civil legal matters, primarily through federal grants and is primarily funded through federal appropriations. Effective internal controls over grant awards and oversight of grantees' performance are critical to LSC's mission. GAO and the LSC Inspector General have previously reported weaknesses and made recommendations. GAO's objectives for this report were to determine the extent to which LSC (1) implemented key internal controls in awarding and overseeing grantees, (2) measured its performance, (3) evaluated staffing needs, and (4) adhered to its budget execution processes. GAO analyzed key records and prior recommendations as well as interviewed LSC officials regarding LSC's internal control and performance frameworks, staffing, and contract processes. Although LSC's controls over reviewing and awarding grants are intended to help ensure fair and equitable consideration, they need improvement. Final award and fund decisions are documented and approved; however, LSC's grant application evaluation process and associated decisions were not documented, including key management discussions in the evaluation process. This lack of documentation of factors considered in making these decisions increases the risk that grantee application evaluation and funding decisions may not consider all key relevant information and makes it difficult to describe the basis for decisions later. In addition, LSC has no requirement for carrying out and documenting managerial review and approval of competitive grant evaluations or renewals, limiting its ability to identify gaps or incompatible data in applications. Although LSC has efforts underway to ensure it visits all grantee sites at least once every 3 years, LSC did not consistently or explicitly document the application of risk criteria when selecting which grantees to visit, complete timely site visit reports, or track the recommendations from the site visits. These weaknesses hindered LSC's ability to effectively oversee grantees. LSC is not required to follow the Government Performance and Results Act but has developed a Strategic Directions document with some performance measures. However, these measures do not reflect all of LSC's core activities and are not linked to its two primary offices for awarding and overseeing grants. Therefore, LSC cannot effectively measure its performance in several key dimensions, such as identifying and targeting resources in addressing the most pressing civil legal needs of low-income individuals across the nation. 
LSC has not systematically assessed its long-term staffing needs to achieve strategic goals and objectives, which could help ensure it has the staff capabilities needed to meet its short- and long-term goals. LSC has not consistently provided performance reviews for all of its staff, limiting opportunities to encourage high performance, identify training needs, and communicate with staff. At times, LSC did not adhere to its budget execution process in awarding contracts supporting its key grant-making responsibilities. Because officials did not follow LSC's approval controls for two contracts and there was a breakdown in tracking funds, LSC had a budget shortfall of $70,000 in 2009. Missing or flawed internal controls limit LSC's ability to effectively manage its grant award and grantee performance oversight responsibilities. Although LSC has taken steps to address all 17 GAO recommendations identified in prior work, several have yet to be fully addressed. In the near term, it will be important for LSC leadership to address both current and continuing weaknesses. For the long term, LSC will need to focus on strengthening its overall system of internal controls in order to establish a solid basis for effectively accomplishing its core mission. |
FMCSA’s primary mission is to reduce the number and severity of crashes involving large commercial trucks and buses conducting interstate commerce. It carries out this mission by issuing, administering, and enforcing federal motor carrier safety and hazardous materials regulations and by gathering and analyzing data on motor carriers, drivers, and vehicles, among other things. FMCSA also takes enforcement actions and funds and oversees enforcement activities at the state level through Motor Carrier Safety Assistance Program grants. For-hire motor carriers are required to register with FMCSA and obtain federal operating authority before operating in interstate commerce. Applicants for passenger carrier operating authority must submit certain information to FMCSA, including contact information and a U.S. Department of Transportation (DOT) number, and must certify that they have in place mandated safety procedures. After publication of the applicant’s information in the FMCSA Register, a 10-calendar-day period begins in which anyone can challenge the application. Within 90 days of the publication, the carrier’s insurance company must file proof of the carrier’s insurance with FMCSA. Applicants must also designate a process agent, a representative upon whom court orders may be served in any legal proceeding. After FMCSA has approved the application, insurance, and process agent filings, and the protest period has ended without any protests, applicants are issued operating authority. FMCSA ensures that carriers, including motor coach carriers, comply with safety regulations primarily through compliance reviews of carriers already in the industry and safety audits of carriers that have recently started operations. Compliance reviews and safety audits help FMCSA determine whether carriers are complying with its safety regulations and, if not, take enforcement action against them, including placing carriers out of service. FMCSA makes its compliance determination based on performance in six areas: one area is the carrier’s crash rate, and the other five areas involve the carrier’s compliance with regulations, such as insurance coverage, driver qualifications, and vehicle maintenance and inspections. Carriers are assigned one of three Carrier Safety Ratings based on their compliance with the Federal Motor Carrier Safety Regulations (FMCSR). These ratings are “satisfactory,” for a motor carrier that has in place and functioning adequate safety management controls to meet federal safety fitness standards; “conditional,” for a motor carrier that does not have adequate safety management controls in place to ensure compliance with the safety fitness standard, which could result in a violation of federal safety regulations; and “unsatisfactory,” for a motor carrier that does not have adequate safety management controls in place to ensure compliance with the safety fitness standard, which has resulted in a violation of federal safety regulations. Carriers receiving an unsatisfactory rating have either 45 days (for carriers transporting hazardous materials in quantities that require placarding or transporting passengers) or 60 days (for all other carriers) to address the safety concerns. If a carrier fails to demonstrate it has taken corrective action acceptable to FMCSA, FMCSA will revoke its new entrant registration and issue an out-of-service order, which prohibits the carrier from operating until the violations are corrected.
Further fines are assessed if a carrier is discovered operating despite the out-of-service order. Federal law requires new carriers to undergo a new-entrant safety audit within 18 months of when the company begins to operate. Carriers are then monitored on an ongoing basis using various controls that include, but are not limited to, annual vehicle inspections and driver qualification regulations. However, FMCSA may suspend a company’s or vehicle’s operation at any time by ordering it out of service if it determines that an imminent safety hazard exists. (An imminent hazard means any condition of vehicle, employee, or commercial motor vehicle operations which substantially increases the likelihood of serious injury or death if not discontinued immediately.) In addition, FMCSA orders carriers out of service for failing to pay civil penalties levied by FMCSA, failing to take required corrective actions related to prior compliance reviews, or failing to schedule a safety audit. Out-of-service carriers are supposed to cease operations and not resume operations until FMCSA determines that they have corrected the conditions that rendered them out of service. If a carrier fails to comply with or disregards an out-of-service order, FMCSA may assess a civil monetary penalty each time a vehicle is operated in violation of the order. FMCSA and state law enforcement agencies use several methods to ensure that carriers ordered out of service, including motor coach companies, do not continue to operate. For example, FMCSA and its state partners monitor data on roadside inspections, moving violations, and crashes to identify carriers that may be violating an out-of-service order. FMCSA will visit some suspect carriers that it identifies by monitoring crash and inspection data to determine whether those carriers violated their orders. Also, the Commercial Vehicle Safety Alliance recently began to require inspectors to check during roadside inspections for carriers operating under an out-of-service order and to take enforcement action against any that are. However, given the large size of the industry, the nation’s extensive road network, and the relatively small size of federal and state enforcement staffs, it is difficult to catch motor coach carriers that are violating out-of-service orders. In addition, some carriers change their identities by changing their names and obtaining new DOT numbers (these carriers are generally referred to as reincarnating carriers) to avoid being caught. Our analysis of FMCSA data for fiscal years 2007 and 2008 identified 20 motor coach companies that likely reincarnated from “out-of-service” carriers. This represents about 9 percent of the approximately 220 motor coach carriers that FMCSA ordered out of service for those fiscal years. The analysis was based on two or more exact matches of data for the new entrant with the data for the out-of-service carriers in the following categories: company name, owner/officer name, address, phone number, cell phone number, fax number, vehicle identification number, and driver names. These 20 motor coach companies registered with FMCSA before FMCSA developed processes specifically for detecting reincarnated bus companies, which were established subsequent to the Sherman, Texas, crash (see next section).
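To make the matching approach concrete, the sketch below shows one way such a comparison could be implemented: a new entrant is flagged as a possible reincarnation when two or more of its registration fields exactly match those of an out-of-service carrier. It is illustrative only; the field names, sample records, and threshold mirror the description above but are assumptions, not the actual layout of the FMCSA data.

```python
# Illustrative sketch only: hypothetical field names and records, not FMCSA's data layout.
# A new entrant is flagged when two or more fields exactly match an out-of-service carrier.
MATCH_FIELDS = ["company_name", "owner_name", "address", "phone",
                "cell_phone", "fax", "vin", "driver_name"]

def count_exact_matches(new_entrant: dict, out_of_service: dict) -> int:
    """Count fields that are present in both records and identical."""
    return sum(
        1
        for field in MATCH_FIELDS
        if new_entrant.get(field) and new_entrant.get(field) == out_of_service.get(field)
    )

def likely_reincarnations(new_entrants, out_of_service_carriers, threshold=2):
    """Yield (new entrant, out-of-service carrier) pairs with at least `threshold` matching fields."""
    for ne in new_entrants:
        for oos in out_of_service_carriers:
            if count_exact_matches(ne, oos) >= threshold:
                yield ne, oos

# Hypothetical records for demonstration.
new = [{"company_name": "ABC Tours LLC", "phone": "555-0100", "fax": "555-0101"}]
old = [{"company_name": "ABC Bus Lines", "phone": "555-0100", "fax": "555-0101"}]
for ne, oos in likely_reincarnations(new, old):
    print("Possible reincarnation:", ne["company_name"], "<->", oos["company_name"])
```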
The number of potential reincarnated motor coach carriers is understated because (1) our analysis was based on exact matches, so it could not find links if abbreviations were used or typos occurred in the data; (2) FMCSA provided us data on vehicles and drivers only when an accident or inspection took place, so the data do not include the entire population of vehicles or drivers for either new entrants or out-of-service carriers; and (3) our analysis could not identify owners who purposely provided FMCSA bogus or otherwise deceptive information on the application (e.g., ownership) to hide the reincarnation from the agency. Although the number of reincarnated motor coach carriers that we could identify was relatively small, the threat these operators pose to the public has proven deadly. According to FMCSA officials, under the registration and enforcement policies in place at the time of the Sherman, Texas, crash, reincarnation was relatively simple to do and hard to detect. As a result, motor coach carriers known to be safety risks were continuing to operate, such as the company that was involved in the bus crash in Sherman, Texas. Five of the reincarnated carriers we identified were still operating as of May 2009. Our investigation found that one of them had not received a safety evaluation and that two carriers had been given a conditional rating after the agency determined their safety management controls were inadequate. The remaining two motor coach carriers were deemed satisfactory in an FMCSA compliance review because FMCSA inspectors were likely not aware of the potential reincarnations. We referred all five companies to FMCSA for further investigation. Based on our review of FMCSA data, we found that the agency had already identified six of the 20 reincarnated motor coach carriers and ordered them out of service. The agency discovered them while performing crash investigations (as in the case of the bus accident in Sherman, Texas), compliance reviews, or other processes. In addition, new carriers are subject to a safety audit within 18 months. Several of the reincarnated carriers we identified were small businesses located in states neighboring Mexico and making trips across the border. Our investigation also determined that all of the reincarnated motor coach carriers we identified were directly related to companies that received fines for safety problems shortly before being ordered out of service. Based on our analysis of the FMCSA data, we believe they reincarnated to avoid paying these fines and to continue their livelihood. For example, we found instances where carriers continued to operate despite being ordered out of service for failure to pay their fines; in fact, one carrier operated for several months after being placed out of service. We believe that these carriers reincarnated into new companies to evade fines and avoid performing the necessary corrective actions. We attempted to contact the owners to ask why they reincarnated but were unable to reach many of them. Of the six owners we did interview, none said that they had shut down their old companies and opened new ones to evade the out-of-service orders. Table 1 summarizes information on 10 of the 20 cases that we investigated. Appendix II provides details on the 10 others we examined. Appendix III provides a summary of the key data elements that matched on the new entrants that were substantially related to out-of-service carriers.
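Exact matching misses the abbreviations and typos noted above; the "close but not exact" comparison FMCSA later describes for its vetting process is one way to catch them. The sketch below, using only Python's standard library, shows one possible normalize-then-compare approach. The abbreviation table, similarity threshold, and examples are assumptions for illustration, not FMCSA's actual matching rules.

```python
# Illustrative sketch only: not FMCSA's actual rules. Normalize values, then treat
# highly similar normalized strings as a match, so "John P. Smith Jr." ~ "John Smith".
from difflib import SequenceMatcher

ABBREVIATIONS = {"ln.": "lane", "ln": "lane", "st.": "street", "st": "street",
                 "jr.": "", "jr": "", "sr.": "", "sr": ""}

def normalize(value: str) -> str:
    """Lowercase, expand common abbreviations, and drop single-letter middle initials."""
    words = []
    for word in value.lower().replace(",", " ").split():
        word = ABBREVIATIONS.get(word, word)
        if len(word.rstrip(".")) == 1:   # skip middle initials such as "P."
            continue
        if word:
            words.append(word)
    return " ".join(words)

def is_close_match(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two values as matching if their normalized forms are highly similar."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(is_close_match("John P. Smith Jr.", "John Smith"))   # True
print(is_close_match("123 Maple Ln.", "123 Maple Lane"))   # True
```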
The following narratives provide detailed information on three of the more egregious cases we examined. Case 1: The owner of a Houston motor coach company registered a new carrier with the same phone number, fax number, and cell phone number as the old one. The new company started in March 2007, 8 months before FMCSA ordered the old company out of service. The two companies appear to have operated simultaneously for a period of time. Six days after the new company was formed, a motor coach carrying 16 passengers operated by the old company was stopped and inspected on the United States–Mexico border at Laredo, Texas. The old company was charged with five violations, including “Operating without required operating authority.” The old company owes $2,000 in fines. Our investigators contacted one of the owner’s daughters. She stated that her mother was arrested for possessing drugs when she crossed the border from Mexico into the United States, and that her mother subsequently opened another bus company using another daughter’s name. The daughter said she was not involved with the bus company. In 2008, the new carrier’s new entrant registration was revoked and the owner was convicted of cocaine possession. Case 2: The owner of a New York motor coach company located at a church registered a new carrier using the same fax number, driver, and vehicle as the old one. FMCSA conducted a compliance review for the old company on May 30, 2007. Five safety violations were identified, including one “Acute” violation for “Failure to implement an alcohol and/or controlled substances testing program.” The old company, which was ordered out of service in October 2007 for failing to pay a fine, still has $2,000 in outstanding fines as of May 2009. On August 9, 2007, approximately 2 months prior to FMCSA ordering the old company out of service, a new carrier was established with the same company officer name and fax number as the old carrier. The location of the old carrier was a school, which was associated with (and located next door to) the church. The owner of the new company claimed that the old company belonged to his father, not him, and that it was a “completely different business from his own.” FMCSA records clearly show that this is not the case. The new owner is listed as “Vice President” of the old company, and “President” of the new company. The owner of the new company is also cited as being present during the Compliance Review conducted on May 30, 2007. The new company registered with FMCSA in August 2007 and FMCSA has not conducted a new-entrant safety audit of the new carrier as of July 2009, exceeding FMCSA’s internal goal of 9 months. Case 3: The owner of a Los Angeles motor coach company registered a new carrier using the same social security number, business name, phone number, fax number, and company officer as the old one. FMCSA conducted a compliance review on the old company in December 2006, resulting in an “Unsatisfactory” safety rating. The review cited 11 safety violations, including one “Acute” violation for “Failure to implement an alcohol and/or controlled substances testing program” and four “Critical” violations for failure to maintain driver and vehicle records. Since the old company did not take the necessary steps to fix the violations within 45 days, it was ordered out of service in February 2007. A month later a motor coach operated by the old company was inspected in Douglas, Arizona. 
The company was charged with operating a commercial motor vehicle after the effective date of an “unsatisfactory” rating and fined $5,620. The same owner started the new company in June 2008. FMCSA conducted a compliance review on the new carrier and gave it a “satisfactory” safety rating in October 2008. FMCSA officials stated that they were not aware of any affiliation with the previous company. Our investigators visited the place of business of the new carrier, which was being run out of a retail store. Although the old carrier was out of service, several brochures and business cards for the old carrier were displayed on the store’s counter, showing the same phone number as the new company. We attempted to contact the owner, but the business representative stated that the owner was currently in Mexico as the driver on a bus tour and could not be contacted. A week after our interview, unrelated to our investigation, FMCSA revoked the new company’s authority due to lack of insurance. Prior to the August 2008 crash in Sherman, Texas, FMCSA did not have a dedicated process to identify and prevent motor coach carriers from reincarnating. At that time, an out-of-service carrier could easily apply online for a new DOT number and operating authority. In the application, the owner could include the same business name, address, phone number(s), and company officer(s) that already existed under the out-of-service DOT carrier. FMCSA did not have a process to identify these situations, and thus FMCSA would have granted the new entrant operating authority upon submission of the appropriate registration data. Subsequent to the Sherman crash, FMCSA established the Passenger Carrier Vetting Process (PCVP), which requires that each new application be reviewed for the possibility that the applicant is a reincarnated carrier. Under this process, FMCSA executes a computer matching process to compare information contained in the motor coach carrier’s application to data on poor-performing motor coach carriers dating back to 2003. Specifically, it performs an exact match of the application with fields in nine categories across various FMCSA databases. This produces a list of suspect carriers and the number of matches in each category, which serve as indicators for further investigation. FMCSA officials stated that they have begun to enhance the computer-matching portion of the PCVP. Specifically, the system will also be able to match fields that are close, but not necessarily exact matches of each other. For instance, “John P. Smith Jr.” would match “John Smith,” and “Maple Ln.” would match “Maple Lane.” This enhancement should improve FMCSA’s ability to detect carriers attempting to disguise their prior registration. In addition to the computer matching, FMCSA Headquarters personnel receive and review each new carrier application for completeness and accuracy. They review the application for any red flags or evidence that the company is a potentially unsafe reincarnated motor coach carrier. For example, FMCSA staff check secretaries of state databases for the articles of incorporation to identify undisclosed owners. If the computer-matching process or FMCSA Division Office review identifies any suspected motor coach carriers attempting to reincarnate, FMCSA sends a Verification Inquiry letter to the applicant requesting clarification. If the carrier does not respond to the Verification Inquiry letter within 20 days, the application will be dismissed.
If the response to the letter shows the applicant is attempting to reincarnate, FMCSA issues a Show Cause Order stating that the application for authority will be denied unless the carrier can present evidence to the contrary. If the application is not completed, FMCSA dismisses the application and thus no authority is given. After the carrier is approved to operate, FMCSA requires all new carriers, including motor coach carriers, to undergo a safety audit within 18 months of approval. During this review, FMCSA should identify whether the new motor coach company is a reincarnation of a prior carrier. Although we did not specifically evaluate the effectiveness of the new-entrant audit process, we found two cases where FMCSA did not identify new motor coach carriers as reincarnations of companies it had ordered out of service, even after the PCVP went into effect. Because we did not evaluate the effectiveness of the new-entrant safety audit and the PCVP, we do not know the extent to which reincarnated carriers are still able to avoid FMCSA detection when registering to operate with the agency. GAO recently reported that the Performance and Registration Information Systems Management (PRISM) program provides up-to-date information on the safety status of the carrier responsible for the safety of a commercial vehicle prior to issuing or renewing vehicle registrations. PRISM generates a daily list of vehicles registered in the state that are associated with carriers that have just been ordered out of service by FMCSA. It is a tool that can be used by state personnel. PRISM’s innovation is that it is designed to associate vehicle identification numbers with out-of-service carriers to prevent the carrier from registering or reregistering its vehicles. Although PRISM is a potential deterrent to a carrier wishing to reincarnate, only 25 states have implemented the system to the extent that they can automatically identify out-of-service carriers and then deny, suspend, or revoke their vehicle registrations. Another limitation to PRISM’s effectiveness is that it only includes vehicles registered under a protocol known as the International Registration Plan (IRP), which pertains only to carriers involved in interstate commerce. Charter buses are exempt from IRP (interstate) registration and thus not subject to PRISM. Furthermore, vehicles are not checked at registration because companies are not required to supply vehicle information on their application to FMCSA. FMCSA’s duty and authority to deny operating authority registration to persons not meeting statutory requirements are provided by statute. A person applying for registration must demonstrate that he or she is willing and able to comply with the safety regulations, other applicable regulations of the Secretary, and the safety fitness requirements. Complexities regarding the application of State laws on corporate successorship may, in certain instances, affect the agency’s ability to deny operating authority to or pursue enforcement against unsafe reincarnated motor carriers under these statutory provisions. The complexities include the legal standard that must be met to hold a newly formed corporation liable for civil penalties assessed against its corporate predecessor. The facts necessary to satisfy the legal standard, whether under federal or State law, require documentation outside the normal compliance review processes.
FMCSA uses a detailed Field Worksheet which lists types of evidence that would be needed, including company contact information, documentation on management and administrative personnel, business assets, tax records, insurance, payroll, drivers, vehicles, customer lists, advertising and promotional materials, corporate charters, and information on the corporate acquisition or merger at issue. This labor intensive investigative process is not undertaken unless strong preliminary evidence indicates that the new company is a reincarnation of a former motor carrier against which enforcement was taken and that the reincarnation was for the purpose of evading enforcement action or violation history of the predecessor company. In order to make it easier for FMCSA to place a reincarnated carrier out of service, the Highways and Transit Subcommittee of the Committee on Transportation and Infrastructure of the House of Representatives approved legislation on June 24, 2009, that would impose a uniform federal standard and would authorize FMCSA to deny or revoke operating authority from a carrier who failed to disclose a relationship with a prior carrier. The legislation would also authorize FMCSA, in certain cases, to impose civil penalties against a reincarnated motor carrier that were originally imposed against a related motor carrier. We briefed U.S. Department of Transportation (DOT) officials on the results of our investigation. They agreed that reincarnation of motor coach carriers is an important concern but stated that there are legitimate reasons for motor coach carriers to transfer ownership or reincorporate, or both, such as divorce, death, relocation, or new business opportunities. DOT officials stated that they established the PCVP to identify and attempt to prevent reincarnated carriers from receiving approval for operating authority. DOT officials stated that the PCVP is also used for household goods carriers and that they hope to use the process for other types of carriers if they obtain the resources to support this process. However, DOT officials stated that even if DOT has identified a carrier as a reincarnation, DOT must still prove that the new carrier is the corporate successor to the old carrier in order to deny or revoke the operating authority of the new carrier. DOT officials stated that this standard differs between states and that certain states require a very high standard of proof. As such, this determination is labor-intensive and requires documentation outside the normal compliance review process. DOT officials also provided technical comments to the report, which we addressed, as appropriate. As agreed with your office, unless you announce the contents of this report earlier, we will not distribute it until 3 days after its issue date. At that time, we will send copies of this report to the Secretary of Transportation and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
To identify new entrants that were substantially related to motor carriers ordered out of service, we obtained and analyzed information from the following DOT databases: the Motor Carrier Management Information System (MCMIS), the Licensing & Insurance (L&I) system, and the Enforcement Management Information System (EMIS), as of December 2008. We identified new motor coach operators as those that had a New Entrant Program entry date of October 1, 2006, or later. We identified out-of-service motor carriers as those with an active, nonrescinded out-of-service order in place and who had been ordered off the road for reasons other than failure to make contact with DOT while in the New Entrant Program. We matched the new entrant carriers with those that were ordered out of service on the following key fields: company name, owner/officer name, address, phone number, cell phone number, fax number, vehicle identification number, and driver names. For the motor coach carriers identified, we interviewed the owners, where possible, to validate whether the company had reincarnated and to determine the reason for the reincarnation. Our analysis understates the actual number of reincarnated carriers because the matching scheme used cannot detect even minor changes in spelling, addresses, or owner names. In addition, the number is understated because FMCSA only provided us data on vehicles and drivers when an accident or inspection took place, and thus the provided FMCSA data does not include the entire population of vehicles or drivers for either new entrants or out-of-service carriers. Our analysis also could not identify all reincarnated carriers where the owners purposely provided FMCSA bogus or deceptive information on the application (e.g., ownership) to hide the reincarnation from FMCSA. To determine the tools FMCSA uses to identify reincarnated carriers, we interviewed FMCSA officials on the process that the agency uses to attempt to identify potentially reincarnating carriers. We also obtained and examined policies and other FMCSA documentation to obtain an understanding of the design of its motor carrier enrollment process. We did not perform any tests of the controls and therefore cannot make conclusions on their effectiveness. To determine the reliability of DOT’s databases, we reviewed the system documentation and performed testing on the validity of the data. We performed electronic testing of the data, including verifying the completeness of the carrier data against numbers published by DOT. We discussed the sources of the different data types with DOT officials and discussed their ongoing quality-control initiatives. Based on our review of agency documents and our own testing, we concluded that the data elements used for this report were sufficiently reliable for our purposes. We conducted the work for this investigation from November 2008 through July 2009 in accordance with quality standards for investigations as set forth by the Council of the Inspectors General on Integrity and Efficiency. In the body of the report, we provide detailed information on 10 reincarnated carriers. Table 2 below provides detailed information on the other 10 motor coach carriers that we investigated and determined were potential reincarnations.
The cases were primarily identified by two or more exact matches of FMCSA data for new entrants and for out-of-service carriers in the following categories: company name, owner/officer name, address, phone number, cell phone number, fax number, vehicle identification number, and driver names. As stated earlier, we identified 20 new entrants that were substantially related to motor carriers ordered out of service. We identified these new entrant carriers by matching them with those that were ordered out of service on the following key fields: company name, owner/officer name, address, phone number, cell phone number, fax number, vehicle identification number, and driver names. Table 3 below provides the fields that were matched between the new entrant and the carrier that was ordered out of service. GAO staff who made major contributions to this report include Matthew Valenta, Assistant Director; John Ahern; Donald Brown; John Cooney; Paul Desaulniers; Eric Eskew; Timothy Hurley; Steve Martin; Vicki McClure; Sandra Moore; Andrew O’Connell; Anthony Paras; Philip Reiff; and Ramon Rodriguez. | The Federal Motor Carrier Safety Administration (FMCSA) reports that in 2008 there were about 300 fatalities from bus crashes in the United States. Although bus crashes are relatively rare, they are particularly deadly since many individuals may be involved. FMCSA tries to identify unsafe motor coach carriers and take them off the road. GAO was asked to determine (1) to the extent possible, the number of motor coach carriers registered with FMCSA as new entrants in fiscal years 2007 and 2008 that are substantially related to or in essence the same carriers the agency previously ordered out of service, and (2) what tools FMCSA uses to identify reincarnated carriers. To identify new entrants that were substantially related to carriers placed out of service, we analyzed FMCSA data to find matches on key fields (e.g., ownership and phone numbers). Our analysis understates the actual number of reincarnated carriers because, among other things, the matching scheme used cannot detect minor spelling changes or other deception efforts. We interviewed FMCSA officials on how the agency identifies reincarnated carriers. GAO is not making any recommendations. In July 2009, GAO briefed FMCSA on its findings and incorporated FMCSA’s comments, as appropriate. Our analysis of FMCSA data for fiscal years 2007 and 2008 identified twenty motor coach companies that likely reincarnated from “out of service” carriers. This represents about 9 percent of the approximately 220 motor coach carriers that FMCSA placed out of service during these two fiscal years. The number of likely reincarnated motor carriers is understated, in part, because our analysis was based on exact matches and also could not identify owners who purposely provided FMCSA deceptive information on the application (e.g., ownership) to hide the reincarnation from the agency. Although the number of reincarnated motor coach carriers that we could identify was small, these companies pose a safety threat to the motoring public. According to FMCSA officials, under registration and enforcement policies up to summer 2008, reincarnation was relatively simple to do and hard to detect. As a result, motor coach carriers known to be safety risks were continuing to operate. According to FMCSA data, five of the twenty bus companies were still in operation as of May 2009. We referred these cases to FMCSA for further investigation.
The twenty cases that we identified as likely reincarnations were registered with FMCSA at a time when FMCSA did not have any dedicated controls in place to prevent motor coach carriers from reincarnating. In 2008, FMCSA instituted a process to identify violators by checking applicant information against the information on record for poor-performing carriers. For example, if FMCSA finds that a new entrant shares an owner name or company address with an out-of-service company, the agency will make inquiries to determine if the new applicant is related to the out-of-service carrier. If such a determination is made, FMCSA still faces legal hurdles, such as proving corporate successorship, to deny the company operating authority. |
The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. As you know, our country’s transition into the 21st century is characterized by a number of key trends including: the national and global response to terrorism and other threats to personal and national security; the increasing interdependence of enterprises, economies, civil society, and national governments, referred to as globalization; the shift to market-oriented, knowledge-based economies; an aging and more diverse U.S. population; advances in science and technology and the opportunities and challenges created by these changes; challenges and opportunities to maintain and improve the quality of life for the nation, communities, families, and individuals; and the changing and increasingly diverse nature of governance structures and tools. As the nation and government policymakers grapple with the challenges presented by these evolving trends, they do so in the context of an overwhelming fact: The fiscal pressures created by the retirement of the baby boom generation and rising health care costs threaten to overwhelm the nation’s fiscal future. Our latest long-term budget simulations reinforce the need for change in the major cost drivers—Social Security and health care programs. By midcentury, absent reform of these entitlement programs and/or other major tax or spending policy changes, projected federal revenues may be adequate to pay little beyond interest on the debt and Social Security benefits. Further, our recent shift from surpluses to deficits means that the nation is moving into the future in a weaker fiscal position. In response to the emerging trends and long-term fiscal challenges the government faces in the coming years, we have an opportunity to create highly effective, performance-based organizations that can strengthen the nation’s ability to meet the challenges of the 21st century and reach beyond our current level of achievement. The federal government cannot accept the status quo as a “given”—we need to reexamine the base of government programs, policies, and operations. We must strive to maintain a government that is effective and relevant to a changing society—a government that is as free as possible of outmoded commitments and operations that can inappropriately encumber the future, reduce our fiscal flexibility, and prevent future generations from being able to make choices regarding what roles they think government should play. Many departments and agencies were created in a different time and in response to problems and priorities very different from today’s challenges. Some have achieved their one-time missions and yet they are still in business. Many have accumulated responsibilities beyond their original purposes. Others have not been able to demonstrate how they are making a difference in real and concrete terms. Still others have overlapping or conflicting roles and responsibilities. Redundant, unfocused, and uncoordinated programs waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. Our work has documented the widespread existence of fragmentation and overlap from both the broad perspective of federal missions and from the more specific viewpoint of individual federal programs. 
As new needs are identified, the common response has been a proliferation of responsibilities and roles assigned to federal departments and agencies, perhaps targeted on a newly identified clientele, or involving a new program delivery approach, or, in the worst-case scenario, merely layered onto existing systems in response to programs that have failed or performed poorly. Though our work also suggests that some issues may warrant involvement of multiple agencies or more than one approach, fragmentation and overlap adversely affect the economy, efficiency, and effectiveness of the federal government. It is obviously important to periodically reexamine whether current programs and activities remain relevant, appropriate, and effective in delivering the government that Americans want, need, and can afford. This includes assessing the sustainability of the programs, as well as the effectiveness of the tools—such as direct spending, loan guarantees, tax incentives, regulation, and enforcement—that these programs embody. Many federal programs—their goals, organizations, processes, and infrastructures—were designed years ago to meet the needs and demands as determined at that time and within the technological capabilities of that earlier era. The recent report of the Volcker Commission similarly observed that "[f]ifty years have passed since the last comprehensive reorganization of the government" and that "[t]he relationship of the federal government to the citizens it services became vastly broader and deeper with each passing decade." The commission recommended that a fundamental reorganization of the federal government into a limited number of mission-related executive departments was needed to improve its capacity to design and implement public policy. We now have both an opportunity and an obligation to take a comprehensive look at what the government should be doing and how it should go about doing its work. Based on GAO's own recent experiences with restructuring, such a fundamental reexamination of government missions, functions, and activities could improve government effectiveness and efficiency and enhance accountability by reducing the number of entities managed, thereby broadening spans of control, increasing flexibility, and fully integrating rather than merely coordinating related government activities. Given the obvious case for reexamining the government's structure, the major issue for debate today is the question of whether and how to change the Congress' normal deliberative process for reviewing and shaping executive branch restructuring proposals. Such authority can serve to better enable presidential leadership to propose government designs that would be more efficient and effective in meeting existing and emerging challenges. Presidential leadership is critical to set goals and propose the means—the organizational design and policy tools—needed to achieve the goals. However, it is important to ensure a consensus on identified problems and needs, and to be sure that the solutions our government legislates and implements can effectively remedy the problems we face in a timely manner. Fixing the wrong problems, or even worse, fixing the right problems poorly, could cause more harm than good. Congressional deliberative processes serve the vital function of both gaining input from a variety of clientele and stakeholders affected by any changes and providing an important constitutional check and counterbalance to the executive branch. 
The statutory framework for management reform enacted during the 1990s demonstrates the Congress' capacity to deal with governmentwide management reform needs. The Congress sought to improve the fiscal, program, and management performance of federal agencies, programs, and activities. For example, the Government Performance and Results Act (GPRA) is a central component of the existing statutory management framework, which includes other major elements, such as the Chief Financial Officers (CFO) Act, and information resource management improvements, such as the Clinger-Cohen Act. These laws provide information that is pertinent to a broad range of management-related decisions to help promote a more results-oriented management and decision-making process, regardless of what organizational approach is employed. The normal legislative process, which by design takes time to encourage thorough debate, does help to ensure that any related actions are carefully considered and have broad support. The Congress has played a central role in management improvement efforts throughout the executive branch and has acted to address several high-risk areas through both legislative and oversight activities. Traditionally, congressional and executive branch considerations of policy trade-offs are needed to reach a reasonable degree of consensus on the appropriate federal response to any substantive national need. It is imperative that the Congress and the administration form an effective working relationship on restructuring initiatives. Any systemic changes to federal structures and functions must be approved by the Congress and implemented by the executive branch, so each has a stake in the outcome. Even more importantly, all segments of the public that must regularly deal with their government—individuals, private sector organizations, states, and local governments—must be confident that the changes that are put in place have been thoroughly considered and that the decisions made today will make sense tomorrow. Only the Congress can decide whether it wishes to limit its powers and role in government reorganizations. As part of the legislative branch, we at GAO obviously have some concerns regarding any serious diminution of congressional authority. In certain circumstances, the Congress may deem limitations appropriate; however, care should be taken regarding the nature, timing, and scope of any related changes. Lessons can be learned from prior approaches to granting reorganization authority to the President. Prior successful reorganization initiatives reinforce the importance of maintaining a balance between executive and legislative roles in undertaking significant organizational changes. Safeguards are needed to ensure congressional input and concurrence on the goals as well as overall restructuring proposals. In the final analysis, the Congress must agree with any restructuring proposals submitted for consideration by the President in order for them to become a reality. Periodically, between 1932 and 1984, the Congress provided the President one form or another of expedited reorganization authority. Most of the authority granted during this period shared three characteristics. First, most previous authorities established rules that allowed the President's plan to go into effect unless either house acted by passing a motion of disapproval within a fixed period. 
However, in accordance with the 1983 Chadha decision, which held the one-house legislative veto unconstitutional, the most recent expedited reorganization authority, granted to President Reagan in 1984, required that an affirmative joint resolution be passed by both houses and signed by the President to approve any presidential reorganization plan. Hence, the need for both houses to positively approve a President's plan for it to take effect set a higher bar for success and in essence gave the Congress a stronger role than in the past. Second, between 1949 and 1984, the Congress increasingly limited the scope of what the President could propose in a reorganization plan, which also had the effect of enhancing congressional control. For example, whereas in 1949, there were few restrictions on what the President could propose, the Reorganization Act of 1977 prohibited plans that, among other things, established, abolished, transferred, or consolidated departments or independent regulatory agencies. Third, expedited reorganization authority granted during this period limited the time during which a President could propose any reorganization plans. Clearly, the extent to which the Congress was willing to cede its authority to oversee the President's reorganization plans has been an important variable in designing such provisions. Throughout the 20th century, efforts to structure the federal government to address the economic and political concerns of the time met with varying degrees of success. The first Hoover Commission, which lasted from 1947 to 1949, is considered by many to have been the most successful of government restructuring efforts. The membership was bipartisan, including members of the administration and both houses of the Congress. Half its members were from outside government. The commission had a clear vision, making reorganization proposals that promoted what it referred to as "greater rationality" in the organization and operation of government agencies and enhanced the President's role as the manager of the government—principles that were understood and accepted by both the White House and the Congress. Former President Hoover himself guided the creation of a citizens' committee to build public support for the commission's work. More than 70 percent of the first Hoover Commission's recommendations were implemented, including 26 out of 35 reorganization plans. According to the Congressional Research Service, "the ease with which most of the reorganization plans became effective reflected two factors: the existence of a consensus that the President ought to be given deference and assistance by Congress in meeting his managerial responsibilities and the fact that most of the reorganization plans were pretty straightforward proposals of an organizational character." By contrast, the second Hoover Commission, which lasted from 1953 to 1954, had a makeup very similar to that of the first, but it did not have the advance backing of the President and the Congress. Hoover II, as it was called, got into policy areas with the goal of cutting government programs. But it lacked the support of the President, who preferred to use his own advisory group in managing the government. It also lacked the support of the Congress and the public, neither of which cared to cut the government at a time when federally run programs were generally held in high esteem and considered efficient and beneficial. 
More than 60 percent of Hoover II's recommendations were implemented, but these were mostly drawn from the commission's technical recommendations rather than from its major ones (such as changing the government's policies on lending, subsidies, and water resources) that would have substantively cut federal programs. The lesson of the two Hoover Commissions is clear: If plans to reorganize government are to move from recommendation to reality, creating a consensus for them is essential to the task. In this regard, both the process employed and the players involved in making any specific reorganization proposals are of critical importance. The success of the first Hoover Commission can be tied to the involvement and commitment of both the Congress and the President. Both the legislative and executive branches agreed to the goals. With this agreement, a process was established that provided for widespread involvement, including citizens, and transparency so that meaningful results could be achieved. That lesson shows up again in the experience of the Ash Council, which convened in 1969-70. Like the first Hoover Commission, the Ash Council aimed its recommendations at structural changes to enhance the effectiveness of the President as manager of the government. In addition to renaming the Bureau of the Budget the Office of Management and Budget, the Ash Council proposed organizing government around broad national purposes by integrating similar functions under major departments. It proposed that four super departments be created—economic affairs, community development, natural resources, and human services—with State, Defense, Treasury, and Justice remaining in place. But the Ash Council could not gain the support of the Congress. Its recommendations would have drastically altered jurisdictions within the Congress and the relationships between committees and the agencies for which they had oversight responsibilities. The Congress was not entirely clear on the implications of the four super departments, was not readily willing to change its own structure to parallel the structure proposed by the council, and was not eager to substantially strengthen the authority of the presidency. Once again, the lesson for today is that reorganizing government is an immensely complex and politically charged activity. Those who would reorganize government must make their rationale clear and must build a consensus for change before specific proposed reorganizations are submitted to Congress if they are to see their efforts bear fruit. It is important that all players, particularly the Congress and the President, reach agreement on restructuring goals and establish a process to achieve their objectives that provides needed transparency if anything substantive is to be achieved. The process may vary depending on the significance of the changes sought. However, the risk of failure is high without having the involvement of key players and a process to help reach consensus on specific reorganization proposals that are submitted to the Congress for its consideration. A final important lesson from these prior experiences is that a balance must be struck between the need for due deliberation and the need for action. A distinction also needs to be made between policy choices and operational choices. Relatively straightforward reorganization proposals that focus on operational issues appear to have met with greater success than those that addressed more complex policy issues. 
For example, proposals to eliminate programs, functions, or activities typically involve policy choices. On the other hand, a proposal to consolidate those same activities within a single organization is focused more on management effectiveness and efficiency than on policy changes. Therefore, in contrast to the past "one-size-fits-all" approaches, in again granting expedited reorganization authority to the President, the Congress may wish to consider different tracks that allow for a longer period for review and debate of proposals that include significant policy elements as opposed to operational elements. Three years ago, I testified that the challenge for the federal government at the start of the 21st century is to continue to improve and to translate the management reforms enacted by the Congress in the 1990s into a day-to-day management reality across government. Restructuring can be an important tool in this effort. Restructuring efforts must, however, be focused on clear goals. Further, irrespective of the number and nature of federal entities, creating high-performing organizations will require a cultural transformation in government agencies. Hierarchical management approaches will need to yield to partnerial approaches. Process-oriented ways of doing business will need to yield to results-oriented ones. Siloed organizations—burdened with overlapping functions, inefficiencies and turf battles—will need to become more horizontal and integrated organizations if they expect to make the most of the knowledge, skills, and abilities of their people. Internally focused agencies will need to focus externally in order to meet the needs and expectations of their ultimate clients—the American people. In the coming month, I plan to convene a forum to discuss steps federal agencies can take to become high-performing organizations. GAO is leading by example. To create a world-class professional services organization, we have undertaken a comprehensive transformation effort over the past few years. Our strategic plan, which is developed in consultation with the Congress, is forward-looking and built on several key themes that relate to the United States and our position in the world community. We restructured our organization in calendar year 2000 to align with our goals, resulting in significant consolidation—going from 35 to 13 teams, eliminating an extra organizational layer, and reducing the number of field offices from 16 to 11. We have become more strategic, results-oriented, partnerial, integrated, and externally focused. Our scope of activities includes a range of oversight-, insight-, and foresight-related engagements. We have expanded and revised our products to better meet client needs. In addition, we have redefined success in results-oriented terms and linked our institutional and individual performance measures. We have strengthened our client relations and employed a "constructive engagement approach" with those we review. The impact on our results has been dramatic. Several of our key performance measures have almost doubled, and our client feedback reports show that satisfaction has also improved. There are six important elements to consider for a successful reorganization—establishing clear goals, taking an integrated approach, developing a comprehensive human capital strategy, selecting appropriate service delivery mechanisms, managing the implementation, and providing effective oversight. Clear goals. 
The key to any reorganization plan is the creation of specific, identifiable goals. The process to define goals will force decision makers to reach a shared understanding of what really needs to be fixed in government, what the federal role really ought to be, how to balance differing objectives, and what steps need to be taken to create not just short-term advantages but long-term gains. The mission and strategic goals of an organization must become the focus of the transformation, define the culture, and serve as a vehicle to build employee and organizational identity and support. Mission clarity and a clear articulation of priorities are critical, and strategic goals must align with and support the mission and serve as continuing guideposts for decision making. New organizations must have a clear set of principles and priorities that serve as a framework for the organization, create a common culture, and establish organizational and individual expectations. The most recent restructuring, the formation of the Department of Homeland Security (DHS), illuminates this point. There was clear national consensus that a new national goal and priority was homeland security. With agreement on the mission and goals of this new department, the various activities and functions scattered throughout the government could be identified and moved into the new department. Building a framework of clearly articulated goals facilitates any restructuring effort. This is true for both the initial design and the implementation. The government today is faced with many challenges. In considering restructuring, it is important to focus on not just the present but the future trends and challenges. Identification of goals to address these trends and challenges provides a framework for achieving consensus and organizational design. In fact, the effects of any reorganization are felt more in the future than they are today. The world is not static. Therefore, it is vital to take the long view, positioning the government to meet the challenges of the 21st century. Regardless of the immediate objectives, any reorganization should have in mind certain overarching goals: a government that serves the public efficiently and economically, that is run in a sound, businesslike fashion with full accountability, and that is flexible enough to respond to change. Integrated approach. The importance of seeing the overall picture cannot be overestimated. Reorganization demands a coordinated approach, within and across agency lines, supported by solid consensus for change. One cannot underestimate the interconnectedness of government structure and activities. Make changes here, and you will certainly affect something over there. Our work has certainly illuminated the interconnectedness of federal programs, functions, and activities. DHS again provides lessons. Though many homeland security responsibilities, functions, and activities have been brought under the umbrella of DHS, many remain outside. DHS will have to form effective partnerships throughout the federal government—on intelligence functions, health issues, science activities. In addition, partnerships will be required outside the federal government—state and local governments, private sector organizations, and the international community, if DHS is to successfully accomplish its mission. We have previously reported that the Government Performance and Results Act (Results Act) could provide a tool to reexamine roles and structure at the governmentwide level. 
The Results Act requires the President to include in his annual budget submission a federal government performance plan. The Congress intended that this plan provide a “single cohesive picture of the annual performance goals for the fiscal year.” The governmentwide performance plan could be a unique tool to help the Congress and the executive branch address critical federal performance and management issues. It also could provide a framework for any restructuring efforts. Unfortunately, this provision has not been fully implemented. Beyond an annual performance plan, a strategic plan for the federal government might be an even more useful tool to provide broad goals and facilitate integration of programs, functions, and activities, by providing a longer planning horizon. In the strategic planning process, it is critical to achieve mission clarity in the context of the environment in which we operate. With the profound changes in the world, a re-examination of the roles and missions of the federal government is certainly needed. From a clearly defined mission, goals can be defined and organizations aligned to carrying out the mission and goals. Integration and synergy can be achieved between components of the government and with external partners to provide more focused efforts on goal achievement. If fully developed, a governmentwide strategic plan can potentially provide a cohesive perspective on the long-term goals for a wide array of federal activities. Successful strategic planning requires the involvement of key stakeholders. Thus, it could serve as a mechanism for building consensus. The process of developing the plan could prompt a more integrated and focused discussion between the Congress and the administration about long-term priorities and how agencies interact in implementing those priorities. Further, it could provide a vehicle for the President to articulate long-term goals and a road map for achieving them. In the process, key national performance indicators associated with the long-term goals could be identified and measured. In addition, a strategic plan can provide a much needed framework for considering any organizational changes and making resource allocation decisions. Essentially, organizations and resources (e.g., human, financial, and technological) are the ways and means of achieving the goals articulated by the strategic plan. Organizations should be aligned to be consistent with the goals and objectives of the strategic plan. Clear linkages should exist between the missions and functions of an organization and the goals and objectives it is trying to achieve. In making the trade-offs in resource decisions, a strategic plan identifies clear priorities and forms a basis for allocating limited resources for maximum effect. The process of developing a strategic plan that is comprehensive, integrated, and reflects the challenges of our changing world will not be easy. However, the end result could be a government that serves the public efficiently and economically, that is run more efficiently and effectively with full accountability, and that is flexible enough to respond to our rapidly changing world. Human capital strategy. People are an organization’s most important asset, and strategic human capital management should be the centerpiece of any transformation or organizational change initiative. An organization’s people define its character, affect its capacity to perform, and represent the knowledge base of the organization. 
Since 2001, we have designated human capital management as a governmentwide high-risk area. The Congress and the executive branch have taken a number of steps to address the federal government's human capital shortfalls. However, serious human capital challenges continue to erode the ability of many agencies, and threaten the ability of others, to economically, efficiently, and effectively perform their missions. A consistent, strategic approach to maximize government performance and ensure its accountability is vital to the success of any reorganization efforts as well as to existing organizations. A high-performance organization focuses on human capital. Human capital approaches are aligned with mission and goal accomplishment. Strategies are designed, implemented, and assessed based on their ability to achieve results and contribute to the organization's mission. Leaders and managers stay alert to emerging mission demands and human capital challenges. They reevaluate their human capital approaches through the use of valid, reliable, and current data, including an inventory of employee skills and competencies. Recruiting, hiring, professional development, and retention strategies are focused on having the needed talent to meet organizational goals. Individual performance is clearly linked with organizational performance. Effective performance management systems provide a "line of sight" showing how unit, team, and individual performance can contribute to overall organizational goals. Human capital strategies need to be built into any restructuring efforts. The Congress has recognized the importance of human capital in recent restructuring efforts. For example, in the creation of DHS and the Transportation Security Administration (TSA), human capital issues were addressed directly with the granting of flexibilities to improve the effectiveness of their workforces. Thus, human capital issues need to be addressed in both the design and implementation of any organization. Service delivery mechanisms. Once goals are defined, attention must be paid not only to how the government organizes itself but also to the tools it uses to achieve national goals. The tools for implementing federal programs include, for example, direct spending, loans and loan guarantees, tax expenditures, and regulations. A hallmark of a responsive and effective government is the ability to mix public/private structures and tools in ways that are consistent with overriding goals and principles while providing the best match with the nature of the program or service. The choice of tools will affect the results the government can achieve. Therefore, organizations must be designed to effectively use the tools they will employ. In most federal mission areas—from low-income housing to food safety to higher education assistance—national goals are achieved through the use of a variety of tools and, increasingly, through the participation of many organizations that are beyond the direct control of the federal government. This environment provides unprecedented opportunities to change the way federal agencies are structured to do business internally and across boundaries with state and local governments, nongovernmental organizations, private businesses, and individual citizens. Implementation. No matter what plans are made to reorganize the government, fulfilling the promise of these plans will depend on their effective implementation. The creation of a new organization may vary in terms of size and complexity. 
However, building an effective organization requires consistent and sustained leadership from top management to ensure the needed transformation of disparate agencies, programs, functions, and activities into an integrated organization. To achieve success, the end result should not simply be a collection of component units, but the transformation to an integrated, high-performance organization. The implementation of a new organization is an extremely complex task that can take years to accomplish. It is instructive to note that the 1947 legislation creating the Department of Defense was further changed four times by the Congress in order to improve the effectiveness of the department. Despite these changes, DOD continues to face a range of major management challenges, with six agency-specific challenges on our 2003 list and three governmentwide challenges. Start-up problems under any reorganization are inevitable but can be mitigated by comprehensive planning and strong leadership. An implementation plan anchored by an organization's mission, goals and core values is critical to success. An implementation plan should address the complete transition period, not just the first day or the first year. It must go beyond simply the timetable for the organization's creation, consolidation, or elimination. Effective implementation planning requires identification of key activities and milestones to transform the organization into a fully integrated, high-performance organization and establish accountability for results. Careful planning and attention to management practices and key success factors, such as strategic planning, information technology, risk management, and human capital management, are important to overall results. A human capital strategic plan must be developed. It is vital to have key positions filled with people who possess the critical competencies needed by the organization. Further, systems and processes need to be tailored to and integrated within the organization. The experiences of TSA highlight the need for long-term planning. A year after being set up, although great progress has been made, TSA still faces numerous challenges—ensuring adequate funding; establishing adequate cost controls; forming effective partnerships to coordinate activities; ensuring adequate workforce competence and staffing levels; ensuring information systems security; and implementing national security standards. Top leadership must set priorities and focus on the most critical issues. While top leadership is essential and indispensable, it will be important to have a broad range of agency leaders and managers dedicated to the transformation process to ensure that changes are thoroughly implemented and sustained over time. Dedicated management leadership can free the head of the organization from day-to-day operational and administrative issues, allowing time to focus on mission priorities. One approach to providing the sustained management attention essential for addressing key infrastructure and stewardship issues while helping facilitate the transition and transformation process is the creation of a chief operating officer (COO) position within selected federal agencies. To be successful, a COO must have a proven track record in a related position, must have a high profile—reporting directly to the agency head—and must be vested with sufficient authority to achieve results. 
Since successful restructurings often take a considerable amount of time, 5 to 7 years being common, a term appointment of up to 7 years might be warranted. To further clarify accountability, the COO should be subject to a clearly defined, results-oriented performance contract with appropriate incentives, rewards, and accountability mechanisms. Oversight. Congressional involvement is needed not just in the initial design of the organization, but in what can turn out to be a lengthy period of implementation. The Congress has an important role to play—both in its legislative and oversight capacities—in establishing, monitoring, and maintaining progress to attain the goals envisioned by government transformation and reorganization efforts. Sustained oversight by the Congress is needed to ensure effective implementation. The Congress' understanding of how the various agencies are performing will provide a measure of whether the reorganization is accomplishing its goals and whether it needs further refinement. Assessing progress is important to ensuring implementation is moving in the right direction. To ensure effective implementation, along with efficient and effective oversight, the Congress will also need to consider realigning its own structure. With changes in the executive branch, the Congress should adapt its own organization in order to improve its efficiency and effectiveness. Most recently, the Congress has undertaken a reexamination of its committee structure, with the implementation of DHS. In fact, the DHS legislation instructed both houses of Congress to review their committee structures in light of the reorganization of homeland security responsibilities within the executive branch. In summary, the key issue at hand is how to make changes and reforms and what the respective roles of the Congress and the executive branch should be in the process. Only the Congress can decide whether it wishes to limit its powers and role in government reorganizations. As part of the legislative branch, I obviously have some concerns about any serious diminution of your authority. In certain circumstances, the Congress may deem it appropriate. A distinction needs to be made between policy choices and operational choices, and a balance must be struck between the need for due deliberation and the need for action in these different cases. The Congress may wish to consider a longer period for review and debate of proposals that include significant policy elements versus operational elements. Further, the President and the Congress may wish to consider establishing a process (e.g., a commission) that provides for the involvement of the key players and a means to help reach consensus on any specific restructuring proposals that would be submitted for consideration by the Congress. | GAO has sought to assist the Congress and the executive branch in considering the actions needed to support the transition to a more high-performing, results-oriented, and accountable federal government. At the Committee's request, GAO provided perspective on the proposal to reinstate the authority for the President to submit government restructuring plans to the Congress for expedited review. In view of the overarching trends and the growing fiscal challenges facing our nation, there is a need to consider the proper role of the federal government, how the government should do business in the future, and in some instances, who should do the government's business in the 21st century. 
The fundamental issue raised by the proposal to grant reorganization authority to the President is not whether the government's organization can and should be restructured, but rather, whether and how the Congress wishes to change the nature of its normal deliberative process when addressing proposals to restructure the federal government. Given current trends and increasing fiscal challenges, a comprehensive review, reassessment, and reprioritization of what the government does and how it does it is clearly warranted. This is especially vital in view of changing priorities and the compelling need to examine the base of government programs, policies, and operations since, given GAO's long-term budget simulations, the status quo is unsustainable over time. While the intent of such a review is desirable and some expedited congressional consideration may well be appropriate for specific issues, the Congress also has an important role to play in government reform initiatives, especially from an authorization and oversight perspective. In contrast to the past "one-size-fits-all" approaches in developing new executive reorganization authority, the Congress may want to consider different tracks for proposals that involve significant policy changes versus those that focus more narrowly on government operations. Further, Congress may want to consider establishing appropriate processes to ensure the involvement of key players, particularly in the legislative and executive branches, to help facilitate reaching consensus on specific restructuring proposals that would be submitted for consideration, should the Congress enact a new executive reorganization authority. Modern management practices can provide a framework for developing successful restructuring proposals. Such practices include: establishing clear goals, following an integrated approach, developing an effective human capital strategy, considering alternative program delivery mechanisms, and planning for both initial and long-term implementation issues to achieve a successful transformation. Furthermore, successful implementation will depend in part on continuing congressional oversight. The Congress could significantly enhance its efficiency and effectiveness by adapting its own organization to mirror changes in the executive branch. |
Multiple executive-branch agencies are responsible for different phases in the federal government's personnel security clearance process. In 2008, the Director of National Intelligence, for example, was designated Security Executive Agent by Executive Order 13467 and, in this capacity, is responsible for developing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information or eligibility to hold a sensitive position. In turn, requesting executive branch agencies determine which positions—military, civilian, or private-industry contractors—require access to classified information and, therefore, which people must apply for and undergo a security clearance investigation. Investigators—often contractors—from Federal Investigative Services within the Office of Personnel Management (OPM) conduct these investigations for most of the federal government using federal investigative standards and OPM internal guidance as criteria for collecting background information on applicants. Adjudicators from requesting agencies, such as DOD, use the information contained in the resulting OPM investigative reports and consider federal adjudicative guidelines to determine whether an applicant is eligible for a personnel security clearance. DOD is OPM's largest customer, and its Under Secretary of Defense for Intelligence (USD(I)) is responsible for developing, coordinating, and overseeing the implementation of DOD policy, programs, and guidance for personnel, physical, industrial, information, operations, chemical/biological, and DOD Special Access Program security. Additionally, the Defense Security Service, under the authority, direction, and control of USD(I), manages and administers the DOD portion of the National Industrial Security Program for the DOD components and other federal agencies by agreement, as well as providing security education and training, among other things. Executive Order 13467 also established the Suitability and Security Clearance Performance Accountability Council. Under the executive order, this council is accountable to the President for driving implementation of the reform effort, including ensuring the alignment of security and suitability processes, holding agencies accountable for implementation, and establishing goals and metrics for progress. The order also appointed the Deputy Director for Management at the Office of Management and Budget as the chair of the council and designated the Director of National Intelligence as the Security Executive Agent and the Director of OPM as the Suitability Executive Agent. We have previously reported that, to safeguard classified data and manage costs, agencies need an effective process to determine whether positions require a clearance and, if so, at what level. 
Last year we found, however, that the Director of National Intelligence, as Security Executive Agent, has not provided agencies clearly defined policies and procedures to consistently determine if a civilian position requires a security clearance. Executive Order 13467 assigns the Director responsibility for, among other things, developing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information or eligibility to hold a sensitive position, and gives the Director authority to issue guidance to agency heads to ensure uniformity in processes relating to those determinations. Further, the Director also has not established guidance to require agencies to review and revise or validate existing federal civilian position designations. Executive Order 12968 says that, subject to certain exceptions, eligibility for access to classified information shall only be requested and granted on the basis of a demonstrated, foreseeable need for access, and the number of employees that each agency determines is eligible for access to classified information shall be kept to the minimum required. The order also states that access to classified information shall be terminated when an employee no longer has a need for access, and prohibits requesting or approving eligibility for access in excess of the actual requirements. Without such requirements, executive branch agencies may be hiring and budgeting for initial and periodic security clearance investigations using position descriptions and security clearance requirements that no longer reflect national security needs. In our July 2012 report, we found that Department of Homeland Security and DOD components' officials were aware of the need to keep the number of security clearances to a minimum, but were not always required to conduct periodic reviews and validations of the security clearance needs of existing positions. Overdesignating positions results in significant cost implications, given that the fiscal year 2012 base price for a top secret clearance investigation conducted by OPM was $4,005, while the base price of a secret clearance was $260. Conversely, underdesignating positions could lead to security risks. In the absence of guidance to determine if a position requires a security clearance, agencies are using a tool that OPM designed to determine the sensitivity and risk levels of civilian positions which, in turn, inform the type of investigation needed. OPM audits, however, found inconsistency in these position designations, and some agencies described problems in implementing OPM's tool. In an April 2012 audit, OPM reviewed the sensitivity levels of 39 positions in an agency within DOD and reached different conclusions than the agency for 26 of them. Problems exist, in part, because OPM and the Office of the Director of National Intelligence did not collaborate on the development of the position designation tool, and because their roles for suitability—consideration of character and conduct for federal employment—and security clearance reform are still evolving. 
In our July 2012 report, we concluded that without guidance from the Director of National Intelligence, and without collaboration between the Office of the Director of National Intelligence and OPM in future revisions to the tool, executive branch agencies will continue to risk making security clearance determinations that are inconsistent or at improper levels. In July 2012, we recommended, among other things, that the Director of National Intelligence, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue clearly defined policy and procedures for federal agencies to follow when determining if federal civilian positions require a security clearance. We also recommended that the Director of National Intelligence, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. The Director of National Intelligence concurred with our recommendations and has taken steps to implement them. We have emphasized—since the late 1990s—a need to build quality and quality monitoring throughout the clearance process to promote oversight and positive outcomes, such as honoring reciprocity. Executive branch efforts have emphasized timeliness, but efforts to develop and implement metrics for measuring the quality of investigations have not included goals with related outcome-focused measures to show progress or identify obstacles to progress and possible remedies. Furthermore, our recent reviews of OPM's investigations show reasons for continuing concern. For example, in May 2009 we reported that, with respect to initial top secret clearances adjudicated in July 2008, documentation was incomplete for most OPM investigative reports. We independently estimated that 87 percent of about 3,500 investigative reports that DOD adjudicators used to make clearance decisions were missing required documentation. We recommended that the Director of OPM direct the Associate Director of OPM's Federal Investigative Services Division to measure the frequency with which its investigative reports meet federal investigative standards in order to improve the completeness—that is, quality—of future investigation documentation. As of March 2013, however, OPM had not implemented our recommendation to measure how frequently investigative reports meet federal investigative standards. Instead, OPM continues to assess the quality of investigations based on voluntary reporting from customer agencies. Specifically, OPM tracks investigations that are (1) returned for rework from the requesting agency, (2) identified as deficient using a web-based survey, and (3) identified as deficient through adjudicator calls to OPM's quality hotline. In our past work, we have noted that the number of investigations returned for rework is not by itself a valid indicator of the quality of investigative work because adjudication officials have been reluctant to return incomplete investigations in anticipation of delays that would impact timeliness. Further, relying on agencies to voluntarily provide information on investigation quality may not reflect the quality of OPM's total investigation workload. In February 2011, we noted that one of OPM's customer agencies, DOD, had developed and implemented a tool known as Rapid Assessment of Incomplete Security Evaluations to monitor the quality of investigations completed by OPM. 
In that report, we noted that leaders of the reform effort had provided congressional members and executive branch agencies with metrics assessing quality and other aspects of the clearance process. Although the Rapid Assessment of Incomplete Security Evaluations was one tool the reform team members planned to use for measuring quality, according to an OPM official, OPM chose not to use this tool. Instead, OPM opted to develop another tool but has not provided details on the tool, including estimated time frames for its development and implementation. Since 2008, we have highlighted the importance of the executive branch enhancing efficiency and managing costs related to security clearance reform efforts. Government-wide suitability and personnel security clearance reform efforts have not yet focused on identifying potential cost savings, even though the stated mission of these efforts includes achieving cost savings. For example, in 2008, we noted that one of the key factors to consider in current and future reform efforts was the long-term funding requirements. Further, in 2009, we found that reform-related reports issued in 2008 did not detail which reform objectives require funding, how much they will cost, or where funding will come from. Finally, the reports did not estimate potential cost savings resulting from these reform efforts. While the Performance Accountability Council has a stated goal regarding cost savings, it has not provided the executive branch with guidance on opportunities for achieving efficiencies in managing personnel security clearances. For example, OPM's investigation process—which represents just a portion of the security clearance process and has significant costs—has not been studied for process efficiencies or cost savings. In February 2012, we reported that OPM received over $1 billion to conduct more than 2 million background investigations (suitability determinations and personnel security clearances) for government employees in fiscal year 2011. OPM officials explained that, to date, they have chosen to address investigation timeliness and investigation backlogs rather than the identification of process and workforce efficiencies. To its credit, OPM helped reduce the backlog of ongoing background investigations that it inherited from DOD at the time of the 2005 transfer. However, only recently has OPM started to look at its internal processes for efficiencies. Further, while OPM invested in an electronic case-management program, it continues to convert submitted electronic files to paper. In November 2010, the Deputy Director for Management of the Office of Management and Budget testified that OPM receives 98 percent of investigation applications electronically, yet we observed that it was continuing to use a paper-based investigation processing system and convert electronically submitted applications to paper. OPM officials stated that the paper-based process is required because a small portion of their customer agencies do not have electronic capabilities. As a result, OPM may be simultaneously investing in process streamlining technology while maintaining a less efficient and duplicative paper-based process. 
In 2012, we recommended that, to improve transparency of costs and the efficiency of suitability and personnel security clearance background investigation processes, the Director of OPM direct the Associate Director of Federal Investigative Services to take actions to identify process efficiencies that could lead to cost savings within its background investigation process. OPM agreed with this recommendation and we are working with OPM to assess any progress it has made in this area. Further, agencies have made potentially duplicative investments in case-management and adjudication systems without considering opportunities for leveraging existing technologies. In February 2012, as part of our annual report on opportunities to reduce duplication, overlap, and fragmentation (GAO-12-342SP), we reported that multiple agencies have invested in or are beginning to invest in potentially duplicative, electronic case-management and adjudication systems despite government-wide reform effort goals that agencies leverage existing technologies to reduce duplication and enhance reciprocity. For example, DOD began development of its Case Adjudication Tracking System in 2006 and, as of 2011, had invested a total of $32 million to deploy the system. The system helped DOD achieve efficiencies with case management and an electronic adjudication module for secret-level cases that did not contain issues, given the volume and types of adjudications performed. According to DOD officials, after the department observed that the Case Adjudication Tracking System could easily be deployed to other agencies at a low cost, it intended to share the technology with interested entities across the federal government. However, at that time, five other agencies were also developing or seeking funds to develop individual systems with capabilities similar to DOD's system. With multiple agencies developing individual case-management systems, these agencies may be at risk of duplicating efforts and may fail to realize cost savings. In 2012, we recommended that the Deputy Director for Management at OMB, in his capacity as the Chair of the Performance Accountability Council, expand and specify reform-related guidance to help ensure that reform stakeholders identify opportunities for efficiencies and cost savings, such as preventing duplication in the development of electronic case management. OMB concurred with our recommendation. As of March of this year, however, OMB has not expanded and specified reform-related guidance to help ensure that reform stakeholders identify opportunities for cost savings. According to OMB officials, they are exploring whether and how to develop and implement guidance on information technology spending that is minimally disruptive, will not compromise agencies' ability to adjudicate cases, and is implementable within budget constraints. While these specific efforts may be notable steps in clearance reform, they do not meet the intent of our recommendation for OMB to develop overarching guidance that reform stakeholders can use to identify opportunities for cost savings. In conclusion, while the executive branch has made strides in improving the timeliness of the personnel security clearance process, now is the time to focus on making the improvements GAO has recommended. 
Failing to do so increases the risk of damaging unauthorized disclosures of classified information. This concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information on this testimony, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, who may be reached at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include David Moser (Assistant Director), Sara Cradic, Mae Jones, Erin Preston, Leigh Ann Sennette, and Michael Willems. Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013. Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010. Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. DOD Personnel Clearances: Preliminary Observations on DOD’s Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. 
DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long- standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Personnel security clearances allow government and industry personnel to gain access to classified information that, through unauthorized disclosure, can in some cases cause exceptionally grave damage to U.S. national security. In 2012, the Director of National Intelligence reported that more than 4.9 million federal government and contractor employees held a security clearance. Multiple executive-branch agencies are responsible for different phases in the government-wide personnel security clearance process. The Director of National Intelligence, as Security Executive Agent, is to develop uniform and consistent policies and procedures. Executive branch agencies are to determine which positions require access to classified information. OPMs investigators from the Federal Investigative Service conduct the majority of security investigations on personnel holding those positions, and adjudicators from requesting agencies, such as DOD, make the final clearance eligibility determination. Reform efforts and reporting requirements since 2005 have focused on expediting the processing of clearances. This testimony is based on GAO reports and testimonies issued between 2008 and 2013 on DODs personnel security clearance programs and security clearance reform efforts. This testimony addresses three areas for improvement to the government-wide personnel security clearance process: (1) a sound requirements determination process, (2) performance metrics to measure quality, and (3) guidance to enhance efficiencies. 
In July 2012, GAO reported that the Director of National Intelligence, as Security Executive Agent, had not provided agencies clearly defined policy and procedures to consistently determine whether a civilian position required a security clearance. Underdesignating positions can lead to security risks; overdesignating positions can result in significant cost implications. Also, GAO reported that the Department of Homeland Security and Department of Defense (DOD) components' officials were aware of the need to keep the number of security clearances to a minimum but were not always required to conduct periodic reviews and validations of the security clearance needs of existing positions. GAO recommended that, among other things, the Director of National Intelligence, in coordination with the Director of the Office of Personnel Management (OPM) and other executive branch agencies as appropriate, issue clearly defined policies and procedures to follow when determining if federal civilian positions require a security clearance, and also guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. The Director of National Intelligence concurred with GAO's recommendations and identified actions to implement them. Executive branch agency efforts to improve the personnel security process have emphasized timeliness but not quality. In May 2009, GAO reported that with respect to initial top secret clearances adjudicated in July 2008, documentation was incomplete for most OPM investigative reports. GAO independently estimated that 87 percent of about 3,500 investigative reports that DOD adjudicators used to make clearance decisions were missing required documentation. In May 2009, GAO recommended that the Director of OPM direct the Associate Director of OPM's Federal Investigative Services to measure the frequency with which its investigative reports met federal investigative standards in order to improve the completeness--that is, quality--of future investigation documentation. As of March 2013, however, OPM had not implemented this recommendation. Government-wide personnel security reform efforts have not yet focused on potential cost savings, even though the stated mission of these efforts includes improving cost savings. For example, OPM's investigation process--which represents a portion of the security clearance process and has significant costs--has not been studied for process efficiencies or cost savings. In February 2012, GAO reported that OPM received over $1 billion to conduct more than 2 million background investigations in fiscal year 2011. GAO raised concerns that OPM may be simultaneously investing in process streamlining technology while maintaining a less efficient and duplicative paper-based process. In 2012, GAO recommended that, to improve the efficiency of suitability and personnel security clearance background investigation processes that could lead to cost savings, the Director of OPM direct the Associate Director of Federal Investigative Services to take actions to identify process efficiencies that could lead to cost savings within its background investigation process. OPM agreed with this recommendation and GAO is working with OPM to assess any progress it has made in this area. |
As the central human resources agency for the federal government, OPM is tasked with ensuring that the government has an effective civilian workforce. In carrying out its mission, the agency delivers human resources products and services, including policies and procedures for recruiting and hiring, provides health and training benefit programs; and administers the retirement program for federal employees. The agency reports that approximately 2.7 million active federal employees and nearly 2.5 million retired federal employees rely on its services. According to OPM, the retirement program serves current and former federal employees by providing tools and options for retirement planning and retirement compensation. Two defined-benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees are administered by the agency: (1) the Civil Service Retirement System (CSRS), which provides retirement benefits for most federal employees hired before 1984 and (2) the Federal Employees Retirement System (FERS), which covers most employees hired in or after 1984 and provides benefits that include Social Security and a defined contribution system. Retirement processing includes functions such as determining retirement eligibility, inputting data into benefit calculators, and providing customer service. The agency uses over 500 different procedures, laws, and regulations, which are documented on the agency’s internal website, to process retirement applications. For example, the site contains memorandums that outline new procedures for handling special retirement applications, such as those for disability or court orders. Further, OPM’s retirement processing involves the use of over 80 information systems that have approximately 400 interfaces with other internal and external systems. Recognizing the need to improve the efficiency and effectiveness of its retirement claims processing, OPM has undertaken a number of initiatives since 1987 that were aimed at modernizing its paper-intensive processes and antiquated systems. Initial modernization visions called for developing an integrated system and automated processes to provide prompt and complete benefit payments. However, following attempts over more than two decades, the agency has not yet been successful in achieving the modernized retirement system that it envisioned. In early 1987, OPM began a program called the FERS Automated Processing System. However, after 8 years of planning, the agency decided to reevaluate the program, and the Office of Management and Budget requested an independent review of the program, which identified various management weaknesses. The independent review suggested areas for improvement and recommended terminating the program if immediate action was not taken. In mid-1996, OPM terminated the program. In 1997, OPM began planning a second modernization initiative, called the Retirement Systems Modernization (RSM) program. The agency originally intended to structure the program as an acquisition of commercially available hardware and software that would be modified in- house to meet its needs. From 1997 to 2001, OPM developed plans and analyses and began developing business and security requirements for the program. However, in June 2001, it decided to change the direction of the retirement modernization initiative. 
In late 2001, retaining the name RSM, the agency embarked upon its third initiative to modernize the retirement process and examined the possibility of privately sourced technologies and tools. Toward this end, the agency determined that contracting was a viable alternative and, in 2006, awarded three contracts for the automation of retirement processing, the conversion of paper records to electronic files, and consulting services to redesign its retirement operations. In February 2008, OPM renamed the program RetireEZ and deployed an automated retirement processing system. However, by May 2008 the agency determined that the system was not working as expected and suspended system operation. In October 2008, after 5 months of attempting to address quality issues, the agency terminated the contract for the system. In November 2008, OPM began restructuring the program and reported that its efforts to modernize retirement processing would continue. However, after several years of trying to revitalize the program, the agency terminated the retirement system modernization in February 2011. In mid-January 2012, OPM released a plan to undertake targeted, incremental improvements to retirement processing rather than a large-scale modernization. The plan described planned actions in four areas: hiring and training 56 new staff to adjudicate retirement claims and 20 additional staff to support the claims process; establishing higher production standards and identifying potential process improvements; working with other agencies to improve the accuracy and completeness of the data they provide to OPM for use in retirement processing; and improving the agency’s IT by pursuing a long-term data flow strategy, exploring short-term strategies to leverage work performed by other agencies, and reviewing and upgrading systems used by retirement services. Through implementing these actions, OPM said that it aimed to eliminate the agency’s retirement processing backlog and accurately process 90 percent of its cases within 60 days by July 31, 2013. While its Fiscal Year 2013 Summary of Performance and Financial Information indicated that the agency was on track to eliminate the backlog, the agency nonetheless reported that two factors beyond its control prevented achieving the goal. First, a Voluntary Early Retirement Authority and a Voluntary Separation Incentive Program offered by the U.S. Postal Service increased OPM’s retirement processing workload by over 20,000 cases. Second, funding reductions due to sequestration required the agency to curtail overtime work on retirement processing in April 2013. In March 2014, OPM again articulated a retirement claims processing improvement goal as part of its Fiscal Year 2014-2015 Agency Priority Goals strategy. Specifically, the agency reiterated the goal to process 90 percent of retirement cases within 60 days, but extended the date for doing so to July 2014. However, OPM did not achieve this goal, reporting that 77.9 percent of cases were processed within 60 days in July 2014. Further, in October 2014, the most recent month for which the agency has reported, 83.2 percent of cases were processed within 60 days. Our prior reports noted that OPM’s efforts to modernize its retirement system were hindered by weaknesses in key IT management disciplines. For example, in reporting on RSM in February 2005, we noted weaknesses in project management, risk management, and organizational change management. 
Project management is the process for planning and managing all project-related activities, including defining how project components are interrelated. Effective project management allows the performance, cost, and schedule of the overall project to be measured and controlled in comparison to planned objectives. Although OPM had defined major retirement modernization project components, it had not defined the dependencies among them. Specifically, by not identifying critical dependencies among project components, OPM increased the risk that unforeseen delays in one activity could hinder progress in other activities. Risk management entails identifying potential problems before they occur. Risks should be identified as early as possible, analyzed, mitigated, and tracked to closure. OPM officials acknowledged that they did not have a process for identifying and tracking retirement modernization project risks and mitigation strategies on a regular basis but stated that the agency’s project management consultant would assist it in implementing a risk management process. Lacking such a process, OPM did not have a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the retirement modernization project. Organizational change management includes preparing users for the changes to how their work will be performed as a result of a new system implementation. Effective organizational change management includes plans to prepare users for impacts the new system might have on their roles and responsibilities, and a process to manage those changes. However, OPM officials had not developed a detailed plan to help users transition to different job responsibilities. Without having and implementing such a plan, effective implementation of new systems could be hindered by confusion about user roles and responsibilities. We recommended that the Director of OPM ensure that the retirement modernization program office expeditiously establish processes for effective project management, risk management, and organizational change management. In response, the agency initiated steps toward establishing management processes for retirement modernization and demonstrated activities to address our recommendations. We reported again on OPM’s retirement modernization in January 2008, as the agency was about to deploy a new automated retirement processing system. We noted weaknesses in additional key management capabilities, including system testing, cost estimating, and progress reporting. Effective testing is an essential activity of any project that includes system development. At the time of our review, test results showed that the new system had not performed as intended. Although the agency planned to perform additional tests to verify that the system would work as intended, the schedule for conducting these tests became compressed, with several tests to be performed concurrently rather than sequentially. The agency stated that a lack of testing resources and the need for further system development contributed to the delay of planned tests and the need for concurrent testing. The high degree of concurrent testing that OPM planned to meet its February 2008 deployment schedule increased the risk that the agency would not have the resources or time to verify that the planned system worked as expected. Cost estimating is the identification of individual project cost elements, using established methods and valid data to estimate future costs. 
Establishing a reliable cost estimate is important for developing a project budget and having a sound basis for measuring performance, including comparing the actual and planned costs of project activities. Although OPM developed a retirement modernization cost estimate, it was not supported by the documentation that is fundamental to a reliable cost estimate. Without a reliable cost estimate, OPM lacked a sound basis for formulating retirement modernization budgets or for developing the cost baseline that is necessary for measuring and predicting project performance. Earned value management (EVM) is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Fundamental to reliable EVM is the development of a baseline against which variances are calculated. OPM used EVM to measure and report monthly performance of the retirement modernization system. The reported results indicated that the project was progressing almost exactly as planned. However, this view of project performance was not reliable because the baseline on which it was based did not reflect the full scope of the project, had not been validated, and was unstable (i.e., subject to frequent changes). This EVM approach in effect ensured that material variances from planned performance would not be identified and that the state of the project would not be reliably reported. We recommended that the Director of OPM conduct effective system tests prior to system deployment and improve program cost estimation and progress reporting. OPM stated that it concurred with our recommendations and would take steps to address the weaknesses we identified. Nevertheless, OPM deployed a limited initial version of the modernized retirement system in February 2008. After unsuccessful efforts to address system quality issues, the agency suspended system operation, terminated the system contract, and began restructuring the modernization effort. In April 2009, we again reported on OPM’s retirement modernization, noting that the agency still remained far from achieving the modernized retirement processing capabilities that it had planned. Specifically, we noted that significant weaknesses continued to exist in the areas of cost estimating, progress reporting, and testing, while also noting two additional weaknesses related to planning and oversight. Although it concurred with our January 2008 recommendation to develop a revised cost estimate for the retirement modernization effort, OPM had not completed initial steps for developing the new estimate by the time we issued our report in April 2009. We reported that the agency had not yet fully defined the estimate’s purpose, developed an estimating plan, or defined the project’s characteristics. By not completing these steps, OPM increased the risk that it would produce an unreliable estimate and not have a sound basis for measuring project performance and formulating retirement modernization budgets. OPM also concurred with our January 2008 recommendation to establish a basis for effective EVM but had not completed key steps as of the time of our report. Specifically, despite planning to use EVM to report the retirement modernization project’s progress, the agency had not developed a reliable cost estimate and a validated baseline. Engaging in EVM reporting without first taking these fundamental steps could have again rendered the agency’s assessments unreliable. 
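The earned value calculations underlying this discussion are simple comparisons of planned value (the budgeted cost of work scheduled), earned value (the budgeted cost of work actually accomplished), and actual cost. The short sketch below, written in Python with purely hypothetical figures, is a minimal illustration of those standard EVM measures and of why variances are only meaningful against a validated, stable baseline; it is not drawn from OPM's or its contractor's actual reporting tools.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Standard earned value management (EVM) measures.

    planned_value (PV): budgeted cost of work scheduled to date (the baseline)
    earned_value  (EV): budgeted cost of work actually accomplished to date
    actual_cost   (AC): cost actually incurred to date
    """
    return {
        "schedule_variance": earned_value - planned_value,   # negative: behind schedule
        "cost_variance": earned_value - actual_cost,          # negative: over cost
        "schedule_performance_index": earned_value / planned_value,
        "cost_performance_index": earned_value / actual_cost,
    }

# Hypothetical month, in millions of dollars (illustrative figures only):
# $10.0 of work was planned, $8.5 of that work was accomplished, and $9.2 was spent.
print(evm_metrics(planned_value=10.0, earned_value=8.5, actual_cost=9.2))

# If the baseline (planned value) is simply redefined to match what was
# accomplished, the reported schedule variance shrinks to zero even though the
# underlying work and spending have not changed -- which is why an unstable or
# unvalidated baseline makes EVM reporting unreliable.
print(evm_metrics(planned_value=8.5, earned_value=8.5, actual_cost=9.2))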
As previously discussed, effective testing is an essential component of any project that includes developing systems. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. Beginning the test planning process in the early stages of a project life cycle can reduce rework later. Early test planning in coordination with requirements development can provide major benefits. However, at the time of our April 2009 report, the agency had not begun to plan test activities in coordination with developing its requirements for the system it was planning at that time. Consequently, OPM increased the risk that it would again deploy a system that did not satisfy user expectations and meet requirements. Project management principles and effective practices emphasize the importance of having a plan that, among other things, incorporates all the critical areas of system development and is to be used as a means of determining what needs to be done, by whom, and when. Although OPM had developed a variety of informal documents and briefing slides that described retirement modernization activities, the agency did not have a complete plan that described how the program would proceed in the wake of its decision to terminate the system contract. As a result, we concluded that until the agency completed such a plan and used it to guide its efforts, it would not be properly positioned to proceed with its restructured retirement modernization initiative. Office of Management and Budget and GAO guidance call for agencies to ensure effective oversight of IT projects throughout all life-cycle phases. Critical to effective oversight are investment management boards made up of key executives who regularly track the progress of IT projects such as system acquisitions or modernizations. OPM’s Investment Review Board was established to ensure that major investments are on track by reviewing their progress and identifying appropriate actions when investments encounter challenges. Despite meeting regularly and receiving information that indicated problems with the retirement modernization, the board did not ensure that retirement modernization investments were on track, nor did it determine appropriate actions for course correction when needed. For example, from January 2007 to August 2008, the board met and was presented with reports that described problems the program was facing, such as the lack of an integrated master schedule and earned value data that did not reflect the “reality or current status” of the program. However, meeting minutes indicated that no discussion or action was taken to address these problems. According to a member of the board, OPM had not established guidance regarding how the board was to communicate recommendations and needed corrective actions for investments it oversaw. Without a fully functioning oversight body, OPM lacked insight into the retirement modernization and the ability to make needed course corrections that effective boards are intended to provide. Our April 2009 report made new recommendations calling for OPM to address the weaknesses in the retirement modernization project that we identified. Although the agency began taking steps to address them, the recommendations were overtaken by the agency’s decision in February 2011 to terminate the retirement modernization project. 
OPM’s Strategic Plan for Fiscal Years 2014-2018 includes a strategic goal to “Ensure that Federal retirees receive timely, appropriate, transparent, seamless, and accurate retirement benefits.” To achieve this goal, the agency has set forth a strategy to improve the retirement claims processing system by, among other things, investing in information technology solutions, such as the acquisition of a case management system. In addition, the agency’s February 2014 Strategic Information Technology Plan articulated OPM’s vision of “transitioning the retirement program to a paperless system that will truly honor a Federal employee’s service by authorizing accurate retirement benefits on the day they are due, answering customers’ questions in a timely manner, and promoting self-service account maintenance.” The plan also reiterated the agency’s intention to acquire a new case management system. According to OPM’s chief information officer (CIO), as of late-November 2014, the case management initiative is the agency’s primary focus. Toward this end, the strategic plan states that OPM intends to complete documentation of its needs, evaluate available commercial solutions against those needs, and create an acquisition plan for procuring licenses and services this month. The agency then intends to develop a plan to begin implementing the chosen solution in August 2015. OPM received a fiscal year 2014 appropriation of $2.6 million for the case management system and, according to an agency official, is expecting to receive additional funding for the system in fiscal year 2015. Beyond acquisition of the case management system, the strategic IT plan also describes other initiatives that are intended to incrementally improve retirement claims processing. These initiatives include expanding and testing a retirement data repository to include data from agency human resources and payroll systems, data submitted via the online retirement application, and scanned documents; building a capability for the retirement calculator to pull data from the retirement data repository; identifying functional requirements for deployment of a web-based retirement data viewer to additional agencies; and developing requirements for a web-based electronic retirement application. According to the plan, pursuit of these initiatives is dependent on OPM receiving additional funding. While we have not conducted a detailed examination of OPM’s plans for acquiring new technology for retirement processing, it will be important for the agency to leverage all available opportunities to ensure that its investments are carried out in the most effective manner possible, and not repeat mistakes of the past. Our experience has shown that challenges, such as those that have plagued the agency’s past efforts, can successfully be overcome through using a more disciplined approach to IT acquisition management. To help federal agencies, such as OPM, address the acquisition challenges that they face, in 2011, we reported on nine common factors critical to the success of IT acquisitions. Specifically, we reported that department officials from seven agencies had each identified a successful investment acquisition, in that they best achieved their respective cost, schedule, scope, and performance goals. Among these seven IT investments, the officials identified nine factors as critical to the success of three or more of the seven. 
The factors most commonly identified include active engagement of stakeholders, program staff with the necessary knowledge and skills, and senior department and agency executive support for the program. These nine critical success factors are consistent with leading industry practices for IT acquisitions. Table 1 shows how many of the investments reported the nine factors. Officials for all seven selected investments cited active engagement with program stakeholders—individuals or groups (including, in some cases, end users) with an interest in the success of the acquisition—as a critical factor to the success of those investments. Agency officials stated that stakeholders, among other things, reviewed contractor proposals during the procurement process, regularly attended program management office sponsored meetings, were working members of integrated project teams, and were notified of problems and concerns as soon as possible. In addition, officials from two investments noted that actively engaging with stakeholders created transparency and trust, and increased the support from the stakeholders. Additionally, officials for six of the seven selected investments indicated that the knowledge and skills of the program staff were critical to the success of the program. This included knowledge of acquisitions and procurement processes, monitoring of contracts, large-scale organizational transformation, Agile software development concepts, and areas of program management such as earned value management and technical monitoring. Finally, officials for five of the seven selected investments identified having the end users test and validate the system components prior to formal end user acceptance testing for deployment as critical to the success of their program. Similar to this factor, leading guidance recommends testing selected products and product components throughout the program life cycle. Testing of functionality by end users prior to acceptance demonstrates, earlier rather than later in the program life cycle, that the functionality will fulfill its intended use. If problems are found during this testing, programs are typically positioned to make changes that are less costly and disruptive than ones made later in the life cycle would be. Use of the critical success factors described above can serve as a model of best practices for all agencies as they plan and conduct their own IT acquisitions. With specific regard to OPM, application of these acquisition best practices presents opportunities for the agency to undertake a more disciplined and, thus, more effective management approach, as well as increase the likelihood that its planned IT investments to improve retirement processing will meet their cost, schedule, scope, and performance goals. In summary, despite OPM’s longstanding recognition of the need to improve the timeliness and accuracy of retirement processing, the agency has thus far been unsuccessful in several attempts to develop the capabilities it has long sought. For over two decades, the agency’s retirement modernization efforts were plagued by weaknesses in management capabilities that are critical to the success of such endeavors. Applying the information technology best practices we have identified to OPM’s acquisition of a new case management system could help the agency overcome its long history of unsuccessful retirement modernization efforts. Chairman Farenthold, Ranking Member Lynch, and Members of the Subcommittee, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or other members of the Subcommittee may have. If you have any questions concerning this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions include Mark T. Bird, Assistant Director; Glenn Spiegel; Pavitri K. Daitnarayan; and Nancy E. Glover. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The use of IT is integral to OPM's ability to carry out its responsibilities in modernizing federal employee retirement claims processing. Since 1987, the agency has undertaken a number of initiatives that were aimed at modernizing its paper-intensive processes and antiquated systems, but that were unsuccessful. GAO was asked to summarize findings from its previous reports on the challenges that OPM has faced in modernizing its retirement claims processing systems. The testimony also summarizes the agency's current plans to acquire new technology to improve the retirement process and key IT acquisition best practices that could serve as critical factors in the agency's successful accomplishment of its latest modernization efforts. The information in this testimony is primarily based on GAO's previous work at OPM. GAO also reviewed the agency's plans and related information discussing current efforts to improve retirement processing systems. Additionally, the testimony highlights findings from GAO's previous report on critical success factors for major IT acquisitions. Work in support of this testimony was performed during November and December 2014. In a series of reviews, GAO found that the Office of Personnel Management's (OPM) efforts over two decades to modernize its processing of federal employee retirement applications were fraught with information technology (IT) management weaknesses. Specifically, in 2005, GAO made recommendations to address weaknesses in project, risk, and organizational change management. In 2008, as OPM was on the verge of deploying an automated retirement processing system, GAO reported deficiencies in, and made recommendations to address, additional weaknesses in system testing, cost estimating, and progress reporting. In 2009, GAO reported that OPM continued to have deficiencies in its cost estimating, progress reporting, and testing practices and made recommendations to address these and other weaknesses in the planning and oversight of the agency's modernization effort. OPM began to address these recommendations; however, in February 2011, it terminated the modernization effort. OPM's Strategic Plan for Fiscal Years 2014-2018 includes a goal to deliver retirement benefits to employees accurately, seamlessly, and on time. To achieve this goal, the agency has plans to acquire a new case management system and, ultimately, to transition to a paperless system that will authorize accurate retirement benefits on the day they are due. In addition, the agency plans other initiatives that are intended to incrementally improve retirement claims processing. 
GAO has previously reported that its experience at other agencies has demonstrated that successfully overcoming challenges, such as those that have plagued OPM's past efforts, can best be achieved when critical success factors are applied. Nine common factors critical to the success of IT acquisitions are: active engagement of senior officials with stakeholders; qualified and experienced program staff; support of senior department and agency executives; involvement of end users and stakeholders in the development of requirements; participation of end users in testing system functionality prior to formal end user acceptance testing; consistency and stability of government and contractor staff; prioritization of requirements by program staff; regular communication maintained between program officials and the prime contractor; and sufficient funding. These critical success factors can serve as a model of best practices that OPM could apply to enhance the likelihood that the incremental IT investments the agency now plans, including the acquisition of a new case management system, will be successfully achieved. GAO is not making new recommendations at this time. GAO has previously made numerous recommendations to address IT management challenges that OPM has faced in carrying out its retirement modernization efforts. Fully addressing these challenges remains key to the success of OPM's efforts. |
The US-VISIT expenditure plan partially satisfies 8 of the 11 legislative conditions required of DHS. For example, the plan partially satisfies the legislative condition that it contain a listing of all open GAO and DHS Office of Inspector General recommendations: while the plan did include a listing and status of our recommendations, it did not provide milestones for addressing any of the recommendations, as required by the act. The plan also partially satisfies the condition that it include a certification by the DHS Chief Procurement Officer that the program was reviewed and approved in accordance with the department’s investment management process and that this process fulfilled all capital planning and investment control requirements and reviews established by the Office of Management and Budget (OMB): while the plan did include such a certification, it was based on information that pertains to the fiscal year 2007 expenditure plan and the fiscal year 2009 budget submission, rather than on the fiscal year 2008 expenditure plan, as required by the act. The plan partially satisfies the condition that it include an architectural compliance certification by the Chief Information Officer that the system architecture of the program is sufficiently aligned with the information system enterprise architecture of DHS: while the plan did include such a certification, the basis for the certification was an assessment against the 2007 DHS enterprise architecture, which is a version that we recently reported to be missing important US-VISIT architectural content. Finally, the plan partially satisfies the condition that it provide a detailed accounting of operations and maintenance, contractor services, and program management costs: while the plan did provide an accounting of operations and maintenance and program management costs, it did not separately identify the program’s contractor costs, as required by the act. The plan does not satisfy the remaining three conditions that apply to DHS. Specifically: The expenditure plan did not explicitly define how funds are to be obligated to meet future program commitments, including linking the planned expenditure of funds to milestone-based delivery of specific capabilities and services. While the plan linked funding to four broad core capability areas and associated projects, it did not link this planned use of funds to milestones, and it did not consistently decompose projects into specific mission capabilities, services, performance levels, benefits and outcomes, or program management capabilities. The expenditure plan did not include a certification by the DHS Chief Human Capital Officer that the program’s human capital needs are being strategically and proactively managed and that the program has sufficient human capital capacity to execute the expenditure plan. While the plan contained a certification, it only addressed that the human capital plan reviewed by the Chief Human Capital Officer contained specific initiatives to address the hiring, development, and retention of program employees and that a strategy existed to develop indicators to measure the progress and results of these initiatives. It did not address the implementation of this plan or whether the current human capital capabilities were sufficient to execute the expenditure plan. The expenditure plan did not include a complete schedule for the full implementation of a biometric exit program or certification that a biometric exit program is not possible within 5 years. 
While the plan contains a very high-level schedule that identifies five broadly defined tasks and high-level milestones, the schedule did not include, among other things, decomposition of the program into a work breakdown structure or sequencing, integrating, or resourcing each work element in the work breakdown structure. We are making five observations about US-VISIT relative to its proposed exit solution, its management of program risks, and its use of earned value management. These observations are summarized here. Reliability of cost estimates for air and sea exit alternatives is not clear. In developing its air and sea exit Notice of Proposed Rule Making (NPRM), DHS is required to prepare a written assessment of the costs, benefits, and other effects of its proposal and a reasonable number of alternatives and to adopt the least costly, most cost-effective, or least burdensome among them. To accomplish this, it is important that DHS have reliable cost estimates for its proposed and alternative solutions. However, the reliability of the estimates that DHS developed is not clear because (1) DHS documents characterize the estimates as being, by definition, rough and imprecise, but DHS officials responsible for developing the estimates stated that this characterization is not accurate; (2) our analysis of the estimates’ satisfaction of cost estimating best practices shows that while DHS satisfied some key practices, it did not fully satisfy others or the documentation provided was not sufficient for us to determine whether still other practices were met; and (3) data on certain variables pertaining to airline costs were not available for inclusion in the estimates, and airlines report that these costs were understated in the estimates. DHS reports that the proposed air and sea exit solution provides less security and privacy than other alternatives. Adequate security and privacy controls are needed to assure that personally identifiable information is secured against unauthorized access, use, disclosure, or retention. Such controls are especially needed for government agencies, where maintaining public trust is essential. In the case of US-VISIT, one of its stated goals is to protect the security and privacy of U.S. citizens and visitors. DHS’s proposed air and sea exit solution would require air and vessel carriers to implement and manage the collection of biometric data at the location(s) of their choice. However, the NPRM states that having carriers collect the biometric information is less secure than alternatives where DHS collects the information, regardless of the information collection point. Similarly, the NPRM states that the degree of confidence in compliance with privacy requirements is lower when DHS does not maintain full custody of personally identifiable information. Public comments on the proposed air and sea exit solution raise a range of additional concerns. Ninety-one entities—including the airline, trade, and travel industries, as well as federal, state, and foreign governments—commented on the air and sea exit proposal. The comments that were provided raised a number of concerns and questions about the proposed solution. 
For example, comments stated that (1) technical requirements the carriers must meet in delivering their respective parts of the proposed solution had yet to be provided; (2) the proposed solution conflicts with air and vessel carrier passenger processing improvements; (3) the proposed solution is not fully integrated with other border screening programs involving air carriers; and (4) stakeholders were not involved in this rulemaking process as they had been in previous rulemaking efforts. Risk management database shows that some program risks have not been effectively managed. Proactively managing program risks is a key acquisition management control and, if defined and implemented properly, it can increase the chances of programs delivering promised capabilities and benefits on time and within budget. To its credit, the US-VISIT program office has defined a risk management plan and related process that is consistent with relevant guidance. However, its own risk database shows that not all risks have been proactively mitigated. As we have previously reported, not proactively mitigating risks increases the chances that risks become actual cost, schedule, and performance problems. Significance of a task order’s schedule variances has been minimized by frequent rebaselining. According to the GAO Cost Assessment Guide, rebaselining should occur rarely, as infrequently as once in the life of a program or project. Schedule rebaselining should occur only when a schedule variance is significant enough to limit its utility as a predictor of future schedule performance. For task order 7, the prime contractor’s largest task order, the program office has rebaselined its schedule twice in the last 2 years—first in October 2006 and again in October 2007. This rebaselining has resulted in the task order showing a $3.5 million variance, rather than the $7.2 million variance that would exist without either of the rebaselinings. DHS has not adequately met the conditions associated with its legislatively mandated fiscal year 2008 US-VISIT expenditure plan. The plan does not fully satisfy any of the conditions that apply to DHS, either because it does not address key aspects of the condition or because what it does address is not adequately supported or is otherwise not reflective of known program weaknesses. Given that the legislative conditions are intended to promote the delivery of promised system capabilities and value, on time and within budget, and to provide Congress with an oversight and accountability tool, these expenditure plan limitations are significant. Beyond the expenditure plan, other program planning and execution limitations and weaknesses also confront DHS in its quest to deliver US-VISIT capabilities and value in a timely and cost-effective manner. Most notably, DHS has proposed a solution for a long-awaited exit capability, but it is not clear if the cost estimates used to justify it are sufficiently reliable to do so. Also, DHS has reported that the proposed solution provides less security and privacy than other alternatives analyzed, and the proposed solution is being challenged by those who would be responsible for implementing it. Further, DHS’s ability to measure program performance and progress, and thus be positioned to address cost and schedule shortfalls in a timely manner, is hampered by weaknesses in the prime contractor’s implementation of earned value management. Each of these program planning and execution limitations and weaknesses introduces risk to the program. 
In addition, DHS is not effectively managing the program’s risks, as evidenced by the program office’s risk database showing that known risks are being allowed to go years without risk mitigation and contingency plans. Overall, while DHS has taken steps to implement a significant percentage of our prior recommendations aimed at improving management of US-VISIT, additional management improvements are needed to effectively define, justify, and deliver a system solution that meets program goals, reflects stakeholder input, minimizes exposure to risk, and provides Congress with the means by which to oversee program execution. Until these steps are taken, US-VISIT program performance, transparency, and accountability will suffer. To assist DHS in planning and executing US-VISIT, we recommend that the Secretary of Homeland Security direct the department’s Investment Review Board to review the reasons for the plan’s limitations and address the challenges and weaknesses raised by our observations about the proposed air and sea exit solution, risk management, and the implementation of earned value management, and to report the results to Congress. In written comments on a draft of this report, signed by the Director, Departmental Audit Liaison Office, and reprinted in appendix II, DHS concurred with our recommendations and stated that the department’s Investment Review Board would meet for the purpose of reviewing US- VISIT and addressing our findings and recommendations. Moreover, DHS commented that our report has prompted the department to modify the fiscal year 2009 US-VISIT expenditure plan to provide greater visibility into operations and maintenance and program management expenditures, and to include milestones and performance targets for planned accomplishments, mitigation plans, milestones for closing open recommendations, and results relative to prior year commitments. DHS also commented that after it received our report for comment, it issued an interim policy for managing investments, such as US-VISIT, and thus it disagreed with one of our findings relative to one of the legislative conditions—namely that DHS’s investment management process is not sufficiently mature. However, DHS did not provide the policy itself, thus we were not able to determine whether it addressed our concerns. Further, the memo states that the policy is draft and that implementation of the policy, including training, still needs to occur. Thus, while we have modified our briefing document to reflect the policy’s issuance, we have not modified our conclusion that DHS’s investment management process is not sufficiently mature. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, Secretary of State, and the Director of OMB. Copies of this report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made significant contributions to this report are listed in appendix III. 
Briefing for staff members of the Subcommittees on Homeland Security, Senate and House Committees on Appropriations. *This briefing has been amended on page 44 to address DHS comments. U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) is a Department of Homeland Security (DHS) program for collecting, maintaining, and sharing information on foreign nationals who enter and exit the United States. The goals of US-VISIT are to: enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. Pub. L. No. 110-161 (Dec. 26, 2007). Since fiscal year 2002, $2.22 billion has been appropriated for US-VISIT. This is the seventh legislatively mandated US-VISIT expenditure plan. Among other things, the act requires that the plan include: a certification by the DHS Chief Procurement Officer (CPO) that the program has been reviewed and approved in accordance with the department’s investment management process, and that this process fulfilled all capital planning and investment control requirements and reviews established by the Office of Management and Budget (OMB), including Circular A-11, part 7; a certification by the DHS Chief Information Officer (CIO) that an independent verification and validation agent is currently under contract for the project; a certification by the DHS CIO that the system architecture of the program is sufficiently aligned with the department’s information systems enterprise architecture to minimize future rework, including a description of all aspects of the architectures that were and were not assessed in making the alignment determination, the date of the alignment determination, and any known areas of misalignment, along with the associated risks and corrective actions to address any such areas; a certification by the DHS CPO that the plans for the program comply with federal acquisition rules, requirements, guidelines, and practices, and a description of the actions being taken to address any areas of noncompliance, the risks associated with them, along with any plans for addressing these risks and the status of their implementation; a certification by the DHS CIO that the program has a risk management process that regularly identifies, evaluates, mitigates, and monitors risks throughout the system life cycle, and communicates high-risk conditions to agency and DHS investment decision makers, as well as a listing of all the program’s high risks, and a status of efforts to address them; a certification by the DHS Chief Human Capital Officer (CHCO) that the human capital needs of the program are being strategically and proactively managed, and that current human capital capabilities are sufficient to execute the plans discussed in the report; a complete schedule for the full implementation of a biometric exit program or a certification that such a program is not possible within 5 years; and a detailed accounting of operations and maintenance, contractor services, and program management costs associated with the program. The act also requires that we review this plan. DHS submitted its fiscal year 2008 US-VISIT expenditure plan to the House and Senate Appropriations Subcommittees on Homeland Security on June 12, 2008. As agreed, our objectives were to (1) determine whether the plan satisfies the legislative conditions and (2) provide observations about the plan and management of the program. As discussed in the scope and methodology section of this briefing (attachment 1), we sought clarification from staff with the House and Senate Appropriations Subcommittees on Homeland Security on this condition. As a result, the wording of this condition has been modified slightly from that in the act. 
To accomplish the first objective, we compared the information provided in the plan with each aspect of the eleven conditions. Further, for those conditions requiring a DHS certification, we analyzed documentation, interviewed cognizant officials, and leveraged our recent work to determine the basis for each certification. We then determined whether the plan satisfies, partially satisfies, or does not satisfy the conditions based on the extent to which (1) the plan addresses all aspects of the applicable condition, as specified in the act or (2) the applicable certification letter contained in the plan (a) addresses all aspects of each condition, as specified in the act, (b) is sufficiently supported by documented and verifiable analysis, (c) contains significant qualifications, and (d) is otherwise consistent with our related findings. To accomplish the second objective, we analyzed DHS’s Notice of Proposed Rule Making (NPRM) for Air/Sea Exit, the Regulatory Impact Analysis, Privacy Impact Assessment, and US-VISIT’s Exit Pilot Report. We also compared available information on the US-VISIT prime contractor’s implementation of earned value management and the program office’s implementation of risk management to relevant guidance. (See attachment 1 for more detailed information on our scope and methodology.) The reliability of DHS Air and Sea Exit cost estimates is not clear for various reasons, including program officials’ statements that contradict how the department characterized the estimates in the public documents and supporting documentation about the estimates’ derivation that we have yet to receive. The proposed Air and Sea Exit solution, according to DHS, would provide less security and privacy than other alternatives, because it relies on private carriers to collect, store, and transmit passenger data. Comments on the proposed Air and Sea Exit solution, provided by airlines and others, raised a number of additional stakeholder concerns, such as conflicts with air carrier business models and impact on trade and travel. The program office’s risk database shows that risk mitigation and contingency plans have not been developed and implemented in a timely fashion for a number of risks, which increases the chances that known risks will become actual problems. Significant schedule variances are being minimized by frequent redefinition of baselines, thus limiting the use of earned value management as a performance management tool. We provided a draft of this briefing to DHS officials, including the Director of US-VISIT. While these officials did not state whether they agreed or disagreed with our findings, conclusions, or recommendations, they did provide a range of technical comments, which we have incorporated into the briefing, as appropriate. They also sought clarification on our scope and methodology, which we have also incorporated into the briefing. The strategic goals of US-VISIT are to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. 
It is to accomplish these things by: collecting, maintaining, and sharing biometric and other information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their admission; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination within the immigration and border management community. As defined in expenditure plans prior to fiscal year 2006, US-VISIT biometric entry and exit capabilities were to be delivered in four increments. Increments 1 through 3 were to be interim, or temporary, solutions that would focus on building interfaces among existing (legacy) systems; enhancing the capabilities of these systems; and deploying these systems to air, sea, and land ports of entry (POEs). Increment 4 was to be a series of yet-to-be-defined releases, or mission capability enhancements, that were to deliver long-term strategic capabilities for meeting program goals. Increments 1 through 3 have produced an entry capability that began operating at over 300 POEs by 2006. (See the system diagram on the next slide for an overview of this entry capability; attachment 3 provides further details on each of the systems.) For details on the processes underlying each increment and the systems supplying information to US-VISIT, see attachment 3. Increment 4 has continued to evolve. The fiscal year 2006 expenditure plan described increment 4 as the combination of two projects: (1) transition to 10 fingerprints in the Automated Biometric Identification System (IDENT) and (2) interoperability between IDENT and the Federal Bureau of Investigation's (FBI) Integrated Automated Fingerprint Identification System (IAFIS). The fiscal year 2007 expenditure plan combines these two projects with a third project called Enumeration (developing a single identifier for each individual) into a larger project referred to as Unique Identity. During fiscal year 2007, the following Unique Identity efforts were completed or initiated. The Interim Data Sharing Model (iDSM) was deployed. It allows sharing of certain biometric information between US-VISIT and the FBI, as well as with the Office of Personnel Management and police departments in Houston, Dallas, and Boston. The next phase of IDENT/IAFIS interoperability (referred to as Initial Operating Capability) is to be deployed in October 2008. The 10-print scanners were deployed to 10 air locations for pilot testing. Deployment of the scanners to 292 POEs is to begin during fiscal year 2008 and is to be completed by December 2008. Also in fiscal year 2007, steps were taken relative to a biometric exit solution. Exit pilot projects were halted at 12 airports and 2 seaports in May 2007. Exit radio frequency identification proof-of-concept projects were discontinued at selected land ports in November 2006. Planning for an air and sea exit solution based on lessons learned from the pilot projects was begun, to include studying the costs, impacts, and privacy concerns of alternative solutions.
The fiscal year 2008 expenditure plan provides additional information on these and other projects in the context of the program's four core mission capabilities: (1) providing identity management and screening services, (2) developing and enhancing biometric identity collection and data sharing, (3) providing information technology support for mission services, and (4) enhancing program management. For example, under developing and enhancing biometric capabilities, the plan allocates $228 million for further development and deployment of Unique Identity and $13 million for development of an Air and Sea Exit solution. (See table on next slide.) Radio frequency technology relies on a proximity card and a card reader. Radio frequency devices read the information contained on the card when the card is passed near the device. The information can contain personal information of the cardholder. US-VISIT projects are subject to the program's Enterprise Life Cycle Methodology (ELCM). Within ELCM is a component methodology for managing software-based system projects, such as Unique Identity and Air/Sea Exit, known as the US-VISIT Delivery Methodology (UDM). According to version 4.3 of UDM (April 2007), it applies to both new development and operational projects; specifies the documentation and reviews that should take place within each of the methodology's six phases: plan, analyze, design, build, test, and deploy; and allows for tailoring to meet the needs and requirements of individual projects, in which specific activities, deliverables, and milestone reviews that are appropriate for the scope, risk, and context of the project can be set for each phase of the project. The chart on the following page shows the status of each US-VISIT project within the life cycle methodology as of August 2008. Twenty task orders have been issued against the program's prime integration contract, and their total value is about $501 million. Eleven of these task orders are ongoing, and their total value is about $331 million. The table on the following slides provides additional information about the ongoing task orders, organized by the four core mission capabilities and projects. An indefinite delivery/indefinite quantity contract provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period of time. The government schedules deliveries or performance by placing orders with the contractor. Accenture's partners in this contract include, among others, Raytheon Company, the Titan Corporation, and SRA International, Inc. Total value is the reported budget at completion as of May 2008. [Table excerpt, ongoing task orders (approximate value, dollars in millions): under develop and enhance biometric identity collection and data sharing capabilities, biometric solutions delivery for Unique Identity, $82.5, covering planning, development, and implementation of Unique Identity (IDENT/IAFIS integration and IDENT 10-print); operations and maintenance IT services, $27.7, covering management of operations and maintenance activities for deployed capabilities and information technology services for implemented functionality, including security upgrades and system changes.] DHS issued a draft Investment Review Process guide in March 2006 that includes milestone decision points (MDP) linking five life cycle phases: project initiation (MDP1), concept and technology development (MDP2), capability development and demonstration (MDP3), production and deployment (MDP4), and operations and support (MDP5). Under the draft guide, a program sends an investment review request prior to the initial milestone date.
The program is then to be reviewed by the DHS Enterprise Architecture Board (EAB), Joint Requirements Council, and/or Investment Review Board, depending on such factors as the program's cost and significance. According to the official from DHS's Program Analysis and Evaluation Directorate who is responsible for overseeing program adherence to the investment control process, the draft guide is being used for all DHS programs, including US-VISIT. This official also stated that milestone reviews can be performed concurrently with an expenditure plan review. In December 2006, the DHS Investment Review Board held an MDP1 review of US-VISIT. Since then, the EAB held an MDP2 review in April 2007, and the EAB is currently performing an MDP3 review. Neither the Joint Requirements Council nor the Investment Review Board has reviewed US-VISIT since MDP1. On April 24, 2008, DHS published its NPRM for establishing a biometric exit capability at commercial air and sea ports. At the same time, it published an Air/Sea Biometric Exit Regulatory Impact Analysis providing information on the projected costs and benefits of several alternatives discussed in the proposed rule. Key aspects of the NPRM are summarized here. The proposed rule would require aliens who are subject to US-VISIT biometric requirements on entry at POEs to provide biometric information to commercial carriers before departing air and sea POEs. The rule also proposed that the biometric information collected be submitted to DHS within 24 hours of securing the airplane doors for air travel or departing the seaport. According to the NPRM, these requirements would not apply to persons departing on certain private or small carriers. The proposed rule discussed nine exit alternatives for collecting biometrics: (1) at the check-in counter by air and vessel carriers, (2) at the check-in counter by DHS, (3) at the security checkpoint by DHS, (4) at the departure gate by air and vessel carriers, (5) at the departure gate by DHS, (6) at the check-in counter by air and vessel carriers with verification at the departure gate, (7) at the check-in counter by DHS with verification at the departure gate, (8) at the security checkpoint by DHS with verification at the departure gate, and (9) within the sterile area (after passing through the Transportation Security Administration checkpoint) by DHS. The following five alternatives were subject to further analysis of costs and benefits. Proposed Alternative: Air and vessel carriers implement and manage the collection of biometric data at location(s) of their choice. Alternative 1: Air and vessel carriers implement and manage the collection of biometric data at their check-in counter. Alternative 2: DHS implements and manages the collection of biometric data at the TSA security checkpoint. Alternative 3: DHS implements and manages the collection of biometric data at location(s) of the air or vessel carrier's choice. Alternative 4: DHS implements and manages the collection of biometric data at kiosks placed in various locations. This solution would not be applicable to vessel carriers because there are no TSA checkpoints at seaports. 26 Objective 1: Legislative Conditions Of the 12 legislative conditions pertaining to DHS's fiscal year 2008 expenditure plan for US-VISIT, the plan partially satisfies 8 and does not satisfy 3 of them. Our review has satisfied the remaining condition.
Given that the act's conditions are designed to help ensure that the program is effectively managed and that congressional oversight of the program can occur, a partially satisfied or unsatisfied condition should be viewed as introducing risk to the program. Each of the conditions is addressed in detail on the following slides. 27 Objective 1: Legislative Conditions Condition 1 Condition 1. The plan partially satisfies the legislative condition to include a detailed accounting of the program's progress to date relative to system capabilities or services, system performance levels, mission benefits and outcomes, milestones, cost targets, and program management capabilities. As we previously reported, describing how well DHS is progressing relative to US-VISIT program commitments (e.g., cost, schedule, capabilities, and benefits commitments) that it has made in previous expenditure plans is essential to permitting meaningful program oversight and promoting accountability for results. GAO, Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning, GAO-03-563 (Washington, D.C.: June 9, 2003) and Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program, GAO-05-202 (Washington, D.C.: Feb. 23, 2005). 28 Objective 1: Legislative Conditions Condition 1 The fiscal year 2008 plan describes progress relative to some system capabilities and services, such as work on an enhanced Candidate Verification Tool. However, the information presented is not always sufficient to measure progress. For example, the fiscal year 2007 plan stated that US-VISIT would begin 10-print pilot deployment in late 2007 to ten air locations, but the fiscal year 2008 plan only states that DHS selected a number of pilot locations and evaluated the performance and operational impacts at those locations. According to program officials, although the plan does not state the number of locations for the pilot, it was in fact deployed to ten locations, and this information has been previously provided to the Congress. The fiscal year 2008 plan describes progress in achieving some, but not all, system performance levels. For example, the fiscal year 2007 plan cited a target of 1,850 biometric watch list hits for travelers processed at POEs, and the latest plan reports that the number of these hits was 11,838. However, many of the target measures included in the fiscal year 2007 plan are not described in the current plan. For example, the fiscal year 2007 plan cited a target of having biometric information on file for 49 percent of foreign nationals prior to their entering the United States (also referred to as the "Unique Identity baseline"). However, this measure is not discussed in the fiscal year 2008 plan. 29 Objective 1: Legislative Conditions Condition 1 The fiscal year 2007 plan cited a target of 26 days for resolving requests by visitors to correct their baseline data. However, this measure is not discussed in the fiscal year 2008 plan. The fiscal year 2007 plan stated that US-VISIT would establish a baseline of the number of individuals who were biometrically verified based on 10-print enrollment. However, this baseline measure is not discussed in the fiscal year 2008 plan. According to program officials, although these measures are not mentioned in the expenditure plan, performance data relative to each is in fact collected and monitored. The fiscal year 2008 plan identifies estimated costs (i.e., funding levels) for each of the four broad capability areas.
In some cases, the broad areas are decomposed and meaningful detail is provided to understand how the funds will be used. However, in many cases, capabilities and costs are not decomposed to a level that permits such understanding and oversight. For example, the fiscal year 2008 plan states that $7.9 million will be used for the Biometric Support Center. However, allocations for specific support center capabilities and services are not provided. 30 Objective 1: Legislative Conditions Condition 1 The fiscal year 2008 plan states that $72.6 million will be used to update DHS border and process technology in support of 10-print and IDENT/IAFIS interoperability. However, the funds are not allocated between the two activities or to major tasks, products, and services under each activity, such as the completion of initial operating capability for IDENT/IAFIS integration. The fiscal year 2008 plan states that $6.4 million will be used for data integrity efforts. However, the funds are not allocated among specific data integrity activities described in the plan, such as upgrading the integrity of the system and data to meet stakeholder needs. Furthermore, the fiscal year 2007 and 2008 plans use different terminology to describe categories of spending under the broad capability areas. For example, the fiscal year 2008 plan shows $5.0 million in fiscal year 2007 funds allocated to "Information Technology" under the "Comprehensive Biometric Exit Solution—Air and Sea" project, but the 2007 plan does not identify an "Information Technology" component for this project; rather, it shows $5.0 million being allocated to "Planning and Design." 31 Objective 1: Legislative Conditions Condition 1 The fiscal year 2008 plan shows $1.4 million in fiscal year 2007 funds allocated to "Law Enforcement and Intelligence" under Biometric Support Services, but the fiscal year 2007 plan does not identify a Law Enforcement and Intelligence component; instead, it shows $1.4 million being allocated to "Management." Objective 1: Legislative Conditions The plan cites the following benefits relative to the Comprehensive Biometric Exit Solution – Air and Sea project: "Provides greater accuracy in recording identity of persons leaving the country, enables improved assessment by DHS of travelers' compliance with immigration laws, and enables DHS to more easily match records across multiple identities or travel documents." However, since these benefits/outcomes are not linked to a baseline measure, and the amount of the expected improvement is not specified, the proposed benefits are not meaningful. The plan cites benefits from sharing biometric data globally, including enabling countries to redirect the course of an immigration claim or enforcement activity, improving the accuracy of records through vetting and validation, identifying patterns of legal and illegal migration, achieving efficiency savings, establishing the identities of individuals who sought benefits among partner agencies and governments, and helping to prevent fraud through identity verification of individuals seeking benefits. However, it does not link any of these benefits to specific baseline measures. 33 Objective 1: Legislative Conditions Condition 1 The fiscal year 2008 plan cites high-level milestones that are traceable to the prior plan. However, neither of the plans provides enough specificity to measure progress.
For example: The fiscal year 2007 plan stated that the first phase of IDENT/IAFIS interoperability was implemented via the iDSM prototype in 2006. It also identified high-level activities to design, build, and deploy the initial operating capability for IDENT/IAFIS interoperability, such as advancing the data sharing architecture and enabling the assignment of a unique number to each individual. While the fiscal year 2008 plan states that some of these efforts were completed, neither plan provided specific milestones to measure progress. The fiscal year 2007 plan stated that efforts to deploy a biometric exit solution for air and sea environments would be launched. While the fiscal year 2008 plan states that US-VISIT developed a Comprehensive Biometric Exit strategy and began planning to address the air and sea environments, neither plan provided specific milestones to measure progress. 34 Objective 1: Legislative Conditions Condition 1 The fiscal year 2008 plan discusses several initiatives to enhance and leverage key program management capabilities, such as continuing efforts to improve the program's use of earned value management, the maturity of software acquisition/development processes, and the quality of internal governance. In some cases, the plan cites program management efforts that can be traced to the fiscal year 2007 plan. For example, the fiscal year 2007 plan stated that an assessment of the prime contractor's earned value management system was to be conducted during fiscal year 2007. According to the fiscal year 2008 plan, an assessment was completed in June 2007 that identified a number of weaknesses, a plan of action and milestones was developed to address the weaknesses, and this plan is to be executed in 2008. (These weaknesses are discussed in detail later in this briefing.) However, the fiscal year 2008 plan also identifies program management capability improvements that are not traceable to prior plan commitments. For example, the fiscal year 2008 plan states that a Planning, Programming, Budgeting, and Execution process was developed during fiscal year 2007. However, this effort was not mentioned in the prior plan as a commitment and thus cannot serve as a basis for measuring progress. 35 Objective 1: Legislative Conditions Condition 2 Condition 2. The plan does not satisfy the condition that it include an explicit plan of action defining how all funds are to be obligated to meet future program commitments, with the planned expenditure of funds linked to the milestone-based delivery of specific capabilities, services, performance levels, mission benefits and outcomes, and program management capabilities. As we have previously reported, the purpose of the expenditure plan is to provide Congress with sufficient information to exercise effective oversight of US-VISIT and to hold DHS accountable for results. As such, the plan should specify planned system capabilities, schedules, costs, and expected benefits for each of its projects and for its program management activities. While the fiscal year 2008 plan links funding to four broad core capability areas and associated projects, it does not link this planned use of funds to milestones and it does not consistently decompose projects into specific mission capabilities, services, performance levels, benefits and outcomes, or program management capabilities. GAO, Homeland Security: U.S. Visitor and Immigrant Status Program's Long-standing Lack of Strategic Direction and Management Controls Needs to Be Addressed, GAO-07-1065 (Washington, D.C.: Aug. 31, 2007).
36 Objective 1: Legislative Conditions Condition 2 To illustrate, the expenditure plan allocates funding among the program's four broad core capability areas. For one of these capability areas, the plan identifies major projects, such as Unique Identity and Comprehensive Biometric Exit Solution—Air and Sea. These projects are then decomposed into general functional activities (e.g., project integration and analysis, and acquisition and procurement), which are then associated with fiscal year 2007 and 2008 funding. However, these functional activities do not constitute specific capabilities, services, performance levels, or benefits. Rather, they represent functions to be performed that presumably will produce such capabilities, services, performance levels, or benefits. Similarly, the remaining three core capability areas are also divided into general functional activities (e.g., biometric support, data integrity, program staffing, data center operations) that do not constitute capabilities, services, performance levels, or benefits. Moreover, the funding associated with the broad core capability areas, projects, or functional activities is not linked to any milestones. For example, the plan states that $72.6 million of fiscal year 2008 funds will be used to update DHS border and process technology for 10-print transition and IDENT/IAFIS, but does not state what updates will be accomplished or by when. The plan also states that $45.1 million will be used to operate and maintain applications, but does not state what maintenance activities will be performed and when they will be performed. 37 Objective 1: Legislative Conditions Condition 3 Condition 3. The plan, including related program documentation and program officials' statements, partially satisfies the condition that it include a listing of all open GAO and OIG recommendations related to the program and the status of DHS actions to address them, including milestones. We reported in August 2007 that US-VISIT's progress in implementing our prior recommendations had been slow, as indicated by 4-year-old recommendations that had yet to be fully implemented. Given that our recommendations focus on fundamental limitations in the management of US-VISIT, they are integral to DHS's ability to execute its expenditure plans, and thus should be addressed in the plans. Since 2003, GAO has made 44 recommendations to the US-VISIT program. The fiscal year 2008 plan provides a listing and status of our recommendations. However, the plan does not provide milestones for addressing these recommendations. The table on the next slide summarizes our analysis of the status of our recommendations. GAO-07-1065. 39 Objective 1: Legislative Conditions Condition 4 Condition 4. The plan partially satisfies the condition that it include a certification by the DHS CPO that (1) the program has been reviewed and approved in accordance with the department's investment management process and (2) the process fulfills all capital planning and investment control requirements and reviews established by the Office of Management and Budget (OMB), including Circular A-11, part 7.
As we have previously reported, it is important for organizations such as DHS, which rely heavily on IT to support strategic outcomes and meet mission needs, to adopt and employ an effective institutional approach to IT investment management. Such an approach provides agency management with the information needed to ensure that IT investments cost-effectively meet strategic mission needs and that projects are meeting cost, schedule, and performance expectations. We have also reported that the capital investment control requirements and reviews outlined in OMB Circular A-11, part 7, are important because they are intended to minimize a program's exposure to risk, permit performance measurement and oversight, and promote accountability. Office of Management and Budget Circular A-11, Part 7, establishes policy for planning, budgeting, acquisition, and management of federal capital assets. GAO, Information Technology: DHS Needs to Fully Define and Implement Policies and Procedures for Effectively Managing Investments, GAO-07-424 (Washington, D.C.: April 27, 2007). GAO-07-1065. 40 Objective 1: Legislative Conditions Condition 4 On March 14, 2008, the DHS CPO certified that (1) US-VISIT was reviewed and approved in accordance with the department's investment management process and (2) this process fulfills all capital planning and investment control requirements and reviews established by OMB, including Circular A-11, part 7. In support of certifying the first aspect of the condition, the CPO stated that OMB scored US-VISIT's fiscal year 2009 budget submission (i.e., budget exhibit 300) a 35 out of a possible 50 in November 2007. According to OMB, this score means that the submission has "very few points . . . but still needs strengthening." In addition, the CPO stated that the program had been reviewed by the DHS Investment Review Board in December 2006, and that the board had issued a decision memorandum in April 2007 stating that the fiscal year 2007 expenditure plan met, among other things, OMB capital planning and investment review requirements and satisfied that aspect of the DHS investment management process that requires investments to comply with DHS's enterprise architecture. However, this support is not sufficient to fully satisfy the first aspect of the legislative condition because this condition applies to the fiscal year 2008 expenditure plan, and the support that the CPO cites does not relate to either the fiscal year 2008 budget submission or to the fiscal year 2008 expenditure plan. Rather, it pertains to the following year's budget submission and the prior year's plan. 41 Objective 1: Legislative Conditions Condition 4 In support of certifying the second aspect of the condition, the CPO again cites the fiscal year 2009 budget submission, which DHS documents show underwent a series of reviews and revisions before being sent to OMB, raising the department's scoring of the submission from a 29 to a 37. According to OMB, a score of 29 means, among other things, that "much work remains to solidify and quantify" the submission. In certifying to this aspect, the CPO also stated that his office will continue to oversee US-VISIT through the department's emerging investment management process. However, the cited support is not sufficient to satisfy the legislative condition for two reasons. First, as previously noted, the cited budget submission is for fiscal year 2009 rather than fiscal year 2008. GAO-07-424.
GAO, Information Technology Investment: A Framework for Assessing and Improving Process Maturity, GAO-04-394G (Washington, D.C.: March 2004). 42 Objective 1: Legislative Conditions Condition 4 Second, as we reported in April 2007, DHS's investment management process does not fulfill all of OMB's capital planning and investment control requirements. In particular, we reported that: DHS's process (policies and procedures) for project-level management does not include all key elements, such as specific criteria or steps for prioritizing and selecting new investments. DHS has not fully implemented the practices needed to control investments—at the project level or at the portfolio level, including regular project-level reviews by the DHS Investment Review Board. DHS's process does not identify a methodology with explicit decision-making criteria to determine an investment's alignment with the DHS enterprise architecture. 43 Objective 1: Legislative Conditions Condition 4 In its comments on a draft of this report, DHS disagreed that its investment management process is not sufficiently mature, stating that on November 7, 2008, it issued an interim operational policy for investment control that addresses the limitations that we reported in April 2007. However, because DHS's comments only provided the memo that issued the interim policy, and not the policy itself, we have yet to review it to determine whether it addresses the above limitations. Also, the memo describes the interim policy as a "resulting draft" produced through an "informal staffing process" and states that changes will be made to "the policy prior to completing this process." Moreover, implementation of the policy, including training on its implementation, still needs to occur. Therefore, we continue to view DHS's investment management process as not sufficiently mature. 44 Objective 1: Legislative Conditions Condition 5 Condition 5. The plan partially satisfies the condition that it include a certification by the DHS CIO that an independent verification and validation (IV&V) agent is currently under contract. As we have previously reported, IV&V is a recognized best practice for large and complex system development and acquisition programs, like US-VISIT, as it provides management with objective insight into the program's processes and associated work products. On February 25, 2008, the former DHS Acting CIO conditionally certified that the program has an IV&V agent under contract. However, this certification was qualified to recognize that the contract only provided for IV&V services relative to testing system applications (i.e., it did not extend to other key program activities). Accordingly, the certification was made conditional on the program office providing an update on its efforts to award a contract for program-level IV&V by April 15, 2008. According to program officials, they are in the process of evaluating a program-wide IV&V contract proposal and plan to award a contract in September 2008. GAO, Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed, GAO-04-586 (Washington, D.C.: May 11, 2004). 45 Objective 1: Legislative Conditions Condition 6 Condition 6.
The plan partially satisfies the condition that it include a certification by the DHS CIO that the program's system architecture is sufficiently aligned with the department's enterprise architecture (EA), including a description of all aspects of the architectures that were and were not assessed in making the alignment determination, the date of the alignment determination, and any known areas of misalignment, along with the associated risks and corrective actions to address any such areas. According to federal guidelines and best practices, investment compliance with an EA is essential for ensuring that new and existing systems are defined, designed, and implemented in a way that promotes integration and interoperability and minimizes overlap and redundancy, thus optimizing enterprisewide efficiency and effectiveness. A compliance determination is not a one-time event that occurs when an investment begins, but rather occurs throughout an investment's life cycle as changes to both the EA and the investment's architecture are made. Within DHS, the EAB, supported by the Enterprise Architecture Center of Excellence, is responsible for ensuring that system investments demonstrate adequate technical and strategic compliance with the department's EA. Chief Information Officer Council, A Practical Guide to Federal Enterprise Architecture, Version 1.0, February 2001. GAO, Information Technology: A Framework for Assessing and Improving Enterprise Architecture Management (version 1.1), GAO-03-584G (Washington, D.C.: April 2003). 46 Objective 1: Legislative Conditions Condition 6 In early 2008, the DHS Acting CIO certified that the US-VISIT system architecture was aligned with the DHS EA based on an assessment of the program's alignment to the 2007 version of DHS's EA, which was conducted by the EAB in support of the program's MDP2 review. Consistent with the legislative condition, the fiscal year 2008 expenditure plan includes the former Acting CIO's certification, the date of the board's conditional approval of architectural alignment for MDP2 (September 27, 2007), and the date of the certification (February 25, 2008). It also includes areas of misalignment and corrective actions to address the identified areas. Specifically, it identifies such areas of misalignment as US-VISIT requirements and products to support the 10-print solution not having been defined and included in the 2007 EA technical reference model, and US-VISIT data standards not having been vetted with the DHS Enterprise Data Management Office for compliance. It states that corrective actions to address these areas were completed in September 2007, and that no outstanding MDP2 conditions remain. However, the certification does not fully satisfy the legislative condition for three reasons. 47 Objective 1: Legislative Conditions Condition 6 First, the basis for the certification is an assessment against the 2007 EA, which is a version that we recently reported to be missing important US-VISIT architectural content. Further, while DHS recently issued a 2008 version of its EA, it does not address these content shortfalls. The following are examples of the missing architecture content: US-VISIT's representation in this version's business model—which associates the department's business functions with the organizations that support and/or implement them—does not align US-VISIT with certain business functions (e.g., verify identity and establish identity) that the program office has identified as a critical part of its mission.
US-VISIT business rules and requirements are not included in this version's business model. Business rules are important because they explicitly translate business policies and procedures into specific, unambiguous rules that govern what can and cannot be done. As such, they facilitate the consistent implementation of policies and procedures. US-VISIT's baseline and target performance goals (e.g., for transaction volume) are not reflected in this version. GAO, Homeland Security: Strategic Solution for US-VISIT Program Needs to Be Better Defined, Justified, and Coordinated, GAO-08-361 (Washington, D.C.: Feb. 29, 2008). 48 Objective 1: Legislative Conditions Condition 6 Some US-VISIT system information is also inaccurately represented in the 2007 EA. For example, it erroneously identifies two US-VISIT component systems as being owned by two other DHS entities. Not all US-VISIT system interfaces are included in the 2007 EA's system reference model. For example, it does not identify key interfaces among IDENT, the Advance Passenger Information System (APIS), the Arrival and Departure Information System (ADIS), and the Treasury Enforcement Communications System. Additionally, it does not identify the interface between IDENT and the Global Enrollment System, even though US-VISIT officials confirmed that the interface exists and is operating. Second, the department lacks a defined methodology for determining an investment's compliance with its EA, including explicit steps and criteria. According to federal guidance, such a methodology is important because the benefits of using an EA cannot be fully realized unless individual investments are defined, designed, and developed in a way that avoids duplication and promotes interoperability. However, we reported in April 2007 that DHS does not have such a methodology. Without this methodology and verifiable documentation demonstrating its use in making compliance determinations, the basis for concluding that a program sufficiently complies with any version of the EA will be limited. GAO-07-424. 49 Objective 1: Legislative Conditions Condition 6 Third, the certification attachment includes a description of what was assessed to provide the basis for the compliance certification. For example, the attachment states that the board "evaluated the program's ability to support the Department's line of business and strategic goals; their alignment to a DHS Office of the CIO portfolio; the data, data objects, and data entity that encompass the investment; the technology leveraged to deliver capabilities and functions by the program; and compliance with information security, Section 508, and screening coordination." However, the descriptions do not link directly to key 2007 EA artifacts. For example, it aligns US-VISIT's data entities (e.g., Watch List and Warrants) to the data object "Record". The 2007 EA, however, does not define that data object. Moreover, those aspects of the architectures that were not assessed are not identified, such as the business rules and enterprise security architecture. 50 Objective 1: Legislative Conditions Condition 7 Condition 7. The plan partially satisfies the condition that it include a certification by the DHS CPO that the plans for the program comply with federal acquisition rules, requirements, guidelines, and practices, and a description of the actions being taken to address any areas of noncompliance, the risks associated with them, along with any plans for addressing these risks, and the status of their implementation.
As we have previously reported, federal IT acquisition requirements, guidelines, and management practices provide an acquisition management framework that is based on the use of rigorous and disciplined processes for planning, managing, and controlling the acquisition of IT resources. If implemented effectively, these processes can greatly increase the chances of acquiring software-intensive systems that provide promised capabilities on time and within budget. GAO-07-1065. 51 Objective 1: Legislative Conditions Condition 7 The DHS CPO certified that the plans for the program comply with federal acquisition rules, requirements, guidelines, and practices, citing in support the fiscal year 2007 expenditure plan. In addition, the CPO stated that DHS's Office of Procurement Operations had conducted self-assessments of US-VISIT-related contracts in fiscal years 2006 and 2007, and that these assessments had not identified any areas of noncompliance that required risk mitigation. However, the cited support is not sufficient to fully satisfy the legislative condition because the condition applies to the fiscal year 2008 expenditure plan, while the support that is cited pertains to the fiscal year 2007 expenditure plan and assessments that were completed in fiscal years 2006 and 2007. 52 Objective 1: Legislative Conditions Condition 8 Condition 8. The plan partially satisfies the condition that it include (1) a certification by the DHS CIO that the program has a risk management process that regularly identifies, evaluates, mitigates, and monitors risks throughout the system life cycle and communicates high-risk conditions to department investment decision makers, as well as (2) a listing of all the program's high risks and the status of efforts to address them. As we have previously reported, proactively managing program risks is a key acquisition management control, and if defined and implemented properly, it can increase the chances of programs delivering promised capabilities and benefits on time and within budget. On February 25, 2008, the former DHS Acting CIO certified that US-VISIT had a sufficient risk management process in place, adding that this process satisfied all process-related aspects of the legislative condition. In doing so, the then Acting CIO relied on an assessment of a range of US-VISIT risk management documents, including a policy, plan, periodic listings of high risks and related status reports, and communications with department decision makers. GAO, DOD Business Systems Modernization: Key Marine Corps System Acquisition Needs to Be Better Justified, Defined, and Managed, GAO-08-22 (Washington, D.C.: July 28, 2008). 53 Objective 1: Legislative Conditions Condition 8 However, the certification does not fully satisfy the legislative condition. Our analysis of the same risk management documents that the certification is based on revealed key weaknesses: The US-VISIT risk management plan is not being effectively implemented, which is also a weakness that we reported in February 2006. For example, of the 33 high risks identified as being in or past the handling phase of the risk management process in the February 6, 2008 risk inventory, 8 (about 24 percent) did not have a mitigation plan, and 19 (about 58 percent) did not have a contingency plan. Moreover, considerable time has passed without such plans being developed, in some cases more than 3 years. According to the risk management plan, mitigation and contingency plans should be developed for all high and medium risks once they have reached the handling phase of the risk management process. (This weakness is discussed in greater detail later in this briefing.) GAO, Homeland Security: Recommendations to Improve Management of Key Border Security Program Need to Be Implemented, GAO-06-296 (Washington, D.C.: Feb. 14, 2006). The US-VISIT Risk Management Plan separates the risk management process into five steps. The fourth step—risk handling—is the process of selecting and implementing responses to identified and prioritized risks.
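As an illustration of the kind of check this analysis involves, the following is a minimal sketch in Python, assuming a simple, hypothetical record layout for a program risk inventory; the field names, status values, and sample data are illustrative only and are not drawn from US-VISIT's actual risk database.

```python
# Illustrative sketch: flag high risks in or past the handling step that lack
# the mitigation or contingency plans called for by a risk management plan.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Risk:
    risk_id: str
    priority: str                       # "high", "medium", or "low"
    step: str                           # e.g., "identify", "analyze", "handle", "monitor", "realized"
    mitigation_plan: Optional[str] = None
    contingency_plan: Optional[str] = None

def plan_gaps(inventory: List[Risk]) -> Tuple[int, int, int]:
    """Count high risks in or past the handling step that lack mitigation or contingency plans."""
    in_scope = [r for r in inventory
                if r.priority == "high" and r.step in ("handle", "monitor", "realized")]
    missing_mitigation = sum(1 for r in in_scope if not r.mitigation_plan)
    missing_contingency = sum(1 for r in in_scope if not r.contingency_plan)
    return len(in_scope), missing_mitigation, missing_contingency

# Tiny notional inventory, for demonstration only.
sample = [
    Risk("R-01", "high", "handle", mitigation_plan="draft v1"),
    Risk("R-02", "high", "monitor"),
    Risk("R-03", "high", "realized", mitigation_plan="v2", contingency_plan="v1"),
]
total, no_mit, no_con = plan_gaps(sample)
print(f"{no_mit} of {total} high risks lack a mitigation plan; {no_con} of {total} lack a contingency plan")
# With the counts cited above (33 risks in scope, 8 lacking mitigation plans and 19
# lacking contingency plans), the shares work out to about 24 and 58 percent.
```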
54 Objective 1: Legislative Conditions Condition 8 The US-VISIT process for managing risk does not contain thresholds for elevating risks beyond the program office. Moreover, program officials told us that an update to this process that is currently in draft does not include such thresholds. Without thresholds, it is unlikely that senior DHS officials will become aware of those risks requiring their attention. In this regard, we reported in February 2006 that the thresholds then in place for elevating risks to department executives were not being applied. In August 2007, we reported that these thresholds had been eliminated and that no risks had been elevated to department executives since December 2005. During the following 32 months, only one risk was elevated beyond the program office. GAO-06-296. GAO-07-1065. 55 Objective 1: Legislative Conditions Condition 9 Condition 9. The plan does not satisfy the condition that it include a certification by the DHS Chief Human Capital Officer that the human capital needs of the program are being strategically and proactively managed, and that current human capital capabilities are sufficient to execute the plans discussed in the report. As we have previously reported, strategic management of human capital is both a best practice and a provision in federal guidance. Among other things, it involves proactively identifying an entity's future workforce needs, its existing workforce capabilities, and the gap between the two, and charting a course of action to define how this gap will be continuously addressed. By doing so, agencies and programs can better ensure that they have the requisite human capital capacity to execute agency and program plans. On March 6, 2008, the DHS Chief Human Capital Officer certified that the US-VISIT human capital strategic plan provides specific initiatives to address the hiring, development, and retention of program employees, and that a strategy exists to develop indicators to measure the progress and results of these initiatives. However, this certification does not satisfy the legislative condition for two reasons. GAO-07-1065. 56 Objective 1: Legislative Conditions Condition 9 First, the certification does not address the strategic plan's implementation, which is important because just having a human capital strategic plan does not constitute strategic and proactive management of the program's human capital. Second, the certification does not address whether the current human capital capabilities are sufficient to execute the expenditure plan. For example, it does not recognize that US-VISIT is understaffed. We reported in August 2007 that the program office had 21 vacancies and had taken the interim step of temporarily assigning other staff to cover the vacant positions, and that it planned to fill all the positions through aggressive recruitment. As of July 2008, the program office reported having 23 vacancies, including vacancies in leadership positions, such as the program's deputy director. Since then, the program office reports that it has filled nine of these vacancies. GAO-07-1065. 57 Objective 1: Legislative Conditions Condition 10 Condition 10.
The plan does not satisfy the condition that it include a complete schedule for the full implementation of a biometric exit program or a certification that such a program is not possible within 5 years. As we stated in our June 2007 testimony, a complete schedule for the full deployment of an exit capability would specify, at a minimum, what work will be done, by what entities, and at what cost to define, acquire, deliver, deploy, and operate expected system capabilities. A complete schedule is essential to ensuring that the solution is developed and implemented effectively and efficiently. The fiscal year 2008 plan does not contain either a complete schedule for fully implementing biometric exit capabilities at air, sea, and land POEs, or a statement that this cannot be completed within a 5-year time frame. Rather, the plan contains a very high-level schedule that only identifies five broadly-defined tasks, and a date by which each is to be completed, as shown in the table on the following slide. GAO, Homeland Security: Prospects for Biometric US-VISIT Exit Capability Remain Unclear, GAO-07-1044T (Washington, D.C.: June 28, 2007). 59 Objective 1: Legislative Conditions Condition 11 Condition 11. The plan partially satisfies the condition that it include a detailed accounting of operations and maintenance, contractor services, and program management costs associated with the program. As we have previously reported, the purpose of the expenditure plan is to provide Congress with sufficient information to exercise effective oversight of US-VISIT and to hold DHS accountable for results. To accomplish this, the act sought specific information relative to planned US-VISIT spending for operations and maintenance, contractor services, and program management. As discussed in the scope and methodology section of this briefing (attachment 1), we sought clarification from staff with the House and Senate Appropriations Committees, Subcommittees on Homeland Security, on this condition. As a result, the wording of this condition has been modified slightly from that in the act. 60 Objective 1: Legislative Conditions Condition 11 The fiscal year 2008 plan provides a decomposition of program operations and maintenance costs according to functional areas of activity, such as operations and maintenance of system applications, data center operations, network/data communications, and IT services. While this decomposition does satisfy the condition, it nevertheless could be more informative if the costs were associated with specific capabilities, systems, and services, such as the cost to operate and maintain ADIS, IDENT, and iDSM. The fiscal year 2008 plan does not separately identify the program's costs for contractor services. According to program officials, such services are embedded in other cost categories, such as Program Staffing (which is a combination of government and contractor staff), Prime Integrator, and Project Integration and Analysis. The one exception is for the Provide Identity Management and Screening Services broad core capability area, which identifies $15.8 million in contractor services. 61 Objective 1: Legislative Conditions Condition 11 The fiscal year 2008 plan states that program management costs will total $115.2 million, and allocates them to items such as program staffing ($46.2 million), planning and logistics ($14.3 million), prime integrator ($33.5 million), and working capital and management reserve ($21.2 million).
It also describes a number of program management related initiatives, such as maturing program monitoring and control processes, developing strategic plans and related policies, conducting public information dissemination and outreach, and strengthening human capital management and stakeholder training. However, it does not allocate the $115.2 million to these initiatives. For example, the plan does not describe what portion of the $115.2 million will be used to develop criteria for estimating life cycle costs, which is one effort within the maturing program processes initiative, or to properly align program management staffing to tasks and rewrite position descriptions, which are efforts within strengthening human capital management. In addition, the $115.2 million does not include $11.6 million in contractor program management support provided to specific projects, such as Air and Sea Exit. As a result, the total cost allocated to program management in fiscal year 2008 is $126.8 million, which is similar to the program management costs we reported in the fiscal year 2006 and 2007 expenditure plans. As we previously reported, these levels of program management costs represented a sizeable portion of the US-VISIT planned spending, but were not adequately justified. GAO, Homeland Security: Planned Expenditures for U.S. Visitor and Immigrant Status Program Need to be Adequately Defined and Justified, GAO-07-278 (Washington, D.C.: Feb. 14, 2007). 63 Objective 1: Legislative Conditions Condition 12 Condition 12. We have reviewed the plan, thus satisfying the condition. Our review was completed on September 15, 2008. Objective 2: Observations 1: Reliability of Air and Sea Exit Cost Estimates Not Clear In developing its Air and Sea Exit NPRM, DHS is required to prepare a written assessment of the costs, benefits, and other effects of its proposal and a reasonable number of alternatives, and to adopt the least costly, most cost-effective, or least burdensome among them. To accomplish this, it is important that DHS have reliable cost estimates for its proposed and alternative solutions. As noted earlier in this briefing, the NPRM and regulatory impact analysis cite the estimated costs of each of the five alternatives that were analyzed. For example, the impact analysis states that the estimated cost of the proposed solution is $3.6 billion. Moreover, this analysis states that each of the cost estimates is a "rough order of magnitude" estimate, meaning that they are by definition rough and imprecise, to the point of being potentially understated by as much as 100 percent and overstated by as much as 50 percent. Restated, this means that the estimated cost of the proposed solution could be anywhere from $1.8 billion to $7.2 billion. According to DHS's analysis, these broad cost risk ranges were used to reflect the degree to which Air and Sea Exit has been defined, including the assumptions that had to be made about airline solution configurations in the absence of airline data. According to GAO's Cost Estimating Guide, rough order of magnitude estimates are used when few details are available about the alternatives, and they should not be considered budget-quality cost estimates. Accordingly, they should not be viewed as sufficiently credible, accurate, or comprehensive to be considered reliable for making informed choices among competing investment options.
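To make the arithmetic behind that range explicit, the following is a minimal sketch, assuming the 50-percent-below/100-percent-above band cited in the regulatory impact analysis; the function name and the sample value are illustrative and are not part of DHS's analysis.

```python
# Cost band implied by a rough order of magnitude estimate whose actual cost
# could fall 50 percent below to 100 percent above the point estimate.
def rough_order_of_magnitude_band(point_estimate: float,
                                  below: float = 0.50,
                                  above: float = 1.00) -> tuple:
    return point_estimate * (1 - below), point_estimate * (1 + above)

low, high = rough_order_of_magnitude_band(3.6e9)  # the $3.6 billion proposed solution
print(f"${low / 1e9:.1f} billion to ${high / 1e9:.1f} billion")  # $1.8 billion to $7.2 billion
```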
Available Documentation Shows Some Estimating Best Practices Were Met, While Others Were Not GAO's Cost Estimating Guide identifies four characteristics of reliable cost estimates and associates a number of estimating best practices with each characteristic. The four characteristics of reliable cost estimates are that they are well-documented, credible, comprehensive, and accurate. The cost estimates for the Air and Sea Exit alternatives satisfied a number of the best practices in GAO's Cost Estimating Guide. For example, the estimate's purpose and scope are clearly defined, the cost team included experienced cost analysts, and the cost estimate included a description of the cost estimation process, data sources, and methods. However, we have yet to receive documentation from DHS relative to other best practices cited in the guide. For example, the guide recognizes the importance of performing risk analyses that allow for risks to be examined across the work breakdown structure so that the uncertainties associated with individual work elements can be determined, and risk levels can be assigned to each. According to the regulatory impact analysis, a standard level 5 risk range (50 percent below to 100 percent above) was used with the cost estimates because a comprehensive risk analysis had not been done. Program officials told us, however, that a risk analysis was performed, but we have yet to receive it. Further, we have yet to receive evidence showing that all relevant costs were addressed, such as the cost of spare, refreshed, and updated equipment and technology. The regulatory impact analysis states that data on several variables were not available for inclusion in the analysis, including estimates for burden to carriers and travelers. Of the 56 airlines and airline associations that provided comments on the NPRM, 21 commented that DHS's cost estimate for its proposed solution was understated because it did not adequately reflect the burden to carriers. In particular, the International Air Transport Association commented that the proposed solution could cost the air carriers as much as $12.3 billion over 10 years. According to this association, its estimate was developed in collaboration with airlines, network service providers, and hardware manufacturers. The association attributed the understatement of DHS's estimate to its omission of relevant costs for data transmission, secure networks, and secure data warehouses. Specifically, it stated that transmission requirements for biometric data would be between 350 and 800 times greater than what the airlines currently use for the transmission of biographic and manifest text data (between 31 and 128 megabytes of information for each international flight versus about 100 kilobytes currently transferred); secure networks required for transmission of biometric data would need to be installed between the airports and the airlines' departure control systems because they currently do not exist (estimated to cost about $150 million over 10 years); and secure data warehouses for biometric data storage would need to be installed to store the data prior to transmission to DHS (estimated to cost about $1 billion to operate over 10 years). In addition, United Airlines commented that its start-up costs would be about $21.8 million.
It also commented that DHS's cost estimate does not include the cost of additional traveler burden, which the airline estimated to be about $30 per hour. According to United Airlines, passenger time is potentially the highest cost element, with as many as 50 million persons being affected by queuing, congested space, and flight delays. DHS's regulatory impact analysis acknowledges the omission of the cost of additional traveler burden and the impact on the cost to each carrier's business processes. Further, Air Canada Jazz, a regional airline, commented that because the requirement for airline personnel to collect biometric data is beyond the scope of duties outlined in current collective agreements, it would have to renegotiate its agreements to add these duties. Adequate security and privacy controls are needed to assure that personally identifiable information is secured against unauthorized access, use, disclosure, or retention. Such controls are especially needed for government agencies, where maintaining public trust is essential. In the case of US-VISIT, one of its stated goals is to protect the security and privacy of U.S. citizens and visitors. However, DHS's proposed solution would have more privacy and security risks than alternative solutions. According to the NPRM, having carriers collect the biometric information is less secure than alternatives where DHS collects the information, regardless of the information collection point. Moreover, it states that information that is in the sole custody of one entity (e.g., DHS) is less likely to be compromised than information passed from private carriers to DHS. Similarly, the NPRM states that the degree of confidence in compliance with privacy requirements is lower when DHS does not maintain full custody of personally identifiable information. According to the NPRM, these privacy and security risks will be addressed in two ways. First, DHS will require carriers to ensure that their systems and transmission methods for biometric data meet DHS technical, security, and privacy requirements to be established in guidance and issued in conjunction with the final rule. However, it is unclear how DHS will ensure that the guidance is effectively implemented. Second, when the data are received by DHS, the NPRM states that they will be protected in accordance with a robust privacy and security program. However, we recently reported that the systems supporting US-VISIT have significant information security weaknesses that place sensitive and personally identifiable information at increased risk of unauthorized and possibly undetected disclosure and modification, misuse, and destruction. GAO, Information Security: Homeland Security Needs to Immediately Address Significant Weaknesses in Systems Supporting the US-VISIT Program, GAO-07-870 (Washington, D.C.: July 13, 2007). As noted earlier, 91 entities, including the airline, trade, and travel industries, and federal, state, and foreign governments, commented on the Air and Sea Exit proposal. In addition to the comments discussed earlier relative to the reliability of the cost estimates and the security and privacy implications of a carrier-implemented solution, a number of other comments were provided that raise further concerns and questions about the proposed solution. Specifically, the entities provided the following comments: According to some carriers, DHS has yet to provide technical requirements for the carriers to meet in delivering their respective parts of the proposed solution.
In particular, the NPRM stated that carriers will be required to comply with the DHS Consolidated User's Guide. However, they stated that this guide does not define, for example, how biometric images are to be incorporated into the existing message format used for APIS transmissions. Similarly, the NPRM states that all biometric data transmissions would be bound by existing regulations, including the FBI's Criminal Justice Information Services Electronic Transmission Specifications. However, carriers stated that these specifications had not been made available. According to a number of carriers, the proposed solution is not consistent with recent air and vessel carrier passenger processing improvements. Requiring passenger-agent contact goes against recent simplifications to carriers' business models in which new technologies are being introduced to eliminate time-consuming passenger-agent interactions. For example, most airlines and cruise ships allow passengers to confirm arrival and check in online prior to entering the airport or sea terminal, or to check in and print a boarding pass at a kiosk. These carriers commented that the passenger-agent contact required under the NPRM is at odds with this evolution in business processes and will slow down the travel process, delay flights, and make air and sea ports more crowded. According to one carrier's estimates, the proposed solution will add 1 to 2 minutes of processing time per passenger, which will collectively add an estimated 3 to 5 hours per flight. While the regulatory impact analysis projected flight delays to be less lengthy, it nevertheless acknowledged that most travelers would be delayed by about 50 minutes. A number of entities said that such significant delays will cause foreign travelers to vacation elsewhere. Several entities also commented that the proposed solution is not fully integrated with other border screening programs involving air carriers. DHS has recently issued proposed or final rules for four DHS programs, and each of these requires or proposes requiring carriers to collect and transmit additional data in 2008 and 2009. As such, these organizations viewed the four as duplicative (require very similar data) and inefficient (use different transmission methods), and claimed that DHS's sequential introduction of these programs will require carriers to undertake separate and repeated system development and employee training efforts that will impact their operations. According to several carriers, DHS did not involve the stakeholders in this rulemaking process as it had in previous rulemaking efforts. Carriers stated that for US-VISIT entry and the Advance Passenger Information System-Quick Query, which is about to be deployed, they were involved in developing a solution, but for US-VISIT exit, they were not. These four programs are the Air/Sea Exit, Secure Flight, the Electronic Travel Authorization System, and the Advance Passenger Information System-Quick Query.

Proactively managing program risks is a key acquisition management control and, if defined and implemented properly, it can increase the chances of programs delivering promised capabilities and benefits on time and within budget. To its credit, the program office has defined a risk management plan and related process that is consistent with relevant guidance. However, its own risk database shows that not all risks have been proactively mitigated. As we have previously reported, not proactively mitigating risks increases the chances that risks become actual cost, schedule, and performance problems. GAO-06-296; OMB, Circular No.
A-11, Part 7 Supplement, Capital Programming Guide, 2006, http://www.whitehouse.gov/omb/circulars/a11/current_year/a_11_2006.pdf (accessed June 16, 2008); and Software Engineering Institute, CMMI for Acquisition, Version 1.2, CMU/SEI-2007-TR-017 (Pittsburgh, PA: November 2007). Within each of these steps, the plan defines a number of activities that are consistent with federal guidance and related best practices. For example: In the preparation phase, each project office is to develop a strategy for managing risk that includes, among other things, the scope of the project risks to be addressed and the risk management tools to be used. In the risk identification phase, risks are to be identified in as much detail as possible and a risk owner is to be designated. In the risk analysis phase, the estimated probability of occurrence and impact on the program or project of each risk is to be determined and used to assign a priority (high, medium, or low). In the risk handling phase, mitigation and contingency plans are to be prepared for all medium- and high-priority risks as early as possible. In the risk monitoring phase, the status of risk mitigation and contingency plans is to be tracked, and decisions are to be reached as to whether to close a risk or to designate it as a realized issue (i.e., actual problem).

[Graphics: days each risk has been open, by management step, as of February 6, 2008 (Handle: 6 risks; Monitor: 6 risks; Realized: 11 risks) and as of July 3, 2008 (Handle: 7 risks; Monitor: 6 risks; Realized: 11 risks).]

According to the GAO Cost Assessment Guide, rebaselining should occur very rarely, as infrequently as once in the life of a program or project and only when a schedule variance is significant enough to limit its utility as a predictor of future schedule performance. For task order 7, the largest task order, which provides for development and deployment of new capabilities (e.g., Unique Identity and Biometric Solutions Delivery), the program office has rebaselined its schedule twice in the last 2 years: first in October 2006, when the task order had a negative schedule variance of $958,216, and then in October 2007, when the negative schedule variance for Unique Identity and Biometric Solutions was $4.1 million. Since this last rebaselining, the program office reports a negative variance through May 2008 of $3.5 million. Without the rebaselinings, this would have amounted to a $7.2 million schedule variance. The graphic on the next slide shows the cumulative schedule variance with and without the rebaselining. GAO, Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs, Exposure Draft, GAO-07-1134SP (Washington, D.C.: July 2007). Task order 7 has an approximate value of $141 million.

[Graphic: Cumulative Schedule Variance, TO7 (Biometric Solutions + Unique ID).]

DHS has not adequately met the conditions associated with its legislatively mandated fiscal year 2008 US-VISIT expenditure plan. The plan does not fully satisfy any of the conditions that apply to DHS, either because it does not address key aspects of the condition or because what it does address is not adequately supported or is otherwise not reflective of known program weaknesses. Given that the legislative conditions are intended to promote the delivery of promised system capabilities and value, on time and within budget, and to provide Congress with an oversight and accountability tool, these expenditure plan limitations are significant.
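The rebaselining effect described above can be illustrated with a small calculation. The sketch below is only a simplified illustration of earned value schedule variance (earned value minus planned value); the monthly figures and the rebaselining month are hypothetical placeholders, not US-VISIT task order data.

```python
# Illustrative sketch: how rebaselining masks cumulative schedule variance.
# Monthly planned value (PV) and earned value (EV) figures are hypothetical,
# not actual US-VISIT task order 7 data.

monthly = [
    # (month, planned_value, earned_value) in dollars
    ("2006-07", 1_000_000, 900_000),
    ("2006-08", 1_000_000, 850_000),
    ("2006-09", 1_000_000, 950_000),
    ("2006-10", 1_000_000, 980_000),   # baseline reset after this month
    ("2006-11", 1_000_000, 940_000),
    ("2006-12", 1_000_000, 970_000),
]
rebaseline_after = {"2006-10"}  # months after which the schedule was rebaselined

cum_with_rebaseline = 0
cum_without_rebaseline = 0
for month, pv, ev in monthly:
    sv = ev - pv                      # monthly schedule variance
    cum_with_rebaseline += sv
    cum_without_rebaseline += sv
    print(f"{month}: cumulative SV with rebaselining = {cum_with_rebaseline:>10,.0f}  "
          f"without = {cum_without_rebaseline:>10,.0f}")
    if month in rebaseline_after:
        cum_with_rebaseline = 0       # rebaselining resets the reported variance
```

As the output shows, each reset zeroes the reported cumulative variance, which is why the variance reported after a rebaselining understates the shortfall measured against the original baseline.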
Beyond the expenditure plan, other program planning and execution limitations and weaknesses also confront DHS in its quest to deliver US-VISIT capabilities and value in a timely and cost-effective manner. Most notably, DHS has proposed a solution for a long-awaited exit capability, but it is not clear if the cost estimates used to justify it are sufficiently reliable to do so. DHS has itself reported that the proposed solution provides less security and privacy than other alternatives analyzed, and the proposed solution is being challenged by those responsible for implementing it. Further, DHS's ability to measure program performance and progress, and thus be positioned to address cost and schedule shortfalls in a timely manner, is hampered by weaknesses in the prime contractor's implementation of EVM. Each of these program planning and execution limitations and weaknesses introduces risk to the program. In addition, DHS is not effectively managing the program's risks, as evidenced by the program office's risk database showing that known risks are being allowed to go years without risk mitigation and contingency plans. Overall, while DHS has taken steps to implement a significant percentage of our prior recommendations aimed at improving management of US-VISIT, additional management improvements are needed to effectively define, justify, and deliver a system solution that meets program goals, reflects stakeholder input, minimizes exposure to risk, and provides Congress with the means by which to oversee program execution. Until these steps are taken, US-VISIT program performance, transparency, and accountability will suffer. To assist DHS in planning and executing US-VISIT, we recommend that the Secretary of Homeland Security direct the department's Investment Review Board to immediately hold a review of the US-VISIT program that, at a minimum, addresses the reasons for the fiscal year 2008 expenditure plan not fully addressing each of the legislative conditions and corrective action to ensure that this does not occur for future expenditure plans; the adequacy of the basis for any future Air and Sea Exit solution, including the reliability of cost estimates, the implications of privacy and security issues, and key concerns raised in comments to the proposed rule; the weaknesses in the program's implementation of risk management; and the weaknesses in the prime contractor's implementation of earned value management, including the limitations in the quality of the schedule baselines and the schedule variance measurements. We further recommend that the Secretary of Homeland Security report the results of this Investment Review Board review to Congress.

Agency Comments and Our Evaluation

We provided a draft of this briefing to DHS officials, including the Director of US-VISIT. In their oral comments on the draft, these officials did not state whether they agreed or disagreed with our findings, conclusions, or recommendations. They did, however, provide a range of technical comments, which we have incorporated in the briefing, as appropriate. They also sought clarification on our scope and methodology, which we have also incorporated in the briefing. As agreed, our scope of work focused on the plan delivered to the House and Senate Appropriations Committees.
For condition 4, we reviewed the DHS certification and supporting documentation for US-VISIT's capital planning and investment controls, including US-VISIT's most recent OMB submission and documents related to the milestone decision point 1 and 2 approvals, to determine whether a sufficient basis existed for the certification; For condition 5, we reviewed the DHS certification for the independent verification and validation agent and analyzed supporting documentation, such as DHS's assessment of US-VISIT's independent verification and validation efforts, to determine whether a sufficient basis existed for the certification; For condition 6, we reviewed the DHS certification that the US-VISIT architecture is sufficiently aligned with the DHS EA, and assessed supporting documentation, including US-VISIT program documents against the DHS EA 2007, and criteria in DHS's Investment Review Process and DHS's EA Governance Process Guide, to determine whether a sufficient basis existed for the certification; For condition 7, we reviewed the DHS certification that the plans for the US-VISIT program comply with federal acquisition rules, guidelines, and practices, and analyzed supporting documentation, such as DHS's assessment of US-VISIT's contracts, to determine whether there was a sufficient basis for the certification; For condition 8, we reviewed the DHS certification that US-VISIT has a risk management process that identifies, evaluates, mitigates, and monitors risks throughout the life cycle, and communicates high risks to the appropriate managers at the US-VISIT program and DHS levels. We also analyzed the most current US-VISIT risk management plan, risk lists, and risk meeting minutes, to determine whether there was a sufficient basis for the certification; and For condition 9, we reviewed the DHS certification that the human capital needs of the US-VISIT program were being strategically and proactively managed, and analyzed supporting documentation, such as US-VISIT's Human Capital Strategic Plan, to determine whether there was a sufficient basis for the certification. We did not attempt to validate the comments. For observation 6, we used the Unique ID and Biometric Solutions Delivery subtasks of task order 7. These subtasks covered 98 percent of the total value of task order 7, and the remaining 2 percent were related to subtasks issued in fiscal year 2008. We also assessed whether the program office had defined and implemented a risk management process that addresses the identification, analysis, evaluation, and monitoring of risks by reviewing the risk management policy, risk management plan, active and high risk lists, risk meeting minutes, and a risk elevation memorandum. Additionally, in February 2007, we reported that the system that US-VISIT uses to manage its finances (U.S. Immigration and Customs Enforcement's Federal Financial Management System) has reliability issues. In light of these issues, the US-VISIT Budget Office tracks program obligations and expenditures separately using a spreadsheet and comparing this spreadsheet to the information in the Federal Financial Management System. Based on a review of this spreadsheet, there is reasonable assurance that the US-VISIT budget numbers being reported by the Federal Financial Management System are accurate. For DHS-provided data that our reporting commitments did not permit us to substantiate, we have made appropriate attribution indicating the data's source.

Homeland Security: Strategic Solution for US-VISIT Program Needs to Be Better Defined, Justified, and Coordinated. GAO-08-361. Washington, D.C.: February 29, 2008.
Homeland Security: U.S. Visitor and Immigrant Status Program's Long-standing Lack of Strategic Direction and Management Controls Needs to be Addressed. GAO-07-1065. Washington, D.C.: August 31, 2007. Homeland Security: DHS Enterprise Architecture Continues to Evolve But Improvements Needed. GAO-07-564. Washington, D.C.: May 9, 2007. Homeland Security: US-VISIT Program Faces Operational, Technological, and Management Challenges. GAO-07-632T. Washington, D.C.: March 20, 2007. Homeland Security: US-VISIT Has Not Fully Met Expectations and Longstanding Program Management Challenges Need to Be Addressed. GAO-07-499T. Washington, D.C.: February 16, 2007. Homeland Security: Planned Expenditures for U.S. Visitor and Immigrant Status Program Need to Be Adequately Defined and Justified. GAO-07-278. Washington, D.C.: February 14, 2007. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-378T. Washington, D.C.: January 31, 2007. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-248. Washington, D.C.: December 6, 2006. Homeland Security: Contract Management and Oversight for Visitor and Immigrant Status Program Need to Be Strengthened. GAO-06-404. Washington, D.C.: June 9, 2006. Homeland Security: Progress Continues, but Challenges Remain on Department's Management of Information Technology. GAO-06-598T. Washington, D.C.: March 29, 2006. Homeland Security: Recommendations to Improve Management of Key Border Security Program Need to Be Implemented. GAO-06-296. Washington, D.C.: February 14, 2006. Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed. GAO-06-318T. Washington, D.C.: January 25, 2006. Information Security: Department of Homeland Security Needs to Fully Implement Its Security Program. GAO-05-700. Washington, D.C.: June 17, 2005. Information Technology: Customs Automated Commercial Environment Program Progressing, but Need for Management Improvements Continues. GAO-05-267. Washington, D.C.: March 14, 2005. Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: February 23, 2005. Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. Washington, D.C.: September 9, 2004. Border Security: Joint, Coordinated Actions by State and DHS Needed to Guide Biometric Visas and Related Programs. GAO-04-1080T. Washington, D.C.: September 9, 2004. Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004. Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-04-569T. Washington, D.C.: March 18, 2004. Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003. Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.

Increment 1 processes: Increment 1 includes the following five processes at air and sea ports of entry (POE): pre-entry, entry, status management, exit, and analysis, which are depicted in the graphic below. US-VISIT is currently transitioning from scanning only the right and left index fingers to scanning all 10 fingers. 8 U.S.C. § 1221.
When the foreign national arrives at a primary POE inspection booth, the inspector, using a document reader, scans the machine-readable travel documents. APIS returns any existing records on the foreign national to the CBP primary inspection workstation screen, including manifest data matches and biographic lookout hits. When a match is found in the manifest data, the foreign national's name is highlighted and outlined on the manifest data portion of the screen. Biographic information, such as name and date of birth, is displayed on the bottom of the computer screen, as well as the photograph from State's Consular Consolidated Database. The inspector at the booth scans the foreign national's fingerprints and takes a digital photograph. This information is forwarded to the IDENT database, where it is checked against stored fingerprints in the IDENT lookout database. The new 10-print process will also integrate this information with manifest data so that it is all represented on one screen. While the system is checking the fingerprints, the inspector questions the foreign national about the purpose of his or her travel and length of stay. The inspector adds the class of admission and duration of stay information into the Treasury Enforcement Communications System (TECS), and stamps the "admit until" date on the Form I-94. If the foreign national is ultimately determined to be inadmissible, the person is detained, lookouts are posted in the databases, and appropriate actions are taken. Within 2 hours after a flight lands and all passengers have been processed, TECS is to send the Arrival and Departure Information System (ADIS) the records showing the class of admission and the "admit until" dates that were modified by the inspector. The status management process manages the foreign national's temporary presence in the United States, including the adjudication of benefits applications and investigations into possible violations of immigration regulations. Commercial air and sea carriers transmit departure manifests electronically for each departing passenger. These manifests are transmitted through APIS and shared with ADIS. ADIS matches entry and exit manifest data to ensure that each record showing a foreign national entering the United States is matched with a record showing the foreign national exiting the United States. ADIS maintains a status indicator for each traveler and computes the number of overstay days a visitor remains beyond their original entry duration. ADIS also provides the ability to run queries on foreign nationals who have entry information but no corresponding exit information. ADIS receives status information from the Computer Linked Application Information Management System and the Student and Exchange Visitor Information System on foreign nationals. The exit process includes the carriers' electronic submission of departure manifest data to APIS. This biographic information is passed to ADIS, where it is matched against entry information. An ongoing analysis capability is to provide for the continuous screening against watch lists of individuals enrolled in US-VISIT for appropriate reporting and action. As more entry and exit information becomes available, it is to be used to analyze traffic volume and patterns as well as to perform risk assessments.
The analysis is to be used to support resource and staffing projections across the POEs, strategic planning for integrated border management analysis performed by the intelligence community, and determination of travel use levels and expedited traveler programs. Increments 2B and 3 deployed US-VISIT entry processing capabilities to land POEs. These two increments are similar to Increment 1 (air and sea POEs), with several noteworthy differences. No advance passenger information is available to the inspector before the traveler arrives for inspection. Travelers subject to US-VISIT are processed at secondary inspection, rather than at primary inspection. Inspectors' workstations use a single screen, which eliminates the need to switch between the TECS and IDENT screens. Form I-94 data are captured electronically. The form is populated by data obtained when the machine-readable zone of the travel document is swiped. If visa information about the traveler exists in the Datashare database, it is used to populate the form. Fields that cannot be populated electronically are manually entered. A copy of the completed form is printed and given to the traveler for use upon exit. No electronic exit information is captured. Datashare includes extracts from State's Consular Consolidated Database system and includes the visa photograph, biographical data, and the fingerprint identification number assigned when a nonimmigrant applies for a visa. US-VISIT Increments 1 through 3 include the interfacing and integration of existing systems and, with Increment 2C, the creation of a new system. The three main existing systems are as follows: Arrival and Departure Information System (ADIS) stores non-citizen traveler arrival and departure data received from air and sea carriers, arrival data captured by CBP officers at air and sea POEs, Form I-94 issuance data captured by CBP officers at Increment 2B land POEs, Form I-94 data captured at air and sea ports of entry, and status update information provided by the Student and Exchange Visitor Information System (SEVIS) and the Computer Linked Application Information Management System (CLAIMS 3) (described on the next slide). ADIS provides biographic identity record matching, query, and reporting functions. The passenger processing component of the Treasury Enforcement Communications System (TECS) includes two systems: Advance Passenger Information System (APIS) captures arrival and departure manifest information provided by air and sea carriers, and Interagency Border Inspection System (IBIS) maintains lookout data and interfaces with other agencies' databases. CBP officers use these data as part of the admission process. The results of the admission decision are recorded in TECS and ADIS. The Automated Biometric Identification System (IDENT) stores biometric data and includes Federal Bureau of Investigation information on all known and suspected terrorists, all active wanted persons and warrants, and previous criminal histories for visitors from high-risk countries; DHS Immigration and Customs Enforcement information on deported felons and sex offenders; and DHS information on previous criminal histories and previous IDENT enrollments. Information from the Federal Bureau of Investigation includes fingerprints from the Integrated Automated Fingerprint Identification System.
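The entry-exit matching role that ADIS plays, as described above, can be sketched in a few lines of code. The example below is a simplified illustration only, not ADIS's actual logic or data model; the traveler identifiers, dates, and field names are hypothetical. It pairs each entry record with a corresponding exit record and flags travelers with no exit record or with a departure later than their "admit until" date.

```python
from datetime import date

# Hypothetical entry and exit records keyed by a traveler identifier.
# A simplified illustration of ADIS-style matching, not actual ADIS logic.
entries = {
    "T1001": {"arrived": date(2008, 1, 10), "admit_until": date(2008, 4, 10)},
    "T1002": {"arrived": date(2008, 2, 1),  "admit_until": date(2008, 5, 1)},
    "T1003": {"arrived": date(2008, 3, 15), "admit_until": date(2008, 6, 15)},
}
exits = {
    "T1001": date(2008, 4, 1),   # departed before the admit-until date
    "T1002": date(2008, 6, 20),  # departed after the admit-until date (overstay)
    # T1003 has no matching departure manifest record
}

for traveler, entry in entries.items():
    departure = exits.get(traveler)
    if departure is None:
        print(f"{traveler}: no exit record on file (potential overstay)")
    else:
        overstay_days = (departure - entry["admit_until"]).days
        status = (f"overstayed {overstay_days} days" if overstay_days > 0
                  else "departed on time")
        print(f"{traveler}: {status}")
```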
US-VISIT also exchanges biographic information with other DHS systems, including SEVIS and CLAIMS 3: SEVIS is a system that contains information on foreign students, and CLAIMS 3 is a system that contains information on foreign nationals who request benefits, such as change of status or extension of stay. Some of the systems involved in US-VISIT, such as IDENT and ADIS, are managed by the program office, while some systems are managed by other organizational entities within DHS. For example: TECS is managed by CBP, SEVIS is managed by Immigration and Customs Enforcement, and CLAIMS 3 is under United States Citizenship and Immigration Services. Watch list data sources include DHS's Customs and Border Protection and Immigration and Customs Enforcement; the Federal Bureau of Investigation; legacy DHS systems; the U.S. Secret Service; the U.S. Coast Guard; the Internal Revenue Service; the Drug Enforcement Agency; the Bureau of Alcohol, Tobacco, & Firearms; the U.S. Marshals Service; the U.S. Office of Foreign Assets Control; the National Guard; the Treasury Inspector General; the U.S. Department of Agriculture; the Department of Defense Inspector General; the Royal Canadian Mounted Police; the U.S. State Department; Interpol; the Food and Drug Administration; the Financial Crimes Enforcement Network; the Bureau of Engraving and Printing; and the Department of Justice Office of Special Investigations.

1. Develop and approve complete test plans before testing begins. These plans, at a minimum, should (1) specify the test environment, including test equipment, software, materials, and necessary training; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. (GAO-04-586)

2. Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. (GAO-04-586)

Partially Implemented: The program office has developed and approved test plans for various system components, such as the US-VISIT/IDENT Product Integration and the Unified IDENT Release 2 Component/Assembly. Only some of these plans show that they (1) specified the test environment, including test equipment, software, materials, and necessary training; (2) described each test to be performed, including test controls, inputs, and expected outputs; (3) defined test procedures to be followed in conducting tests; and (4) provided traceability between test cases and the requirements to be verified by the testing. However, we were able to verify that these plans were approved prior to testing.

Implemented: The program office has developed a configuration control board that is responsible for, among other things, managing and overseeing system changes. The office has also developed a configuration management plan and begun implementing practices specified in the plan. For example, a project-level configuration management plan was developed for Unique Identity, and change control requests were submitted and approved by the board.

3. Develop a plan, including explicit tasks and milestones, for implementing all of our open recommendations, including those provided in this report. The plan should provide for periodic reporting to the Secretary and Under Secretary on progress in implementing this plan. The Secretary should report this progress, including reasons for delays, in all future US-VISIT expenditure plans. (GAO-04-586)

Partially Implemented: US-VISIT audit coordination and resolution is governed by formal audit guidance and coordinated through an Integrated Project Team. The team has developed a plan that includes tasks and milestones for implementing GAO recommendations.
The plan also provides for the periodic reporting to the Secretary and Under Secretary. Further, the status of efforts to address a number of GAO recommendations has been included in recent US-VISIT expenditure plans, although reasons for delays in implementing them have not.

4. Fully and explicitly disclose in all future expenditure plans how well DHS is progressing against the commitments that it made in prior expenditure plans. (GAO-05-202)

5. Reassess its plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program. (GAO-05-202)

Partially Implemented: As discussed earlier in this briefing, while the fiscal year 2008 expenditure plan provides some information on how well DHS is progressing against commitments made in the fiscal year 2007 expenditure plan, it does not fully and explicitly disclose how well it is progressing against all previous commitments, and it describes progress in areas not committed to in the prior year's plan.

Implemented: The program office has reassessed its plans for deploying an exit capability. As a result of that assessment, the program office discontinued the US-VISIT exit pilot in May 2007.

6. Develop and implement a process for gauging the capacity of the US-VISIT system. (GAO-05-202)

7. Follow effective practices for estimating the costs of future increments. (GAO-05-202)

8. Make understanding the relationships and dependencies between the US-VISIT and ACE programs a priority matter, and report periodically to the Under Secretary on progress in doing so. (GAO-05-202)

Implemented: The program has developed a capacity management handbook that provides guidance for managing system capacity and has incorporated the activities to be performed into its Universal Delivery Method. Further, the program office has begun implementing this guidance. For example, it has developed US-VISIT/IDENT business and service capacity baselines.

Partially Implemented: According to the program office, it has (1) established a Cost Process Action Team, (2) defined cost estimation and analysis practices and processes, (3) developed a process for developing both program life cycle cost estimates and Independent Government Cost Estimates, and (4) conducted a self-assessment of the program's cost estimating practices against guidelines from the Software Engineering Institute. However, the program office has yet to provide documentation demonstrating that it is implementing its defined cost estimation practices.

Implemented: The program office has been working with the DHS Screening and Coordination Office to, among other priorities, develop greater understanding between US-VISIT and other programs, including ACE. Further, because the program is no longer organizationally within the Office of the Under Secretary, reporting on progress to the Under Secretary is no longer warranted. Instead, the Screening and Coordination Office, which reports directly to the Secretary and Deputy Secretary, is aware of progress in this area.

9. Explore alternative means of obtaining an understanding of the full impact of US-VISIT at all land POEs, including its impact on workforce levels and facilities; these alternatives should include surveying the sites that were not part of the previous assessment. (GAO-06-296)

Implemented: The program office reassessed its plans for deploying an exit capability to land POEs, and as a result, discontinued the demonstration project in November 2006.

10. For each US-VISIT contract action that the program manages directly, establish and maintain plans for performing the contractor oversight process, as appropriate.
(GAO-06-404)

11. Develop and implement practices for overseeing contractor work managed by other agencies on the program office's behalf, including (1) clearly defining roles and responsibilities for both the program office and all agencies managing US-VISIT-related contracts; (2) having current, reliable, and timely information on the full scope of contract actions and activities; and (3) defining and implementing steps to verify that deliverables meet requirements. (GAO-06-404)

Implemented: For contract actions that the program manages directly, and where it is appropriate for the program office to oversee contractor activities, the program office has established and maintains an oversight plan. For example, the program office has developed individual oversight plans for 10-Print, Unique Identity, Interim Data Sharing Model, and Independent Test and Support Evaluation Services. Each individual oversight plan describes the roles, responsibilities, and authorities involved in conducting contract administration and oversight of the contract action.

Implemented: The program office has developed and implemented practices for overseeing contractor work managed by other agencies on the program office's behalf. Specifically, it has developed a contractor administration management plan that includes (1) clearly defining roles and responsibilities for both the program office and all agencies managing US-VISIT-related contracts; (2) having current, reliable, and timely information on the full scope of contract actions and activities; and (3) defining and implementing steps to verify that deliverables meet requirements.

12. Require, through agreements, that agencies managing contract actions on the program office's behalf implement effective contract management practices consistent with acquisition guidance for all US-VISIT contract actions, including, at a minimum, (1) establishing and maintaining plans for performing contract management activities; (2) assigning responsibility and authority for performing contract oversight; (3) training the people performing contract oversight; (4) documenting the contracts; (5) verifying that deliverables satisfy requirements; (6) monitoring contractor-related risks; and (7) monitoring contractor performance to ensure that the contractor is meeting schedule, effort, cost, and technical performance requirements. (GAO-06-404)

Implemented: The program office has amended the language used in its interagency agreements (IAA) to require agencies that manage contract actions on the program's behalf to implement certain practices designed to strengthen contract management and oversight. These requirements are specified in the May 2007 US-VISIT Contract Administration Management Plan and have been included in each of the IAAs. Specifically, each IAA specifies that the agent agency is to (1) establish and maintain plans for performing contract management activities; (2) designate a contracting officer and contracting officer's technical representative to manage all contractual actions; (3) train the people performing contract oversight; (4) document the contracts; (5) verify that deliverables satisfy requirements; (6) monitor contractor-related risks; and (7) monitor contractor performance to ensure that the contractor is meeting schedule, effort, cost, and technical performance requirements.

13. Require DHS and non-DHS agencies that manage contracts on behalf of US-VISIT to (1) clearly define and delineate the US-VISIT work from non-US-VISIT work as performed by contractors; (2) record, at the contract level, amounts being billed and expended on US-VISIT-related work so that these can be tracked and reported separately from amounts not for US-VISIT purposes; and (3) determine if they have received reimbursement from the program for payments not related to US-VISIT work by contractors, and, if so, refund to the program any amounts received in error.
(GAO-06-404)

Partially Implemented: The program office reports that it has begun efforts to establish the processes that are to (1) ensure that both DHS and non-DHS agencies that manage contracts on behalf of the program clearly define and delineate the US-VISIT work from non-US-VISIT work performed by contractors, (2) record, at the contract level, amounts being billed and expended on US-VISIT-related work so that these can be tracked and reported separately from amounts not for US-VISIT purposes, and (3) determine if they have received reimbursement from the program for payments not related to US-VISIT work by contractors, and, if so, refund to the program any amounts received in error; however, they have yet to demonstrate that these processes are in place and being used by all DHS and non-DHS agencies.

14. Ensure that payments to contractors are timely and in accordance with the Prompt Payment Act. (GAO-06-404)

15. Improve existing management controls for identifying and reporting computer processing and other operational problems as they arise at land POEs and ensure that these controls are consistently administered. (GAO-07-248)

Partially Implemented: The program office reports that it has begun efforts to establish the controls needed to ensure that payments to contractors are made timely and in accordance with the Prompt Payment Act.

Not Implemented: DHS has yet to implement improved management controls for identifying and reporting computer processing and other operational problems as they arise at land POEs or to implement methods for ensuring that these controls are consistently administered.

16. Develop performance measures for assessing the impact of US-VISIT operations specifically at land POEs. (GAO-07-248)

Not Implemented: DHS has yet to develop performance measures for assessing the impact of US-VISIT operations at land POEs.

17. As DHS finalizes the statutorily mandated report describing a comprehensive biometric entry and exit system for US-VISIT, that it include, among other things, information on the costs, benefits, and feasibility of deploying biometric and nonbiometric exit capabilities at land POEs. (GAO-07-248)

Not Implemented: DHS reports that it has recently begun to develop the statutorily mandated report, and department officials said that they expect to issue it in early 2009. DHS officials stated that they expect it to include information on costs, benefits, and feasibility of biometric and nonbiometric exit capabilities at land POEs.

18. As DHS finalizes the statutorily mandated report describing a comprehensive biometric entry and exit system for US-VISIT, that it include, among other things, a discussion of how DHS intends to move from a nonbiometric exit capability, such as the technology currently being tested, to a reliable biometric exit capability that meets statutory requirements. (GAO-07-248)

Not Implemented: DHS has recently begun to develop the statutorily mandated report, and department officials stated that it is to be issued in early 2009. DHS officials stated that they expect it to include a discussion of how it intends to move to a biometric exit capability at land ports of entry.

19. As DHS finalizes the statutorily mandated report describing a comprehensive biometric entry and exit system for US-VISIT, that it include, among other things, a description of how DHS expects to align emerging land border security initiatives with US-VISIT and what facilities or facility modifications would be needed to ensure that technology and processes work in harmony. (GAO-07-248)

Not Implemented: DHS has recently begun to develop the statutorily mandated report, and department officials stated that it is to be issued in early 2009. DHS officials stated that they expect it to show how US-VISIT is to align with emerging land border initiatives as well as what facility modifications would be needed to ensure that technology and processes work in harmony.
Not Implemented: Program officials stated that they periodically brief authorization and appropriations committees on a range of program risks, including those associated with not having fully satisfied all expenditure plan legislative conditions, reasons why they were not satisfied, and steps being taken to mitigate these risks. However, they did not provide any verifiable evidence that these matters were discussed, and staff with the House and Senate appropriations committees that focus on US-VISIT told us that they are not aware of such briefings in which these matters were discussed.

Implemented: The program office has limited planned expenditures in exit pilots and demonstration projects by reassessing its plans and discontinuing the exit pilot in May 2007 and the demonstration project in November 2006.

21. Limit planned expenditures for exit pilot and demonstration projects until such investments are economically justified and until each investment has a well-defined evaluation plan. The projects should be justified on the basis of costs, benefits, and risks, and the evaluation plans should define what is to be achieved and should include plans of action and milestones and measures for demonstrating achievement of pilot and project goals and desired outcomes. (GAO-07-278)

22. Work with the DHS Enterprise Architecture Board to identify and mitigate program risks associated with investing in new US-VISIT capabilities in the absence of a DHS-wide operational and technological context for the program. These risks should reflect the absence of fully defined relationships and dependencies with related border security and immigration enforcement programs. (GAO-07-278)

Not Implemented: The program office provided DHS Enterprise Architecture Board meeting minutes. However, none of the meeting minutes provided contained information on identifying and mitigating program risks associated with investing in new US-VISIT capabilities in the absence of a DHS-wide technological context for the program.

23. Limit planned expenditures for program management-related activities until such investments are economically justified and have well-defined plans detailing what is to be achieved, plans of action and milestones, and measures for demonstrating progress and achievement of desired outcomes. (GAO-07-278)

24. The Secretary of DHS report to the department's authorization and appropriations committees on its reasons for not fully addressing its expenditure plan legislative conditions and our prior recommendations. (GAO-07-1065)

Not Implemented: The program office has yet to provide either an economic justification or well-defined plans for its program management-related activities detailing what is to be achieved and including plans of action and milestones and measures for demonstrating progress and achievement of desired outcomes. Moreover, the amount of funding for program management in FY2008 remains at the level mentioned in the FY2006 expenditure plan, which was the basis for this recommendation.

Not Implemented: Program officials stated that they periodically brief authorization and appropriations committees on program-related issues, including reasons for not having fully satisfied all expenditure plan legislative conditions and GAO recommendations. However, they did not provide any verifiable evidence that these matters were discussed, and staff with the House and Senate appropriations committees that focus on US-VISIT told us that they are not aware of such briefings in which these matters were discussed.

25. Develop a plan for a comprehensive exit capability, which includes, at a minimum, a description of the capability to be deployed; the cost of developing, deploying, and operating the capability; identification of key stakeholders and their respective roles and responsibilities; key milestones; and measurable performance indicators. (GAO-08-361)

26. Develop analyses of costs, benefits, and risks for proposed exit solutions before large sums of money are committed on those solutions, and use the analyses in selecting the final solution.
(GAO-08-361)

27. Direct the appropriate DHS parties involved in defining, managing, and coordinating relationships across the department's border and immigration management programs to address the program collaboration shortcomings identified in this report, such as fully defining the relationships between US-VISIT and other immigration and border management programs and, in doing so, to employ the collaboration practices discussed in this report. (GAO-08-361)

Partially Implemented: DHS recently issued a notice of proposed rulemaking for implementing an exit capability at air and sea POEs. This notice provides a high-level description of a proposed Air and Sea Exit solution and an estimate of the cost to develop, deploy, and operate the solution. Further, it describes the roles and responsibilities of key stakeholders, such as air and sea carriers, and sets some performance indicators, such as when passenger biometrics are to be transmitted to DHS. However, as discussed in this briefing, this proposed solution raises a number of questions that need to be resolved.

Partially Implemented: As noted earlier in this briefing, DHS's Air and Sea Exit regulatory impact analysis analyzed the costs and benefits of the proposed solution and four alternatives, and DHS used this analysis in proposing its exit solution. However, the cost estimates that were used in this analysis were not sufficiently reliable to justify the proposed solution.

Partially Implemented: DHS has yet to direct all of the appropriate parties involved in defining, managing, and coordinating relationships across the department's border and immigration management programs to address the program collaboration shortcomings identified in this report and, in doing so, to employ the collaboration practices discussed in this report. Specifically, while US-VISIT has begun to coordinate with specific border and immigration management programs, such as the Secure Border Initiative and the Western Hemisphere Travel Initiative, it has not yet done so across all of the department's related programs.

In addition to the individual named above, Tonia Johnson (Assistant Director), Bradley Becker, Season Dietrich, Neil Doherty, Jennifer Echard, Elena Epps, Nancy Glover, Rebecca LaPaze, Anjalique Lawrence, Anh Le, Emily Longcore, Lee McCracken, Freda Paintsil, Karl Seifert, and Jeanne Sung made key contributions to this report.
Further, although the plan includes a certification by the DHS Chief Procurement Officer that the program has been reviewed and approved in accordance with the department's investment management process, and that this process fulfills all capital planning and investment control requirements and reviews established by the Office of Management and Budget, the certification is based on information that pertains to the fiscal year 2007 expenditure plan and fiscal year 2009 budget submission, rather than to the fiscal year 2008 expenditure plan. Moreover, even though the plan provides an accounting of operations and maintenance and program management costs, the plan does not separately identify the program's contractor services costs, as required by the act. With regard to the remaining three legislative conditions, the plan does not satisfy any of them. For example, the plan does not include a certification by the DHS Chief Human Capital Officer that the program's human capital needs are being strategically and proactively managed and that the program has sufficient human capital capacity to execute the expenditure plan. Further, the plan does not include a detailed schedule for implementing an exit capability or a certification that a biometric exit capability is not possible within 5 years. The twelfth legislative condition was satisfied by our review of the expenditure plan. Beyond the expenditure plan, GAO observed that other program planning and execution limitations and weaknesses also confront DHS in its quest to deliver US-VISIT capabilities and value in a timely and cost-effective manner. Concerning DHS's proposed biometric air and sea exit solution, for example, the reliability of the cost estimates used to justify the proposed solution is not clear, the proposed solution would provide less security and privacy than other alternatives, and public comments on the proposed solution raise additional concerns, including the impact the solution would have on the industry's efforts to improve passenger processing and travel. Moreover, the program's risk management database shows that key risks are not being managed. Finally, frequent rebaselining of one of the program's task orders has minimized the significance of schedule variances. Collectively, this means that additional management improvements are needed to effectively define, justify, and deliver a US-VISIT system solution that meets program goals, reflects stakeholder input, minimizes exposure to risk, and provides Congress with the means by which to oversee program execution. Until these steps are taken, US-VISIT program performance, transparency, and accountability will suffer. |
The cost of the census, in terms of cost for counting each housing unit, has been escalating over the last several censuses. The average cost for counting a housing unit increased from about $16 in 1970 to around $97 in 2010 constant dollars (see fig. 1). Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates are significant and have led to higher costs because the mail response rate directly dictates the number of housing units in the nonresponse follow-up (NRFU) universe. NRFU, where the Bureau attempts to contact households that did not mail back questionnaires, was the largest and most costly Bureau field operation in 2000 and 2010 and has had an impact on overall census costs. Over the past several censuses, the Bureau has attempted to address the competing goals of containing costs and improving the quality of census information, but costs continued to rise in part because external factors, such as a growing and increasingly diverse population, required the Bureau to devote more resources in order to ensure a complete count. The Bureau is assessing various measures of the quality of the 2010 Census. This effort, combined with a better understanding of the specific sources of cost growth, could help managers make cost control decisions. Within its financial management system, the Bureau classifies census costs into eight broad categories and hundreds of projects (see fig. 2). These broad categories are further subdivided into individual projects that may be discrete, such as the NRFU operation, which has its costs captured in a single project line, or several project lines may be combined, sometimes from multiple categories, to reflect the total cost of an operation, as is the case with the Local Update of Census Addresses (LUCA) operation. The 2010 Census costs were concentrated in few categories. Planning for the 2020 Census is divided into five phases: (1) options analysis; (2) research and testing; (3) operational development and systems testing; (4) supplemental research and testing; and (5) readiness testing, execution, and closeout. The Bureau has identified a range of design alternatives for the 2020 Census and will narrow this range over the census life cycle. During fiscal year 2012, the Bureau will enter the research and testing phase and intends to develop a preliminary design that when adjusted for inflation will cost less than the $97 per housing unit cost of the 2010 Census but will also maintain quality. During the research and testing phase, the Bureau plans to execute at least 35 research projects to explore how design areas could be modified to control costs or improve quality. For example, the Bureau will examine the feasibility of using administrative records, such as Internal Revenue Service tax records, to collect information from nonresponders and thus reduce the fieldwork. Other research areas include new response options, such as the Internet and social networking sites. The Bureau uses life cycle cost estimates as a starting point for annual budget formulation and revises the estimates based on appropriations and updated budget information. As noted in our Cost Estimating and Assessment Guide, a life cycle cost estimate can be thought of as a “cradle to grave” approach to managing a program throughout its duration. 
However, in our past work, we found that the Bureau’s 2010 Census life cycle cost estimate was not reliable because it lacked adequate documentation and was not comprehensive, accurate, or credible. The Bureau may continue to be challenged in developing reliable life cycle cost estimates for a program as large, costly, and complex as the census. As part of its planning for 2020, the Bureau has developed an early life cycle cost estimate based on existing information and plans to release a full range of life cycle cost estimates in the budget submission for fiscal year 2015. Of the Bureau’s eight broad budget categories, field data collection and its associated support systems accounted for $3.5 billion of the $4.6 billion life cycle cost increase, or 77 percent of the overall cost growth from 2000 to 2010 (see table 1). This represents a 64 percent growth in the field data collection category from its 2000 totals, which was the largest percentage increase of all budget categories. Field data collection costs include training, labor, and mileage for temporary workers, as well as the support systems needed to run operations, including rental space and office equipment for local census offices (LCO). We previously reported that the field data collection budget category was also the largest contributor to cost growth from the 1990 Census to the 2000 Census. The remaining seven budget categories accounted for less than 25 percent of overall cost growth. The automated data collection category experienced the second largest growth, accounting for 12 percent of overall cost growth from 2000 to 2010. Expenses in this category were $547 million more than in 2000, a 42 percent increase. This category includes data processing activities and related information technology (IT) system costs. Smaller categories experienced cost growth as well, including content, questionnaires, and products; census design, methodology, and evaluation; and census test and dress rehearsal. Other categories actually experienced cost decreases, including program development and management. According to the Bureau, an increased workload—a larger number of housing units to count—is one of the factors driving up census costs. This, however, does not fully explain (1) why the cost to count each housing unit grew at a faster pace than the workload (39 percent increase to count each housing unit compared to 12 percent increase in workload) or (2) why component costs, such as data capture systems, experienced cost increases (see fig. 3). To more fully understand what is driving up census costs aside from an increase in workload, it will be important for the Bureau to analyze cost growth below the category level to determine the specific reasons why cost per housing unit continues to grow at a faster pace than workload. Key questions in this regard include, for example, (1) to what extent did increased labor and gasoline costs contribute to overall increases in field data collection costs, (2) how did additional use of technology contribute to field data collection costs, (3) how did increased investments in non- field-related IT systems affect cost growth, and (4) to what extent did the weak economy in 2010 help the Bureau reduce costs for field operations. While some cost increases, such as rising gasoline prices, might have been outside of the Bureau’s direct control, better information on the sources of census cost growth could enable the Bureau to develop work- arounds and alternatives that could mitigate their impact. 
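One way to separate workload growth from per-unit cost growth, as discussed above, is to note that total cost growth is the product of the two growth factors. The sketch below uses the approximate growth rates cited in this section (a 12 percent workload increase and a 39 percent increase in cost per housing unit); the baseline housing-unit count and per-unit cost shown are placeholders for illustration, not Bureau budget data.

```python
# Decompose total census cost growth into workload growth and per-unit cost growth.
# Growth rates are the approximate figures cited in this section; the baseline
# housing-unit count and per-unit cost are hypothetical placeholders.

baseline_units = 100_000_000       # hypothetical 2000 housing-unit workload
baseline_cost_per_unit = 70.0      # hypothetical 2000 cost per housing unit (constant dollars)

workload_growth = 0.12             # ~12 percent more housing units in 2010
per_unit_cost_growth = 0.39        # ~39 percent higher cost per housing unit in 2010

baseline_total = baseline_units * baseline_cost_per_unit
new_total = (baseline_units * (1 + workload_growth)
             * baseline_cost_per_unit * (1 + per_unit_cost_growth))

total_growth = new_total / baseline_total - 1
print(f"Total cost growth: {total_growth:.1%}")          # (1.12 * 1.39) - 1, about 56%

print(f"Growth from workload alone:      {workload_growth:.0%}")
print(f"Growth from per-unit cost alone: {per_unit_cost_growth:.0%}")
```

Because the per-unit factor is several times larger than the workload factor, most of the total growth in this decomposition comes from the cost of counting each housing unit rather than from having more housing units to count.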
Best practices in GAO’s Cost Estimating and Assessment Guide illustrate how an agency can strengthen its ability to control costs by using available cost data to make comparisons over time and identify and quantify trends. However, the Bureau cannot identify specific sources of cost growth below broad budget categories from 2000 to 2010 because the Bureau changed the way it defines projects without creating a crosswalk that documents the changes over time. As a result, the Bureau cannot specifically determine where costs are growing. While it is reasonable for the Bureau to modify its budget structure to accommodate changes from one decennial to the next, a crosswalk would have enabled officials to compare costs for specific projects. For example, for the 2000 Census, 236 projects were identified in the budget. For the 2010 Census, the Bureau changed its budget structure to more precisely capture costs, and as a result, the number of projects listed in the budget increased by almost 400 percent to 1,175 projects. However, the Bureau created no documentation to facilitate comparison for most projects in the budget from 2000 to 2010. For example, costs for LUCA are combined into one project in 2000 data while in 2010 data, LUCA activities were identified in 11 separate projects (for example, LUCA processing and LUCA testing). Without documentation explaining what costs were included in LUCA for the 2000 Census, it is impossible to accurately compare costs for LUCA between the two decennials and determine where any cost growth might have occurred. Further, the Bureau cannot accurately calculate the growth in field infrastructure costs, if any, from 2000 through 2010 because of a similar lack of documentation. Although the $2 billion the Bureau spent on its field infrastructure in 2010—including 12 regional census centers and almost 500 LCOs used to support field activities—represented a major investment, the Bureau lacks the information needed to accurately compare the costs of specific components from one decennial to the next. Such information would enable the Bureau to more accurately determine where any significant cost increases occurred and thus better focus its cost control efforts for the 2020 Census, as well as allow the Bureau to more precisely determine the potential cost savings of any operational changes. Although these structural changes are recent, the absence of documentation has been a challenge in the past as well. In a prior report comparing costs for the 1990 and 2000 censuses, we were unable to compare costs at the project level because of limitations in the available data and documentation. For the 1990 Census, the Bureau provided limited cost data by activity and project, so we were not able to attempt detailed cost comparisons. Moving forward, it will be important for the Bureau to put a process in place to enhance its ability to identify potential factors affecting cost growth and, if necessary, target cost control efforts appropriately. Although the Bureau identified five broad factors affecting cost growth, their ability to help the agency pinpoint and control future costs is limited because they mainly focus on high-level, generic management challenges rather than specific census-taking activities on which the Bureau can assess and take action as appropriate. Additionally, the Bureau has no data to support how much these factors contributed to cost growth. The five factors include 1. the increasing diversity of the population; 2. 
2. the demand for the Census Bureau to strive for improved accuracy;
3. the lack of full public participation in the self-response phase of the census, requiring the hiring of a large field staff for NRFU;
4. the failure or challenges with linking major acquisitions, the schedule, and the budget; and
5. substantial investments in major national updating of the address frame just prior to enumeration (2009).
The Bureau plans to use these factors to guide 2020 Census planning and research efforts. For example, the forthcoming research and testing phase will focus on the decreasing self-response rate; the linkage of acquisitions, schedule, and budget; and updates to the address frame. While these factors, which the Bureau developed through management experience, likely affected the cost of the census, evaluating the extent to which specific operations and activities drove up census costs would provide the Bureau with more actionable information. As one example, the Bureau identified the demand for improved accuracy as a factor, but this effort to improve accuracy involved a number of operations aimed at producing a more complete count, ranging from advertising in different languages to special enumeration programs aimed at hard-to-count populations. What is not clear, and will be important for the Bureau to determine, is how the cost of the special enumeration programs compared to those for 2000, the extent to which they contributed to the cost of the 2010 Census, and whether they produced the desired results. The Bureau has developed a range of design alternatives for the 2020 Census aimed at counting each housing unit at a lower cost than in 2010. The Bureau estimated that if it repeated the design of the 2010 Census, and assuming real costs grow at the same rate they did between 1990 and 2010, it would cost $151 to count each housing unit—more than a 55 percent increase compared to 2010. The challenge for the Bureau, as recognized in its 2020 Census business plan, is striking a balance between an accurate census, on the one hand, and reducing costs and managing risks, on the other. The Bureau’s 2020 design alternatives have potential for containing costs but at varying degrees of risk for meeting cost, schedule, and performance goals. The design alternatives focus on options to target address canvassing, use the Internet and other social media to increase response rates, and reengineer field and IT infrastructures. According to the Bureau, the final 2020 design is likely to incorporate both existing approaches as well as activities that have never been used in the decennial census, such as a near-paperless NRFU. According to the Bureau, the greater the change to the overall design, the greater the potential for cost savings. However, greater design changes also incur greater risk, and further testing will be needed to identify the risks, costs, and benefits of any new approaches. According to the Bureau, alternative one has the lowest risk, as it most closely mirrors the 2010 Census design and is not dependent on implementing innovations such as administrative records and targeted address canvassing. The remaining alternatives incorporate varying degrees of centralized infrastructure; address canvassing; and use of administrative records, the Internet, and social networks. For example, most of the new design options use administrative records, which could save money by reducing labor-intensive and costly field operations.
Yet, the Bureau has not previously used administrative records to supplement respondent data on a national level, so the new approach will present a certain degree of risk, as such things as data quality and access to records must first be resolved. The Bureau collects data on the costs of its field operations that are a potentially valuable source of information to help guide future cost-quality trade-off decisions during the planning process. However, it could make better use of this information in gaining an understanding of return on investment for costly census-taking activities, such as address building and NRFU. According to Office of Management and Budget (OMB) guidance on benefit-cost analysis, agencies should have a plan for periodic, results-oriented evaluation of the effectiveness of federal programs. The guidance also notes that retrospective studies can be valuable in determining if any corrections need to be made to existing programs and to improve future estimates of other federal programs. In addition, our Cost Estimating and Assessment Guide suggests that agencies should seek the best value solution by gathering data on alternatives that inform agencies on cost and performance trade-offs. One way agencies can improve their ability to evaluate benefits and costs is to examine the marginal cost of activities, or the incremental cost of producing one more unit of output. For the Bureau, this means mining its performance and cost data to evaluate the effectiveness of its operations and to identify potential opportunities for improvement. Although the Bureau has a number of efforts under way within two initiatives to help guide 2020 planning, only a handful are aimed at producing return on investment information that enhances its ability to make decisions on cost-quality trade-offs. These initiatives are the 2010 Census Program for Evaluations and Experiments (CPEX), which looks back at 2010 operations, and the research and testing phase, which looks ahead at potential design alternatives for 2020. According to Bureau officials, of the more than 100 planned evaluations, assessments, experiments, and quality profiles in CPEX, a few are designed to produce information describing the return on investment of census-taking activities, which can help the Bureau make decisions about cost-quality trade-offs. For example, 2 planned evaluations will examine potential cost savings for address canvassing—one looks at the potential cost reduction associated with targeted address canvassing and the other looks at potential cost savings associated with automated field data collection of address canvassing results. Moreover, of the planned 2010 CPEX evaluations for which we have a description, the vast majority will measure aspects of accuracy or coverage. The Bureau may be missing opportunities to mine performance data for information that could help officials increase the efficiency of costly field operations and could help inform difficult decisions for controlling costs and maintaining quality. As part of CPEX, the Bureau has planned about 50 assessments of specific enumeration activities and operations, such as address canvassing and NRFU. These assessments include an analysis of cost that would be of limited usefulness for informing return on investment decisions. For example, the assessments will compare budgeted and actual costs and indicate why an operation was over or under budget, but will not determine the marginal return for different enumeration or address-building operations.
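As a simple illustration of the kind of marginal-return comparison described here, an analysis could relate what each address-building operation cost to the number of addresses it added or corrected. The sketch below uses invented operation names, costs, and address counts rather than Bureau data.

```python
# Hypothetical illustration: rank address-building operations by the cost of each
# address they added or corrected. All names and figures are invented for this sketch.
operations = {
    "operation_a": {"cost": 450_000_000, "addresses_changed": 9_000_000},
    "operation_b": {"cost": 120_000_000, "addresses_changed": 1_500_000},
    "operation_c": {"cost": 60_000_000, "addresses_changed": 300_000},
}

ranked = sorted(operations.items(),
                key=lambda item: item[1]["cost"] / item[1]["addresses_changed"])

for name, data in ranked:
    cost_per_address = data["cost"] / data["addresses_changed"]
    print(f"{name}: ${cost_per_address:,.0f} per address added or corrected")
```

A ranking of this kind, combined with measures of how much each operation contributed to coverage, is the sort of return-on-investment information that the planned assessments described above would not produce.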
Information on the marginal returns on investments could, for example, help the Bureau determine where to focus cost control efforts. As one example, based on our analysis of operational data provided by the Bureau for NRFU, we determined that the marginal cost per questionnaire checked into LCOs was approximately $1,045 in the final weeks of the operation (see fig. 5). During this time, the Bureau completed a little over 2,300 questionnaires, or roughly 0.005 percent of the entire NRFU universe of over 47 million housing units. This estimate is roughly a $1,000 increase per questionnaire compared to the first few weeks of the operation, which began on May 1, when the Bureau completed approximately 39 percent of the NRFU universe. Thus, it cost the Bureau approximately 17 times more per questionnaire in the final weeks of NRFU to attempt to obtain information from nonresponding housing units, units that may have been contacted as many as six times in person or by phone. More extensive analyses of these data could help the Bureau determine the extent to which specific activities contributed to cost growth and help it target cost control efforts without compromising accuracy. As the Bureau enters the research and testing phase, several planned projects will yield information that will improve its ability to make decisions balancing the competing goals of cost and quality. According to the Bureau, it is essential to conduct research and testing of multiple design alternatives prior to deciding upon a final census design and technical solution to ensure that the final census design is effective and works within the 2020 Census environment. Our review of Bureau planning documents identified 8 of 35 projects scheduled in the early part of the decade that will include analyses of costs and benefits. For example, a project on reducing and improving in-person follow-up operations is designed to examine the costs and benefits of different contact strategies and whether these will achieve the goals of the operation. However, most projects examine the accuracy and quality implications of conducting enumeration and not cost implications. Without gathering data on cost during this phase, specifically the potential cost savings that could be realized with certain alternatives, the Bureau could be making decisions based on incomplete information on the design alternatives. The lack of emphasis on cost analyses is consistent with our previous reports that fundamental reforms will be needed to ensure that the Bureau’s management, culture, and business practices are aligned with cost-effective enumeration. According to Bureau officials, previous decisions about operational changes were based on a priority to improve quality and were sometimes made without complete knowledge of cost implications. As we reported in 2009, the Bureau has not always used available information to determine the value added of an operation. For example, the Bureau has the information but has not determined which of its 11 operations for building its address list provide the best return on investment in terms of contributing to accuracy and coverage. The Bureau’s planning documents have not clearly identified and defined decision points that can help avoid cost overruns and schedule delays. OMB guidance for large projects suggests that agencies develop a schedule with defined phases, decision points, and an identified decision authority to evaluate whether an agency should proceed to the next phase in the investment life cycle.
In addition, our previous body of work on acquisition policies in high-performing organizations includes the best practice of identifying critical junctures, also known as knowledge or decision points, in the acquisition cycle and requiring executive-level oversight at critical junctures. Agencies can use decision points to determine whether a particular investment is ready to proceed to the next phase. For example, when moving out of an early phase agencies must determine if resources—that is, technology and funding—and needs are matched. The 2020 Census is a complex, costly project with immutable deadlines. Decision points at key phases of the planning process could improve the Bureau’s ability to manage risks as well as achieve desired cost, schedule, and performance outcomes for the decennial. The Bureau’s 2020 business plan has a high-level preliminary schedule for the major phases of the decennial that includes, for example, a yearlong activity at the end of the research and testing phase to determine and refine initial operational designs. However, the schedule has no decision points at the end of research and testing or any phase, as best practices suggest, to determine whether progress was made and ensure that the agency’s needs for quality and accuracy match the available resources—that is, technology, design, time, and funding. In addition, there is no identified executive-level review at any point in the schedule. Since the research and testing phase is intended to develop a preliminary design from a range of alternatives, a decision point at the end of this phase could help the Bureau determine if it has enough information to support the increased investment necessary to move to the next phase of development and testing (see fig. 6). At subsequent stages in the process, decision points could be used to determine that the design was stable enough to meet operational requirements. Later decision points could also be used to determine whether a particular design alternative could be implemented within cost and schedule constraints while meeting quality targets and maintaining reliability. Absent such an approach at each phase, the Bureau lacks assurance that it has obtained the critical technological and design knowledge that best practices call for to avoid cost overruns, schedule slips, and performance shortfalls going forward. According to one Bureau planning memo, cost is one of four categories of criteria that will be used to evaluate design options for the 2020 Census. However, the memo does not describe specifically how the cost criterion will be used to select among design alternatives. For example, the criterion for cost can be expressed by ranking costs (i.e., least costly to most costly), weighting costs for different elements, or specifying that costs fall within a range. We have previously reported that criteria should be clearly defined, well documented, transparent, and consistently applied. Neither the Bureau’s strategic plan nor its early business plan, which outlines and guides the early development of the 2020 Census, describes criteria or identifies when criteria would be used to select the design of the 2020 Census. Bureau officials said they have not established when they will develop specific evaluation criteria for cost. Further, they acknowledged that selecting among design alternatives may take place during the research and testing phase, which begins in fiscal year 2012. 
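To illustrate one of the options noted above for expressing the cost criterion, a minimal weighted-scoring sketch follows. The criteria names, weights, and scores are entirely hypothetical and are not drawn from the Bureau’s planning memo.

```python
# Hypothetical weighted-criteria comparison of design alternatives.
# The criteria, weights, and scores (1 = worst, 5 = best) are invented for illustration.
weights = {"cost": 0.4, "accuracy": 0.3, "schedule_risk": 0.2, "feasibility": 0.1}

alternatives = {
    "alternative_1": {"cost": 2, "accuracy": 4, "schedule_risk": 5, "feasibility": 5},
    "alternative_2": {"cost": 4, "accuracy": 4, "schedule_risk": 3, "feasibility": 4},
    "alternative_3": {"cost": 5, "accuracy": 3, "schedule_risk": 2, "feasibility": 3},
}

for name, scores in alternatives.items():
    weighted_score = sum(weights[criterion] * score for criterion, score in scores.items())
    print(f"{name}: weighted score {weighted_score:.1f}")
```

Whatever form the criterion ultimately takes, documenting it in advance would make clear how cost is weighed against the other criteria before any design alternative is eliminated.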
In addition, Bureau officials told us that not all 2020 Census planning memos will be updated throughout the course of 2020 planning. Therefore, it is unclear how updates to criteria will be made to this planning memo. As a result, the Bureau may make decisions to eliminate design alternatives before clearly documenting how cost criteria will be applied, as well as how the alternatives will be considered along with the other criteria. The Bureau’s early cost estimates range from $12.8 billion to $18 billion for four of the six design alternatives. Because of the wide range of 2020 cost estimates, documenting and consistently using cost as a criterion when deciding among design alternatives can help the Bureau control costs. It is important for the Bureau to apply cost in decision making because the Bureau has not achieved previous goals for containing costs and made late design changes that proved costly in previous censuses. The Bureau has not yet established policies, procedures, or guidance for developing the 2020 Census life cycle cost estimate and is at risk of not following related best practices. The Bureau uses the life cycle cost estimate as the starting point for the annual budget formulation process and, according to our Cost Estimating and Assessment Guide, a reliable cost estimating process is necessary to ensure that cost estimates— particularly for large, complex projects like the 2020 Census—are comprehensive, well documented, accurate, and credible. Put another way, reliable cost estimates are essential for a successful census because they help ensure that the Bureau has adequate funds and that Congress, the administration, and the Bureau itself can have reliable information on which to base decisions. Our guide identifies 12 steps of a high-quality cost estimation process, including, among other things, determining the estimate’s purpose; defining the program’s characteristics; clearly defining ground rules and assumptions; conducting sensitivity, risk, and uncertainty analyses; and documenting all steps used to develop the estimate. These best practices, if followed correctly, should produce reliable estimates that management can use for making informed decisions (see app. IV). To date, the Bureau has developed a rough-order-of-magnitude estimate, which covers the four 2020 Census design alternatives that are the most similar to the 2010 Census design. Bureau officials stated that this was not an official estimate, but rather a starting point that will be revised and improved as the Bureau gathers more data in the research and testing phase. As the Bureau goes forward in its 2020 planning, it will be important for it to have reliable and accurate cost estimates as it narrows down design alternatives and moves closer to a final design. The Bureau’s early 2020 planning documents note that the Bureau intends to use our cost guide as it develops cost estimates for 2020, and Bureau officials have stated that its cost estimators would follow best practices wherever practicable. Nevertheless, the Bureau has not yet documented how it plans to conduct its cost estimates; and, while officials stated that they plan on developing more detailed documentation in the future, they could not provide a specific time when such documents would be finalized. Although the 2020 Census is still a number of years away, the timeline for the Bureau to develop a cost estimation process is growing short. 
The Bureau plans to begin work on an official life cycle cost estimate in fiscal year 2013, and plans to include its initial life cycle cost estimate in its fiscal year 2015 budget submission covering initiatives from 2015 through 2018. As a result, the Bureau has about a year to establish and finalize a process for preparing high-quality life cycle cost estimates. The importance of reliable cost estimates is underscored by the Bureau’s experience leading up to the 2010 Census, where we found that the Bureau’s cost estimate lacked detailed documentation on data sources and significant assumptions and was not comprehensive because it did not include all costs. Among other weaknesses, we noted that the Bureau had insufficient policies and procedures for conducting high-quality cost estimation. Partly as a result, some operations had substantial variances between their initial cost estimates and their actual costs. Until the Bureau finalizes its cost estimating policies, procedures, and guidance, the Bureau runs the risk of again developing unreliable cost estimates for 2020. For the Bureau to improve its ability to control the costs of future censuses without sacrificing accuracy, it will be critical for it to have a better understanding of the factors affecting cost increases from prior decennials, as well as how various census-taking activities contributed to the overall quality of the count. Although the Bureau will gain valuable insights from its evaluations of the 2010 Census as well as from research and testing for 2020, this information will only be of limited use in helping the Bureau develop a complete picture of the key drivers of census costs and the steps needed to control costs going forward. Therefore, to improve its capacity to identify cost drivers and effectively target cost control efforts, it will be important for the Bureau to develop a way to compare costs for key activities across censuses and assess the marginal returns of each. The Bureau has set a clear goal for controlling costs while maintaining accuracy for the 2020 Census, and has developed a range of design alternatives aimed at achieving that goal. Given the number of design alternatives the Bureau is evaluating for the 2020 Census, it will be important for the Bureau to set explicit decision points for executive-level review at the end of individual phases to reduce the risk of cost, schedule, and performance shortfalls. Without clearly defined decision points in its 2020 planning phases, the Bureau may not be able to determine that it is on track or make the necessary adjustments in its design approach to achieve a more cost-effective census. Moreover, decision points would allow the Bureau to determine its readiness to move on to the next phase in 2020 planning. In conjunction with scheduled decision points, it will be critical for the Bureau to finalize evaluation criteria that are transparent, thoroughly documented, and consistently applied to maximize its ability to control costs for the 2020 Census. Without specifying explicit cost evaluation criteria for selecting among design alternatives, the Bureau and stakeholders, such as Congress, cannot accurately consider costs and may not have assurance that they are on the path to a more efficient census in 2020. Cost estimates are necessary tools for major programs because they help in developing budget requests and efficiently allocating scarce resources. In a time of constrained budgets, these tools become even more important. 
However, cost estimates are technically complex and cost estimators face challenges in developing estimates for complex programs such as the 2020 Census. Previously, the Bureau had insufficient policies and procedures for developing reliable and high-quality cost estimates. Without clear guidance in place, there is no assurance that the Bureau will develop life cycle cost estimates for 2020 that are reliable and high-quality and follow best practices. To improve the Bureau’s ability to control costs for the 2020 decennial and balance cost and quality, we recommend that the Secretary of Commerce direct the Under Secretary of the Economics and Statistics Administration, as well as the Director of the U.S. Census Bureau, to take the following four actions:
1. Develop and document a method to compare costs in 2010 to those in future decennials, for example, around major activities or investments, to allow the Bureau to identify and address factors that contribute to cost increases.
2. Analyze data from key census-taking activities to determine their marginal costs and benefits, and use this information to inform decisions on developing more cost-effective methods.
3. Identify decision points at the end of each planning phase and assign decision-making authority at the executive level, as well as consider adding decision points within phases to determine progress and readiness to proceed to the next phase.
4. Finalize how the Bureau will apply cost as an evaluation criterion for choosing among design alternatives for 2020 and ensure that all criteria are transparent, well documented, and consistently applied before alternatives are eliminated.
We have previously recommended that the Secretary of Commerce direct the Bureau to establish guidance, policies, and procedures for cost estimation that would meet best practice criteria. To help ensure that the Bureau produces a reliable and high-quality cost estimate for the 2020 Census, we recommend that the Bureau take the following action: Finalize guidance, policies, and procedures for cost estimation in accordance with best practices prior to developing the Bureau’s initial 2020 life cycle cost estimate. The Secretary of Commerce provided written comments from the Bureau on a draft of this report on January 6, 2012. The comments are reprinted in appendix II. The Department of Commerce expressed broad agreement with the overall theme of the report but did not directly comment on the recommendations. It raised concerns about specific aspects of the summary of findings, which GAO addressed as appropriate. Specifically, the Bureau agrees with the importance of using analysis, such as assessing marginal returns, to help with decision making on balancing the need to control costs while maintaining accuracy. Moving forward, it will be important for the Bureau to also recognize that a more in-depth understanding of the growth in costs from prior censuses can, in fact, strengthen its decision-making ability and help it more effectively target cost control efforts in the future. The Bureau said understanding the growth in costs from 2000 through 2010 in depth has not been its highest-priority area for investment of scarce resources. We are sensitive to existing budget constraints.
The fiscal issues facing federal agencies make it even more imperative for Bureau decision makers to develop and use actionable information, such as data on the extent to which specific operations and activities drove up costs, to pinpoint problem areas and target cost control efforts accordingly. The Bureau expressed concern that the summary of findings and conclusions on the highlights page seemed premature and unsupported by discussions in the full report. In commenting on the first paragraph, the Bureau stated that it does not believe its inability to identify specific factors affecting past growth will make it difficult to control costs for the 2020 Census. However, our report concludes that the Bureau’s inability to identify specific actionable factors will make it difficult for the Bureau to focus its cost control efforts for the 2020 Census. To help pinpoint problem areas for controlling census costs, it is important for the Bureau to have a better understanding of the specific sources of cost growth. This requires analysis of costs below the broad category level, focusing on projects that tie directly to major operations and investments. We believe that understanding how the cost of these programs compared to 2000, the extent to which they contributed to the cost of the 2010 Census, and whether they produced the desired results can help with decision making on areas where there are trade-offs in cost and accuracy. We added language to the highlights page to reflect the need for analyses to more effectively target future cost control efforts. In commenting on the second paragraph of the highlights page, the Bureau stated that it had not yet received any appropriated funds or had the opportunity to develop program management efforts for the 2020 Census that would allow the agency to establish formal guidance for developing cost estimates. However, the paragraph discusses practices for strengthening agency decision making for large projects rather than establishing formal guidance for developing cost estimates. OMB guidance for large projects and our previous body of work on acquisition policies in high-performing organizations suggest that setting explicit decision points for executive-level review at the end of individual planning phases can help reduce the risk of cost, schedule, and performance shortfalls. By clearly defining decision points in its 2020 planning phases, the Bureau could better ensure that it is on track or make adjustments in its design approach earlier in the 2020 planning process to achieve a more cost-effective census. On the issue of having appropriated funds for planning purposes, we agree that funding for the 2020 Census life cycle did not officially begin until fiscal year 2012. However, the Bureau includes costs of early planning for the next census in the final years of the previous census life cycle (i.e., 2010 appropriations pay for 2020 planning). During our audit, we interviewed individuals who were planning and developing the 2020 Census. We also reviewed informational memos, such as the strategic plan and the business plan issued in 2009 and 2010, respectively, as guiding documents for the 2020 Census planning effort. Moreover, during the audit, the Bureau released newly developed and revised planning documents, such as the updated business plan and rough-order-of-magnitude estimates. While we made changes to the second paragraph of the highlights page, these were not made in response to this comment.
The Bureau commented that it was unclear why the graphic in our highlights page focused on costs and mail response rate data over time. We selected mail response rates because, as the Bureau notes, declining mail response rates are significant and have led to higher costs. For example, the mail response rate directly dictates the number of housing units in the NRFU universe. NRFU is the largest and most costly Bureau field operation and has an impact on overall census costs. We agree with the Bureau that the declining mail response rate is only one factor leading to higher census costs. Our report acknowledges other factors that contribute to higher costs. As such, we made no change to the highlights page in response to this comment. The Bureau made a number of technical comments on the body of the report. The Bureau commented that our report implies that the Bureau attributed all cost growth over the decades only to population growth. In fact, our draft report has a section dedicated to the five broad factors the Bureau identified as affecting cost growth. However, we added clarifying language to the discussion on workload and census costs to note workload is one of the factors driving up census costs. The Bureau commented that the statement that the Bureau cannot determine areas of cost growth is a sweeping and premature conclusion given that the 2020 research and testing program just began. The Bureau stated that its primary focus is to study ways to reduce the cost of the next census while maintaining quality. While we acknowledge that the research and testing effort may help identify ways to reduce costs, coupling that information with specific factors of past cost growth could strengthen the Bureau’s ability to target cost reduction efforts in the future. We made no change to the report to address this comment. The Bureau noted that statements in two areas seemed to be based on an assessment of how well the Bureau documented and analyzed costs relative to our Cost Estimating and Assessment Guide. The Bureau states that the guide was not issued until March 2009 and that the Bureau has not fully incorporated all those practices into a program as large, costly, and complex as the census. While it is true that we published the guide in March 2009, we issued an exposure draft in 2007 and shared a copy of our guide with the Bureau during our October 2006-June 2008 audit of Bureau cost estimating practices (GAO-08-554). In fact, the cost guide is based on long-standing industry and government best practices on cost estimation followed long before GAO published them in a concise form in 2009. Moreover, in its June 2008 action plan to address GAO recommendations, the Bureau noted its plan to use the guide, particularly the 12 steps of a high-quality cost estimating process. In less than 1 year from now, the Bureau plans to begin work on its official life cycle cost estimate for the 2020 Census. By not establishing policies, procedures, or guidance for developing life cycle cost estimates, the Bureau again runs the risk of developing cost estimates that are not comprehensive, accurate, or credible. We made no change to the report in response to this comment. The Bureau commented that it was unsure why we presented the NRFU analysis of marginal costs as it was a small percentage of the entire budget. However, we used this as an example of how such an analysis may help point to areas for targeting cost reduction efforts or for modifying the Bureau’s approach to data collection. 
The analysis does not imply, as the Bureau’s comment suggests, that the Bureau should ignore the remaining households at the end of NRFU. Instead, it highlights the importance of considering alternative approaches in order to ensure a complete and cost-effective enumeration. The more important point is that it highlights the increasing marginal costs of contacting certain households at the tail end of the enumeration. We agree that the Bureau cannot ignore hard-to-contact households. By mining performance data on the NRFU operation, the Bureau may be in a better position to identify alternative approaches for the hardest-to-contact households that have the greatest potential to reduce costs without compromising accuracy. We made no change to the report to address this comment. Finally, the Bureau commented that our reported costs for local census operations were incomplete, so we corrected the number based on information provided by the Bureau. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other interested congressional committees, the Secretary of Commerce, and the Director of the U.S. Census Bureau. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To identify the key factors affecting cost growth from the 2000 Census to the 2010 Census, we reviewed U.S. Census Bureau (Bureau) strategic planning documents for 2000 and 2010, Bureau operational and systems plans for 2000 and 2010, Bureau assessments and evaluations of past census operations, National Academy of Sciences work on decennial census costs, and our prior work on implementation of 2000 and 2010 census operations. We assessed the Bureau’s approach to determining trends in cost data using best practices for cost estimation in GAO’s Cost Estimating and Assessment Guide. The guide illustrates the importance of using cost data to understand trends and drivers. In addition, we interviewed Bureau officials and reviewed agency documentation on actions taken to determine sources of cost growth between decennials. To identify sources of cost growth from the 2000 Census to the 2010 Census, we reviewed and analyzed expenditure data on 2000 Census and 2010 Census life cycle costs from the Bureau’s Commerce Business System (CBS). CBS is the Bureau’s financial management system and the official system of record for expenditures. CBS contains cost information at two levels of aggregation for the census: budget categories, which are broad groupings of related items, and budget projects, which are the lowest level of cost information. To determine the level of cost growth from the 2000 Census to the 2010 Census, we developed life cycle totals for each census and life cycle totals for each of the budget categories within those censuses, comparing their absolute and percentage growth. We adjusted all monetary data for inflation using the gross domestic product implicit price deflator. All costs were adjusted to fiscal year 2010 dollars. In addition, we compared costs after adjusting for the number of housing units for each census.
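A minimal sketch of these adjustments follows. The nominal totals, deflator values, and housing unit counts below are illustrative stand-ins rather than actual CBS figures, chosen only so the output roughly matches the per-housing-unit costs of about $70 and $97 discussed in this report.

```python
# Illustrative only: convert nominal life cycle costs to fiscal year 2010 dollars
# using a GDP implicit price deflator index (fiscal year 2010 = 100) and compare
# the cost per housing unit for each census. All input values are stand-ins.
censuses = {
    "2000 Census": {"nominal_cost": 6.6e9, "deflator": 81.0, "housing_units": 115.9e6},
    "2010 Census": {"nominal_cost": 12.7e9, "deflator": 100.0, "housing_units": 131.0e6},
}

base_deflator = 100.0  # fiscal year 2010

for name, c in censuses.items():
    cost_fy2010 = c["nominal_cost"] * base_deflator / c["deflator"]
    cost_per_unit = cost_fy2010 / c["housing_units"]
    print(f"{name}: ${cost_fy2010 / 1e9:.1f} billion in fiscal year 2010 dollars, "
          f"${cost_per_unit:.0f} per housing unit")
```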
We assessed the reliability of the Bureau’s CBS data by reviewing relevant documentation, interviewing knowledgeable agency officials, and conducting comparisons with other data sources. We reviewed previous GAO, Department of Commerce Inspector General, and other Department of Commerce reports covering the system. We conducted interviews with Bureau officials who maintain the system at the Bureau level and its primary users within the Decennial Management Division. The system is referred to in older GAO reports as the Commerce Administrative Management System (CAMS). It is the same system, but the name changed over time. After receiving the cost data covering the 2000 and 2010 censuses, we compared them to financial management reports provided by Bureau officials to determine data consistency. We determined that these data are sufficiently reliable for the purposes of this report. Our review was subject to some limitations. The budget categories and budget projects for 2000 and 2010 varied from census to census. We requested, and the Bureau provided, recategorized cost data to facilitate comparison of the 2000 budget categories with 2010 budget categories. We also requested recategorized cost data to facilitate comparison of 2000 budget projects with 2010 budget projects, but the Bureau was unable to provide them. We attempted to compare costs at the budget project level from 2000 to 2010 but were unable to do so for the following reasons: (1) the Bureau’s budget projects were not consistent from 2000 to 2010, making it impossible to match projects directly using project descriptions or project codes; (2) the number of projects increased substantially from 236 in 2000 to 1,175 in 2010; and (3) the Bureau was unable to provide us with any documentation tracking similar projects from the 2000 Census to the 2010 Census. We attempted to group similar projects in 2000 and 2010 for comparison, but the available project descriptions did not provide enough information to group 2000 costs with the same precision as 2010. Therefore, we could not conduct a comparison of groups of projects. To assess the Bureau’s plans for controlling costs for the 2020 Census and what additional steps, if any, could be taken, we reviewed available documentation on 2020 Census planning and 2010 Census evaluations and assessments, such as 2010 evaluation study plans. We consulted with GAO staff with expertise in economics to determine the potential for leveraging available Bureau cost data to better support the Bureau’s ability to make cost-quality trade-offs. We reviewed Office of Management and Budget guidance on major acquisitions as well as GAO work on acquisition best practices to determine whether the use of decision points could help the Bureau make more informed decisions about census design that could relate to cost control. Further, we reviewed Bureau documentation on criteria for selecting among 2020 design alternatives. We analyzed the marginal cost of conducting nonresponse follow-up (NRFU)—the costliest field operation in the 2010 Census—to determine how the Bureau might be able to further use its cost data in planning for 2020. We used Bureau cost and progress data from the 2010 Census to identify the marginal costs of the NRFU operation in 3-week intervals. This analysis compared the cost of the operation in each interval with the number of questionnaires checked in to identify return on investment.
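A minimal sketch of this interval calculation follows. The daily cost and check-in figures are invented placeholders for the Bureau’s cost and progress data and show only the shape of the computation.

```python
# Illustrative only: marginal cost per questionnaire checked in, by 3-week interval.
# daily_cost and daily_checkins are invented stand-ins for the Bureau's cost and
# progress data, with one entry per day of the operation.
daily_cost = [9_000_000] * 21 + [7_000_000] * 21 + [4_000_000] * 21 + [2_500_000] * 21
daily_checkins = [150_000] * 21 + [90_000] * 21 + [30_000] * 21 + [2_500] * 21

interval_days = 21  # 3-week intervals
for start in range(0, len(daily_cost), interval_days):
    interval_cost = sum(daily_cost[start:start + interval_days])
    interval_checkins = sum(daily_checkins[start:start + interval_days])
    marginal_cost = interval_cost / interval_checkins
    print(f"weeks {start // 7 + 1}-{(start + interval_days) // 7}: "
          f"${marginal_cost:,.0f} per questionnaire checked in")
```

Applied to the actual cost and progress data, a calculation of this kind is what produced the finding of approximately $1,045 per questionnaire checked in during the final weeks of NRFU, roughly 17 times the cost per questionnaire in the first few weeks of the operation.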
We assessed the reliability of the Bureau’s 2010 cost and progress data by consulting with the Bureau about variables we used and reviewing past GAO data reliability work that used cost and progress data. The cost and progress system is a daily management tool used by Bureau officials to track the work completed for various census operations. It includes measures of cost (such as field hours or mileage costs) and measures of work completed (such as questionnaires checked in). Our estimate of the marginal costs of checking in NRFU questionnaires in the early weeks of the operation may be somewhat overstated because, for instance, we included training costs as well as fieldwork costs; training costs were incurred in the early part of NRFU and were not spread over the life of the operation. As a result, costs for the early weeks of the operation could be lower than presented in the graphic. After developing the marginal cost methodology, we followed up with agency officials knowledgeable about the data when we had questions about potential errors or inconsistencies. In addition, we reviewed prior GAO data reliability work on cost and progress data that examined the accuracy and completeness of the entry and processing of data. Based on this work, we determined that the data were sufficiently reliable for gauging the approximate marginal cost increase per questionnaire checked in during the final weeks of the NRFU operation. To assess the extent to which the Bureau’s plans for developing life cycle cost estimates for 2020 are consistent with best practices, we reviewed available Bureau documentation on the Bureau’s life cycle cost estimation processes and procedures. For example, we reviewed documentation from the Bureau’s rough-order-of-magnitude estimate—an early high-level estimate developed from limited data. We reviewed the guidance contained in our Cost Estimating and Assessment Guide and our previous work on census life cycle cost estimates. We also conducted interviews with knowledgeable Bureau officials and contractor staff and received a demonstration of new capabilities in the Bureau’s budgeting tool that will be used for 2020 Census cost estimation. We conducted this performance audit from December 2010 through January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Signora May, Assistant Director; Tom Beall; Tim Carr; Eric Charles; Sara Daleski; Dewi Djunaidy; Ron Fecso; Robert Gebhart; Rich Hung; Kirsten Lauber; Jason Lee; Andrea Levine; Donna Miller; Stacey Steele; and Jack Wang made key contributions to this report.

A complete count of the nation’s population is an enormous challenge requiring the U.S. Census Bureau (Bureau) to balance requirements for accuracy with the need to control escalating costs. The 2010 Census was the costliest U.S. Census in history at about $13 billion, and was about 56 percent more costly than the $8 billion cost of the 2000 Census (in 2010 dollars). The fundamental challenge facing the Bureau going forward is cost-effectively counting a population that is growing steadily larger, more diverse, and becoming increasingly difficult to enumerate.
As requested, this report assesses (1) the key factors affecting cost growth from the 2000 Census to the 2010 Census; (2) the Bureau’s plans for controlling costs for the 2020 Census and what additional steps, if any, could be taken; and (3) the extent to which the Bureau’s plans for developing life cycle cost estimates for 2020 are consistent with best practices. The report is based on GAO’s analysis of Bureau data and documents as well as interviews with Bureau officials. The average cost to count each housing unit rose from $70 in 2000 to $97 in 2010 (in constant 2010 dollars). While the U.S. Census Bureau (Bureau) made changes to its budget structure from 2000 to 2010, it did not document the changes in a way that would facilitate comparisons over time and cannot identify specific drivers of this cost growth. According to GAO’s Cost Estimating and Assessment Guide, an agency can strengthen its ability to control costs by using available cost data to make comparisons over time and identify and quantify trends. The Bureau faces the fundamental challenge of striking a balance between controlling costs and maintaining accuracy. However, the Bureau’s inability to identify specific actionable factors affecting past growth will make it difficult for the Bureau to focus its efforts to control costs for the 2020 Census. The Bureau developed several design alternatives for the 2020 Census that could help reduce costs, but has not identified decision points when executives would review progress and decide whether the Bureau is prepared to move forward from one project phase to another. Office of Management and Budget guidance and previous GAO work support the use of these practices to strengthen an agency’s decision making on large-scale projects. Incorporating these practices in its 2020 planning could help the Bureau improve its ability to manage risk to achieve desired cost, schedule, and performance outcomes. The Bureau is taking steps to strengthen its life cycle cost estimates. However, the Bureau has not yet established guidance for developing cost estimates. The Bureau is scheduled to begin work on the 2020 Census estimate in fiscal year 2013 but has limited time to develop guidance. By finalizing such guidance, the Bureau can better ensure that it is developing comprehensive, accurate, and credible estimates for the 2020 Census. GAO recommends that the Census Director develop a method to identify and address specific factors that contribute to cost increases, identify decision points, and finalize guidance for the 2020 life cycle cost estimate. The Department of Commerce expressed broad agreement with the overall theme of the report but did not directly comment on the recommendations. It raised concerns about specific aspects of the summary of findings, which GAO addressed as appropriate.
More than 75 percent of Haiti’s population lives on less than $2 a day, and Haiti’s unemployment rate is estimated at 60 to 70 percent. These conditions were exacerbated when a large earthquake devastated parts of the country, including the capital, on January 12, 2010. Since the earthquake, Haiti has suffered from a cholera epidemic that, as of March 2013, had affected almost 650,000 persons and caused over 8,000 deaths. In March 2013, the International Organization for Migration estimated that, of the original 2 million persons affected, about 320,000 individuals remained displaced in camps from the earthquake. In response to the earthquake, Congress provided more than $1.14 billion in reconstruction funds for Haiti in the Fiscal Year 2010 Supplemental Appropriations Act. Of this amount, USAID received $651 million through the Economic Support Fund for its bilateral reconstruction activities, as shown in table 1. The Act required State to provide periodic reports to Congress on the program. Specifically, the Act required State to submit five reports to the Senate Committee on Appropriations, beginning in October 2010 and every 180 days thereafter until September 2012, on funding obligations and disbursements and program outputs and outcomes. In addition, the Senate Committee on Appropriations, in its Committee Report accompanying the Act, directed that State’s reports include, among other things, (1) a detailed program-by-program description of USAID’s activities; (2) a description, by goal and objective, and an assessment of the progress of U.S. programs; and (3) amounts of funding obligated and expended on the programs during the preceding 6 months. In our November 2011 report on Haiti reconstruction, we reported that USAID had difficulties securing staff—particularly technical staff such as contracting officers and engineers—who were willing to live and work in the country after the earthquake and who could bring the expertise necessary to plan and execute large, complex infrastructure projects. We also reported that such difficulties had contributed to delays in U.S. efforts. As of December 2012, the USAID mission in Haiti (the mission) had increased its direct-hire staff positions filled from 7 of 17 (41 percent) soon after the earthquake to 29 of 36 (81 percent) positions filled. The overall 5-year U.S. reconstruction strategy for Haiti, known as the Post-Earthquake USG Haiti Strategy: Toward Renewal and Economic Opportunity, is consistent with the government of Haiti’s development priorities in that it seeks, among other goals, to encourage reconstruction and long-term economic development in several regions of the country. These areas, known as “development corridors,” include the Cap-Haïtien region on Haiti’s northern coast and the St-Marc region on Haiti’s western coast; these areas were not close to the earthquake epicenter but were where some people from Port-au-Prince were displaced after the earthquake. The strategy notes that 65 percent of Haiti’s economic activity was located in greater Port-au-Prince and that the U.S. government’s intent is to support new economic opportunities in other development corridors, in addition to assisting with reconstruction in the Port-au-Prince corridor, which suffered the most damage from the earthquake (see fig. 1). On January 11, 2011, the U.S. government, the government of Haiti, the Inter-American Development Bank (IDB), and a private South Korean garment manufacturer, Sae-A Trading Co. Ltd. 
(Sae-A), signed an agreement to support development of the Caracol Industrial Park (CIP) that included the following commitments: the IDB committed to provide funding to the Haitian government to build the CIP and some associated facilities; the U.S. government committed to build a power plant, contribute toward the building of a nearby port, and support the construction of 5,000 nearby housing units with associated site infrastructure; and Sae-A committed to be the anchor tenant and hire 20,000 local employees at the CIP. In concert with its economic growth efforts, USAID, in coordination with State’s Office of the Haiti Special Coordinator in Washington, D.C., developed the New Settlements program to address the severe post-earthquake permanent housing shortage in Haiti. USAID’s goal was to construct up to 15,000 new permanent houses on previously undeveloped sites in three designated development corridors—10,000 in Port-au-Prince and St-Marc, and 5,000 in Cap-Haïtien. In part, USAID’s program aimed to support the Haitian government’s goal of decentralizing economic growth outside Port-au-Prince by increasing the housing stock in communities near the industrial park planned for northern Haiti. USAID planned to provide funding for the preparation of all the settlement sites, to include activities such as grading the land and providing proper drainage, access roads, pedestrian pathways, and infrastructure for delivery of utility services. Each new settlement site would include a certain number of plots on which USAID or a partner nongovernmental organization (NGO) would then construct a house. Of the 15,000 plots it planned to develop, USAID planned to build 4,000 houses, while NGOs and other donor partners would build 11,000 houses. USAID estimated that, when completed, about 75,000 to 90,000 people would benefit. As of March 31, 2013, the majority of supplemental funding for USAID’s program sector activities had not been obligated or disbursed. The Department of State submitted four of five reports to Congress, as required in the Supplemental Appropriations Act of 2010, but did not submit them in a timely manner. State’s reports did not include some information on funding, program sector activities, and progress toward achieving the goals and objectives of the program that the Senate Committee on Appropriations had directed State to include. All reporting requirements have now ended. As of March 31, 2013, 31 percent of the supplemental funding provided for Haiti reconstruction efforts had been disbursed. Of the $651 million in funding from the 2010 Supplemental Appropriations Act that USAID has allocated for bilateral earthquake reconstruction activities, USAID had obligated about $293 million (45 percent) and disbursed about $204 million (31 percent). The amount of funds obligated and disbursed varies among activities in the six sectors to which supplemental funds were allocated. For example, the majority of funding obligated to date has been obligated in just two sectors (shelter and governance and rule of law), as shown in table 2. In its periodic reports to Congress, State reported on the general amounts of supplemental funding obligated and disbursed, as required in the Act. State also included some anecdotal information on program outputs and outcomes, which the Act also required. For example, the report submitted by State in January 2013 noted that work had begun to rehabilitate damaged irrigation systems and that reconstruction of earthquake-damaged health infrastructure was underway.
However, State’s reports did not include, among other things, (1) a detailed program-by-program description of USAID’s activities; (2) a description, by goal and objective, and an assessment of the progress of U.S. programs; and (3) amounts of funding obligated and disbursed on the programs during the preceding 6 months, as directed by the Senate Committee on Appropriations in its report accompanying the Act. For example, none of State’s reports included a program-by-program description of USAID’s sector activities, such as shelter and energy, or an assessment of sector progress. In particular, State’s final report, submitted to Congress in January 2013, did not mention that USAID had substantially reduced the number of permanent shelters it had planned to construct. Further, State’s January 2013 report did not mention that USAID had not generated any outputs or outcomes for the port construction project, even though the report did mention that USAID had experienced significant delays in planning the project, including a feasibility study that was initially scheduled to be completed 7 months earlier, in May 2012. Finally, while State’s reports included overall cumulative amounts of funding obligated and disbursed, they did not provide such information for specific programs during the preceding 6 months. State’s inclusion of such information, as well as the sector-specific funding information directed by the Senate committee, could have been useful in informing Congress of USAID’s progress. State submitted four of the five required reports to Congress on the status of U.S. efforts in Haiti, but none of the submitted reports was delivered in a timely manner. The Act required State to submit the five periodic reports beginning in October 2010 and approximately every 6 months thereafter until September 2012. State did not submit the first report, required in October 2010, because, according to State officials, the supplemental funds had just been received, there was little to no activity to report, and the Post-Earthquake USG Haiti Strategy had not yet been approved. State submitted its initial report, which included funding and activities through March 31, 2011, in July 2011—more than 2 months after the April 29, 2011, due date for the second required report. The three subsequent reports were submitted in January 2012, June 2012, and January 2013. The submission dates for all four reports ranged from more than 1 month to nearly 4 months late. In addition to the late submission of the reports, the "as of" date of funding data presented in the reports was not timely. For example, the report submitted in January 2013 included funding data as of September 30, 2012—nearly 4 months earlier than the date the report was submitted. All reporting requirements under the Act have ended. We discussed the reports with State officials, who noted that State and USAID routinely provide funding and progress information to Congress through other reporting mechanisms. For example, State and USAID arrange oral briefings and periodic conference calls with congressional staff about every 2 months, and other meetings as requested by members of Congress. State officials emphasized that they considered the reports to Congress to be only one tool in a range of reporting mechanisms. USAID has committed $170.3 million to construct a power plant and port to support the newly developed CIP, with mixed results to date.
In June 2012, the USAID mission completed the first phase of the CIP power plant for $17.0 million, 11 percent less than the $19.1 million allocated, and in time to supply the first CIP tenant with power. Planning for the port is behind schedule and will result in port construction beginning at least 2 years later than initially planned. The mission has had a vacant port engineer position for more than 2 years, having made one unsuccessful attempt to fill this position prior to May 2013, when it issued a second solicitation. As of June 2013, this position remains unfilled. The lack of port expertise at the mission has contributed to (1) unrealistic initial time frames, (2) delays in awarding the contract for a feasibility study, and (3) incomplete information in the feasibility study. According to initial estimates of port construction costs, USAID funding will be insufficient to cover approximately $117 million to $189 million of projected costs, and it is unclear whether the Haitian government will be able to find a private sector company willing to finance the remainder of the project. Sustainability of the port and power plant depends on the viability of the industrial park, which will generate a substantial portion of the revenue for both facilities, as well as other factors such as the government of Haiti’s capacity to manage or oversee these investments. The U.S. government supports a public-private partnership to develop the CIP in northern Haiti with $170.3 million in funding allocations to projects related to a nearby power plant ($97.9 million) and port ($72.4 million). According to State officials, the U.S. government’s decision to provide funding for the power plant and port was bolstered by review of an economic impact study of the CIP commissioned by the IDB and by State’s own calculations. State officials acknowledge that the limited availability of credible data for Haiti can introduce significant margins of error into assessments of the CIP’s impact on the region’s net employment or income. Therefore, such estimates are subject to considerable uncertainty. The findings from the IDB study and State’s calculations included the following: The IDB-commissioned study estimated that the CIP would increase total employment by about 75,000 jobs, including 37,000 permanent jobs at the CIP, and generate $360 million in annual income, including approximately $150 million to CIP employees, most of whom are projected to receive the minimum wage. State officials calculated that the CIP will create up to 65,000 jobs on site by using an estimate of the average number of square meters per factory worker observed in light manufacturing facilities worldwide. This simple calculation assumes that all available factory space in the CIP would be filled and that the tenants would be from those same types of industries. However, these estimates may overstate the impact on total employment and income in Haiti because they do not account for the possibility that people employed in CIP-related jobs might otherwise be employed in the formal or informal sector in the absence of the CIP. The IDB’s progress in building the CIP and filling it with tenants is still ongoing (see fig. 2). Sae-A moved into the first CIP building in March 2012. By December 2012, it had shipped its first container of clothes to the United States and, by January 2013, was employing approximately 1,300 Haitian employees from the surrounding communities.
Two other companies, a paint manufacturer and a textile manufacturer, have also moved into the CIP. According to the State Senior Advisor for the CIP, these three tenants project they will together create approximately 21,000 jobs in the CIP by 2016. As of May 2013, according to State officials, the government of Haiti was progressing in talks with four other potential tenants. The USAID mission completed the first phase of the CIP power plant, with a designed capacity of 10 megawatts, for $17.0 million, 11 percent less than the $19.1 million allocated (see table 3). The power plant project benefited from the mission having a Senior Energy Advisor on staff from April 2011 through February 2013 who used his background in electrical engineering to oversee and manage the project. The power plant was commissioned in June 2012, 5 months later than initially planned, but in time to provide power to the CIP as needed (see fig. 3). A contractor completed a required environmental assessment of the power plant project in June 2011, prior to the award of the construction contract for Phase I. The assessment produced more than 200 suggested mitigation measures to reduce the plant’s potential socioeconomic and environmental impacts, all of which USAID has implemented or plans to implement. The contractor that performed the design and oversight of construction for Phase I oversaw the implementation of mitigation measures relevant to the construction phase. According to USAID officials, relevant mitigation measures are also incorporated into the operations and maintenance contract for the first 3 years of the plant’s operations, making this contractor responsible for any measures needed to mitigate the impact of the plant on the surrounding environment during that time. Future plans for the plant include: Distribution of electricity outside the CIP: USAID plans to distribute electricity to as many households, local businesses, and public buildings in local communities as feasible over the next 2 years, with an interim goal of connecting 1,800 residences by May 2013. The first several residences were connected in October 2012, and 243 residences and businesses were connected by February 2013. Plan for future expansion: To accommodate the CIP’s future energy needs once it has expanded, as well as the needs of local communities once more of them are connected, USAID has plans (1) to build an adjacent solar energy farm with a 2-megawatt capacity and (2) to expand the power plant to a capacity of at least 25 megawatts, including power from any renewable sources. The time frame for these expansions is dependent on the pace of development of the CIP and its energy needs. Transfer of operations to the Haitian government: After the first 3 years, the Haitian government will take over plant operations and therefore will be responsible for implementing any mitigation measures, including those needed to mitigate additional emissions from the plant’s future expansion. USAID has allocated $72.4 million to plan and contribute toward building a new port in northern Haiti; however, only $4.3 million (6 percent) was obligated as of March 2013 due to planning delays (see table 4). In an August 2011 draft Activity Approval Document (AAD) for the port sector, USAID planned for a feasibility study to be completed by the second quarter of fiscal year 2012, with construction to begin in spring 2013 by a private company that would supplement USAID’s funding contribution for construction and then operate the port once it was completed in fall 2015.
However, the feasibility study was not completed until February 2013, and the mission has no current projection for when construction of the port may begin or how long it will take because more studies are needed before the port site can be selected and the port designed. As a result of these planning delays, port construction will not begin until at least 2 years later than initially planned. In addition, USAID officials had initially estimated that port construction would take 2.5 years; however, USAID officials have since learned that port construction may take up to 10 years, depending on the complexity of the port design. The USAID mission in Haiti lacks staff with technical expertise in the planning, construction, and oversight of a port, as its port engineer position remains vacant. According to USAID officials, USAID has not constructed a port anywhere in the world since the 1970s, and USAID does not have a port engineer or port project manager among its direct-hire staff. In January 2011, the mission in Haiti put out a solicitation to fill the vacant port engineer position. This solicitation produced two applicants, one of whom was offered the position but declined it in May 2011. No further attempts were made to fill the position until another solicitation was issued in May 2013, to which interested parties were to respond by June 7, 2013. As of June 18, 2013, the position remains unfilled. According to mission officials, it is difficult to find someone with the right skill set who is willing to work in Haiti, although USAID officials have also commented that, in hindsight, more effort should have been put into ensuring that the mission had port expertise earlier in the port planning process. This lack of port expertise on the mission’s staff has contributed to the port project falling behind schedule. Delays in the port feasibility study were caused by: Unrealistic initial time frames: Without port expertise, USAID initially estimated that the planning and design process for the port, including the port feasibility study, would take a little over 1 year to complete. Since then, USAID officials have learned from the U.S. Army Corps of Engineers (USACE), which has extensive port expertise, that the port planning and design process can be expected to take 2.5 to 5 years. According to USAID officials, this estimate is consistent with the time frames used by the Millennium Challenge Corporation, which has rehabilitated ports in developing countries. Delays in awarding the feasibility study: The contract for the feasibility study was awarded 3 months later than initially planned because at the time, according to USAID officials, mission staff were focused on the CIP power plant. None of these staff had primary responsibility for the port, so the port project did not move forward in parallel. In addition, USAID needed to clarify the technical requirements and revise the statement of work for the port feasibility study four times, thereby lengthening the time before companies could submit proposals. Incomplete information in the feasibility study: Without a port engineer or project manager to contribute to the statement of work for the feasibility study, USAID did not require the contractor to obtain all the information necessary to help select a port site. According to USAID officials, when the study was completed as planned in May 2012, the contractor had met the requirements in its statement of work.
However, the Mission Environmental Officer determined that multiple environmental issues not adequately addressed in the initial study needed additional examination. Subsequently, the contract for the feasibility study was amended six times and extended by 9 months to obtain more information. USAID officials stated that, in retrospect, they realized it would have been helpful to involve other U.S. agencies with port expertise when writing the original statement of work to avoid the need for so many revisions. In November 2012, the contractor submitted another draft of the study that USAID environmental staff determined to have some gaps. USAID then met with officials from USACE, the U.S. Environmental Protection Agency (EPA), and the National Oceanic and Atmospheric Administration (NOAA) in December 2012 to identify the additional economic, environmental, technical, and other information needed to select a site. Further information was added to the study before it was finalized in February 2013. However, other studies strongly recommended by USACE, EPA, and NOAA, such as building oceanographic navigation models and completing marine mitigation work to protect endangered species in the area, still need to be performed. Port construction costs remain uncertain because the port site, design, and needed mitigation measures have not been determined. However, rough estimates in the February 2013 feasibility study project that the cost of port construction at the two locations still under consideration ranges from $185 million to $257 million. In addition to funding for the port feasibility study, USAID has $68.1 million allocated toward port planning and construction. USAID does not know what portion of this funding is needed for the additional studies and design; however, it is clear that the amount remaining for construction will be a significantly smaller share of the project’s total construction cost than USAID had initially planned to contribute. As a result, USAID officials recognize that there is a risk that no private company interested in operating the port would be willing to cover the entire remaining cost of construction, particularly given the political risks of operating in Haiti. Therefore, the Haitian government may need to secure additional donor funding to increase the public sector contribution to building the port. The sustainability of the CIP, port, and power plant is interdependent. We identified a number of key issues affecting the sustainability of each of these projects. The CIP depends on a functioning power plant and port access: Before USAID began its CIP-related investments, northern Haiti did not have reliable energy infrastructure or sufficient port capacity to support a completed industrial park. Other power plants in the region produce intermittent power. The existing ports in Haiti have high port costs, and those in the Dominican Republic that currently accommodate cargo traffic are distant from the CIP, raising the cost of doing business at the CIP (see fig. 4). In addition, according to the port feasibility study, the Cap-Haïtien port, the closest current port to the CIP, has limited capacity. The study concluded that the CIP will only succeed if expanded, efficient port facilities are developed nearby.
The port and power plant depend on revenues from the CIP: CIP tenants will generate a substantial portion of the revenue for the power plant and port, so the sustainability of these projects will depend on the Haitian government finding additional tenants and maintaining the park. Potential tenants may be wary of moving to the CIP because of Haiti’s history of instability and corruption or the lack of Haitian government capacity, although, as noted earlier, according to State officials, there were four additional potential tenants for the CIP as of May 2013. All three projects depend on Haitian government capacity: The Haitian government will be responsible for maintaining and managing the CIP and power plant and for overseeing the private company that will operate the new port. Studies of the CIP have cited concerns about the relevant Haitian government ministry’s ability to manage and maintain the infrastructure in and around the CIP given its limited staff and technical resources. To address such concerns, the CIP has contracted with a professional industrial facility management firm to operate and maintain the park. According to the September 2011 AAD for the energy sector, the sustainability of investments in the Haitian energy sector depends on legal, regulatory, and management reforms to improve the commercial viability of Haiti’s electrical system and provide resources for its maintenance and operations. To address this for the plant’s first 3 years, USAID will pay for a contractor to operate and maintain the power plant and to prepare the Haitian electricity department to take over these functions after the 3 years. According to USAID documents, Haiti will need institutional and regulatory reforms to ensure efficient customs operations and competitive port charges, to curtail monopolistic practices, and to facilitate private investment in the port sector. Obtaining revenue for the power plant from electricity distribution outside the CIP: As of February 2013, the few customers connected to the power plant outside the CIP had largely paid their initial bills on time. However, according to a 2010 report on the Haitian energy sector, 64 percent of Haitians do not pay their electricity bills in a timely manner and 33 percent do not pay at all. In addition, USAID officials have recognized that it is common throughout Haiti to tap into electrical lines without paying, and this practice is unlikely to have repercussions. As a result, the USAID operations and maintenance contractor plans to provide training to local communities on the use and value of electricity. Attracting a private company to construct and operate the port: The government of Haiti has considered charging $260 for each container coming into the northern port and using the revenues generated for social programs. However, the port feasibility study concluded that such a government surcharge would make the project financially infeasible. State officials have communicated this information to the Haitian government to encourage it to lower the surcharge so that the port can be successful. Given this and the other risks associated with the port listed above, it is unclear whether the Haitian government will be able to find a private company interested in investing in port construction and operations. This uncertainty will remain until USAID and the Haitian government begin work on the solicitation for a private company after all port studies are completed, the site is selected, and the port design is complete.
Since its initial planning and cost estimating began in 2010, USAID’s funding for the New Settlements program has significantly increased, while the number of permanent houses that USAID projects will be completed has been reduced by over 80 percent. USAID underestimated construction costs at the time the New Settlements program was developed, and construction costs further increased after the Haitian government requested design changes that included larger houses with features such as flush toilets. USAID experienced problems in securing clear land title for the new housing sites and in coordinating with NGOs and other partner donors. These issues have resulted in delays, with the program currently expected to be completed nearly 2 years later than initially scheduled. Moreover, the sustainability of these new settlements will depend heavily on the capacity of the Haitian government to provide key services and on the ability of residents to maintain their homes. In addition, there is a potential gap in support for the community management mechanisms that USAID officials consider crucial to the sustainability of each new settlement. If such support is reduced or delayed for some settlements, sustainability risks may increase. USAID underestimated the construction cost of its New Settlements program. These costs comprise two main categories: (1) the cost of site preparation per plot and (2) the cost of construction per house. In its planning documents, USAID originally estimated costs at $1,800 per plot and $8,000 per house. As of April 2013, average costs based on awarded contracts had increased to $9,598 per plot and $23,409 per house. Overall, the cost for USAID to prepare a plot and build a house increased from original estimates of $9,800 to average costs of $33,007. These cost differences stem primarily from the inaccuracy of USAID’s original estimates, and secondarily from Haitian government requests for design changes. Figure 5 compares the original estimates, initial contract costs, and revised contract costs. More details on the reasons for cost differences in this program are outlined below. Original estimates: By November 2010, USAID had developed its original cost estimates for the New Settlements program. Prior to the earthquake, the mission had no housing programs in Haiti and, as a result, had no historical data of its own on construction costs and few existing relationships with potential shelter sector partners. The mission hired a Senior Shelter Advisor and staffed a shelter team to develop the original cost estimates, layouts, and design concepts for what would become the New Settlements program. According to USAID officials, these estimates were not adequately supported; the sources of data and the methodologies used to derive the estimates were not documented. Rather, the original estimates were based in part on the USAID shelter team’s calculations and on costs reported by the World Bank and an NGO that was building houses in northern Haiti. USAID mission officials noted that these original cost estimates were used to develop the budget and projected goals of the New Settlements program. However, to meet certain technical and financial planning requirements, the shelter team prepared independent government cost estimates prior to issuing solicitations for bids for each site preparation and each housing construction project.
The first independent government cost estimates for site preparation and housing construction were conducted in September and November 2011, respectively. Those efforts provided the shelter team with more detailed and accurate information to guide it through the procurement process. Initial contract costs: By April 2012, USAID awarded multiple contracts for construction projects at two settlement sites, where costs exceeded the original estimates. In particular, site preparation per plot increased from $1,800 to $6,165, a 242-percent increase. The inaccuracy of the site preparation estimates had a more substantial impact on USAID’s program budget and goals than the inaccurate estimates of housing construction costs because USAID planned to finance all site preparation costs, while NGOs and other partner donors would finance and build the majority of houses. According to USAID officials, original estimates did not adequately consider the stringent international building codes and disaster resistance standards planned for New Settlement houses and did not take into account the extent or complexity of service infrastructure USAID intended to provide. Furthermore, USAID officials noted that, as multiple reconstruction efforts have progressed, the demand for and cost of construction materials have increased. Revised contract costs: By July 2012, USAID signed a revised contract to accommodate design changes requested by the Haitian government, which also increased costs. Specifically, the design changes called for an increase in the size of housing units, from about 275 square feet to about 450 square feet, and the inclusion of flush toilets, rather than a more traditional dry toilet system. USAID agreed to these changes and revised the initial contracts to include these modifications and allow for the increased costs. The Haitian government’s design changes increased total costs by 34 percent over the initial contract costs, from $24,625 to $33,007. Officials noted that housing built to higher earthquake and hurricane resistance standards and equipped with electricity, plumbing, and flush toilets takes longer to construct and costs more than options provided by other donors. Based on original estimates, the New Settlements program was allocated approximately $59 million under USAID’s Shelter AAD. However, USAID increased the program budget after receiving multiple bids from private sector contractors for both site preparation and housing construction. USAID also dedicated additional funds to institutional strengthening to support local organizations’ beneficiary selection, and added a community development component. Altogether, USAID increased program funding to approximately $97 million, about a 65-percent increase. As of March 31, 2013, USAID had obligated about $48 million and had disbursed about $32 million for New Settlements permanent housing activities (see table 5). USAID has reduced its program targets a number of times. As of April 2013, it had reduced the number of houses it expects USAID and its partners to complete, and therefore the number of beneficiaries, by over 80 percent. Of the 15,000 houses originally planned, only 2,649 are expected to be completed, with USAID building 906 houses and NGOs and other partner donors estimated to build 1,743 (see fig. 6). USAID officials noted that USAID would commit no further funds to housing construction and would commit funds for site preparation only if USAID has written agreements with partner donors.
Therefore, the estimated number of houses and completion dates may vary from current projections. USAID also reduced the total number of projected beneficiaries from an original estimate of 75,000 to 90,000 people to a current estimate of approximately 13,200 to 15,900. USAID originally planned for new settlements to be distributed geographically, with 5,000 houses to be built in the northern Cap-Haïtien corridor and 10,000 houses to be built in the Port-au-Prince and St-Marc corridors, closer to where the earthquake’s epicenter occurred. In addition to the overall decline in housing numbers, the distribution of these houses between the north and south also shifted. Current projections are for the Cap-Haïtien corridor to have 1,967 houses, or 74 percent of the total. A combined 682 houses, or 26 percent of the total, are to be built in the Port-au-Prince and St-Marc corridors. Of the houses in the Cap-Haïtien corridor, over 90 percent are planned to be within a 13-mile radius of the CIP (see fig. 7). USAID is nearing completion of two settlement sites, Caracol-EKAM in the Cap-Haïtien corridor and DLA 1.5 in the St-Marc corridor. The Caracol-EKAM settlement is projected to provide permanent houses to approximately 3,750 to 4,500 residents, and the DLA 1.5 settlement is projected to provide permanent houses to approximately 780 to 936 residents. Beneficiaries will begin to occupy houses once all construction is complete. The planned move-in date for beneficiaries at both settlements is July 2013 (see fig. 8). The U.S. government’s January 2011 strategy projected that all USAID permanent housing construction and site preparation under the New Settlements program would be completed by July 2012, but the current estimated completion date for planned sites is March 2014, nearly 2 years later. Housing construction began at Caracol-EKAM and at DLA 1.5 in April 2012. Housing construction financed by NGOs and other partner donors on USAID-prepared sites is planned but has not yet begun. According to State and USAID officials, USAID faced difficulties trying to secure proper land title for permanent housing, which resulted in delays. These delays affected the implementation of the program and the availability of NGO and other partner donor financing. For example, according to USAID officials, USAID spent a substantial amount of time trying to secure clear title to private and government-owned land but was able to acquire only one site through private owners because of difficulties in confirming legitimate ownership. USAID discontinued attempts to partner with private owners in August 2011. Additionally, land titling issues arose with government-owned land. For example, although USAID officials reported that the agency had conducted due diligence and approved 15 potential housing sites in November 2010, USAID later found that clear land title for some of these sites could not be confirmed due to unclear or disputed ownership, which reduced the number of site options and further delayed site selection. Partnering with NGOs and other donors on the planning and construction of permanent houses was more complicated and time-consuming than USAID originally expected. According to USAID officials, NGOs and other partner donors have their own processes, procedures, and goals that often differ from those of USAID. According to USAID officials, the mission shelter team was involved in negotiations with several key donor partners as early as November 2010.
In January 2011, the President of the American Red Cross (Red Cross) announced the organization’s intention to partner with USAID and provide $30 million to build homes on at least two sites. Later, in June 2011, USAID signed a memorandum of understanding with the Red Cross to build more than 3,000 houses; however, according to USAID officials, that partnership did not materialize because of difficulties and delays in securing land title for privately owned sites near Port-au-Prince. In addition, according to USAID officials, the partnership was further delayed because of turnover in various Red Cross leadership positions, resulting in shifting approaches to the development of housing settlements. According to officials, USAID also had plans to partner with Food for the Poor, an NGO with experience building houses in Haiti, to build 750 of the houses at Caracol-EKAM. However, this discussion ended in part because that NGO decided it did not want to assist in building communities that large. The success of USAID’s New Settlements program relied heavily on partner NGOs. The USAID mission was confident that the program would attract partners because one of the primary challenges NGOs faced in the first year after the earthquake was finding suitable land with clear title. According to one of USAID’s implementing partners, NGOs providing housing assistance hesitate to invest in land for new housing if legal proof of ownership cannot be secured. By securing land title, the program would help partners avoid the complex land tenure issues that were already seriously impeding many of their shelter programs. However, lengthy delays in resolving land title issues contributed to difficulties in solidifying partnerships because the delays allowed time for potential NGO partners to change their shelter strategies or commit their funds to other reconstruction activities. According to USAID, the sustainability of the new housing settlements will depend on broad factors such as the capacity of the Haitian government and regional economic opportunities. USAID is attempting to ensure the viability of settlements by locating them in areas with employment, health care, education, and transportation. In the Cap-Haïtien corridor, the United States and other international donors are making multiple investments in new infrastructure, such as the CIP and potential port, to create an economic growth pole in the region. If those efforts do not successfully provide adequate economic opportunities, beneficiaries may not be able to afford the fees and services connected with their new homes, or may have to relocate altogether. USAID is also working with the Haitian government in areas where capacity issues exist, such as energy sector management. In addition, more site-specific factors will affect sustainability. USAID has made some limited mitigation efforts, but notes that further support for community development is necessary to maintain the settlements over time. Local governments and community members need to provide ongoing support, maintenance, and management of the new settlements to ensure their sustainability. Specifically, beneficiaries will face site-specific issues related to affordability, community management, and the possibility of informal expansion or sprawl of shantytowns. Affordability: According to USAID officials, the Haitian government has indicated that beneficiaries must make some number of monthly payments, in an amount to be determined, before title to the house is conferred.
Beneficiaries will also face charges for utilities and services, such as electricity and sewage. Housing payments: According to USAID officials, although beneficiaries are scheduled to move into the Caracol-EKAM and DLA 1.5 settlements as early as July 2013, a beneficiary agreement has not yet been finalized, and the exact amount and structure of the monthly payments remain uncertain. USAID officials have said that a contract, or occupancy agreement, will be signed before beneficiaries move in. Fees for utilities and services may or may not be rolled into and collected through these monthly housing payments. The monthly housing payment structure may be flat or tiered, meaning amounts may be set at a flat rate for every household or may be progressive depending on income level. Electricity: USAID plans to install electricity, with individual meters, in each new house. USAID officials acknowledged that nonpayment for electricity is a fairly common practice in parts of Haiti where electrical grids exist. Therefore, it remains to be seen whether nonpayment will also be a challenge at the new settlement locations. Sewage: Prior to the January 2010 earthquake, there were no wastewater treatment plants in Haiti. A temporary facility has been constructed at the CIP, and there are plans to build a permanent facility there as well. In addition, a treatment plant was opened in May 2012 near the Port-au-Prince metro area. These facilities may be able to serve some settlements, but it is unclear whether they will be able to serve all of the settlements and at what cost to beneficiary households. One senior USAID official acknowledged that if septic tanks are not emptied regularly, there is a potential public health risk. Community management: The New Settlements program currently plans to create eight new “communities,” of between 148 and 1,283 households, each with beneficiaries from various locations in Haiti and with varied income levels. USAID officials acknowledged concerns about issues that might arise among the beneficiaries themselves and between the settlements and surrounding communities. Shantytowns: There is a risk that informal dwellings, or shantytowns, may be built around the new settlements to take advantage of the economic opportunities or services available near those locations. If employment opportunities at the CIP draw a large number of people, the current housing stock may be too small to accommodate them. To mitigate these types of site-specific sustainability concerns, USAID obligated $4.8 million for development of the Emergency Capacity Assistance Program (ECAP) to establish community management committees, which are self-governing bodies made up of selected beneficiaries, and to create other mechanisms intended to support community development. To address issues related to affordability, USAID, through this assistance program, worked to ensure that household income and employment status were criteria addressed in the beneficiary selection process. To address other issues, USAID planned for the community management committees to promote social cohesion, to serve as decision-making bodies, and to act as the residents’ representatives with government counterparts. At the Caracol-EKAM settlement, a provisional community management committee was formed and will be trained to engage with local and national authorities to help ensure that community services such as groundskeeping, infrastructure maintenance, and solid waste collection are undertaken.
However, ECAP funding allowed only some of these initial activities to take place at the Caracol-EKAM settlement, and, according to officials, the program ended in April 2013. USAID allocated $5 million to support community development efforts at the new settlements. In April 2013, USAID issued a request for applications to find an implementing partner for a community development program for Caracol-EKAM, at an estimated cost of $1.3 million to $1.5 million. This partner would provide support for the phased occupation and management of the settlement and engage in an array of activities designed to help ensure its long-term sustainability. Although these efforts are still in the planning stage, USAID’s current budget indicates that over half of the community development funds will go toward assisting just three sites, including Caracol-EKAM. The remaining five or more settlement sites face the possibility of delayed or reduced support. To address that gap, USAID plans to foster partnerships with other organizations to assist with and contribute to these activities. USAID has entered into such a partnership with the International Federation of the Red Cross to provide community development support at DLA 1.5. Additionally, a memorandum of understanding between USAID and partner donors notes that partner donor funds are to be provided for community development activities at those settlements; however, that understanding does not fully secure such a financial commitment. Similarly, according to USAID officials, the agreement USAID is attempting to finalize with the Red Cross will budget for community development activities to be covered with Red Cross funds. However, there is the possibility that such partnerships will not be available to support all the settlements. USAID officials responsible for key parts of the New Settlements program have stated that it is crucial to have these support mechanisms in place to ensure a smooth transition when beneficiaries move in, to set the tone for interaction among beneficiaries moving forward, and to ensure that community management needs are understood and acted upon. Furthermore, USAID documents state that it is critical to initiate the beneficiary organization process as soon as beneficiaries occupy their homes because it may be difficult to work with beneficiaries before they arrive. Failure to find an implementing partner to provide and create these support mechanisms for each settlement may further increase the sustainability risks inherent in large-scale housing reconstruction projects, thus endangering the significant investments already committed to these efforts. Following the January 2010 earthquake in Haiti, the U.S. government made a strong commitment to Haiti’s reconstruction and economic development. As of March 2013, more than 3 years after the earthquake, USAID had obligated only 45 percent and disbursed only 31 percent of the $651 million in supplemental funding it was provided. State’s most recent report to Congress on program funding and progress—its final mandated report—was submitted in January 2013. However, the majority of reconstruction funding has not been disbursed, and a substantial amount of project work remains to be completed. Without complete and accurate reporting from State, Congress lacks the critical information on program funding and progress it needs to fully oversee the use of the Haiti reconstruction supplemental funding.
USAID’s progress in supporting the CIP-related investments in the power plant and port has been mixed. The power plant was completed in time to provide electricity for the CIP’s first tenant, in part because the USAID mission in Haiti had on staff a senior energy advisor to help plan and oversee the project. However, the mission has not filled an equivalent position to oversee the port project and has experienced delays and challenges associated with this significant project. The USAID mission continues to lack the technical port expertise to oversee this project, which has been allocated more than $72 million in U.S. funding, is at least 2 years behind schedule, and has been found to be more complex than initially envisioned. Further, USAID’s contribution to port construction was not intended to fund the entire port, and it is unclear whether the Haitian government will be able to find a private sector company willing to contribute the large amount of remaining funding through a public-private partnership. This uncertainty puts at risk USAID’s investments in port planning and design, as well as the sustainability of the CIP and power plant due to the three projects’ interdependence. USAID developed the budget and projected targets of the New Settlements program using inaccurate and inadequately supported cost estimates, which has led to a significantly reduced number of USAID-funded houses for the Haitian people. USAID agreed to the Haitian government’s request to enlarge and upgrade the houses, further reducing the number of houses it would build. As a result, USAID currently has plans to provide less than a quarter of the houses it originally projected it would build, and at a much greater cost. Difficulties in securing land title and challenges in establishing partnerships with NGOs also delayed the program and further reduced USAID’s targets. Furthermore, the sustainability of USAID’s New Settlements program is uncertain. The agency has dedicated some funding to help ensure sustainability through the development of community support mechanisms; however, it is unclear if funding for these support mechanisms will be available for each new settlement. In addition, USAID has taken steps to secure commitments for partner donor funding to assist in these efforts, but it has not yet secured such commitments for all planned settlements, and it is uncertain whether the partner organizations will be able to fulfill their commitments. These community support mechanisms are essential to helping ensure that the settlements become viable, cohesive communities and that beneficiaries maintain them once they move in. Without this support in place, sustainability issues may be exacerbated and USAID’s housing efforts placed at risk of deterioration. To ensure that Congress has current information on the status of Haiti earthquake reconstruction activities and is able to provide appropriate oversight at a time when most funding remains to be disbursed, Congress should consider requiring State to reinstitute the requirement to provide it with periodic reports until most of the funds in each sector are disbursed. In these reports, Congress should consider requiring State to provide information such as progress in U.S. program sectors; amounts of funding obligated and disbursed in each specific sector; sector and project cost increases; changes in project schedules; and existing difficulties and challenges to successful project completion.
To strengthen USAID’s ability to complete its projects in Haiti and to maintain their sustainability, we recommend that the USAID Administrator take the following two actions. To ensure proper oversight over the continued planning for and construction of a new port in northern Haiti and to enable the project to move forward in a well-planned and timely manner, USAID should fill the vacant port engineer position at its Haiti mission within time frames that avoid future project delays. To promote the sustainability of the New Settlements permanent housing program, and to protect the significant investments already made, the USAID Administrator should direct the USAID Haiti mission to ensure that each new settlement has community support mechanisms in place prior to beneficiary occupation. We provided a draft of this report to USAID and State for review and comment. USAID provided written comments on a draft of this report, which are reprinted in appendix II. State did not provide written comments. USAID agreed with both of our recommendations. USAID agreed with our recommendation that it fill the vacant port engineer position at the Haiti mission within time frames that avoid future project delays. In its letter responding to our draft report, USAID noted that, in May 2013, it issued a solicitation for a ports advisor, recognizing the need to fill the position to move its program forward. In June 2013, USAID noted that it expected to fill the position soon; however, as of June 18, 2013, the position was vacant. USAID also agreed with our recommendation that each new permanent housing settlement have community support mechanisms in place before the beneficiaries occupy the houses. As noted in our report, USAID stated that $5 million has been set aside to finance community development activities. In its comments on this report, USAID added that the mission is prepared to provide additional resources, if required. USAID also elaborated on the ongoing and planned activities intended to facilitate community development and sustainability at the first two settlement sites. We acknowledge USAID’s efforts to provide community development support at these two sites and support the agency’s intentions to implement our recommendation at future settlement locations. State and USAID both provided technical comments. We incorporated those comments, along with information contained in USAID’s written response, into the report where appropriate. As agreed with your offices, unless you publicly announce the contents of the report earlier, we are planning no further distribution until 30 days after the report date. At that time, we will send copies to interested congressional committees, the Secretary of State, and the USAID Administrator. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or any of your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We reviewed infrastructure-related post-earthquake reconstruction efforts in Haiti undertaken by the U.S. Agency for International Development (USAID).
This report addresses (1) USAID’s progress in obligating and disbursing program allocations and the Department of State’s (State) periodic reporting to Congress on the status of the U.S. reconstruction efforts; (2) USAID’s progress in planning and constructing two activities related to the Caracol Industrial Park (CIP)—a power plant and port; and (3) USAID’s progress in planning and constructing permanent housing. In response to a congressional request to examine the Supplemental Appropriations Act, 2010 (the Act), we focused our review on three sectors of USAID reconstruction activities: power plant, port, and permanent shelter. These three activities account for about $268 million of the overall $651 million in supplemental and other funds allocated to USAID for bilateral reconstruction activities. We also included lesser amounts of regular fiscal year appropriations allocated to the three activities within our scope. To obtain information on the appropriations, allocations, and planned and ongoing uses of U.S. reconstruction funding for Haiti, we reviewed the Act, enacted by Congress in July 2010; the State and USAID FY 2010 Supplemental Appropriations Spending Plan, issued by State in September 2010; and the interagency Post-Earthquake USG Haiti Strategy: Toward Renewal and Economic Opportunity, issued by State in January 2011. We also reviewed the Action Plan for National Recovery and Development of Haiti, issued by the government of Haiti in March 2010. In addition, we reviewed the Haiti Reconstruction Grant Agreement, signed by the U.S. and Haitian governments in May 2011. We met in Washington, D.C., and in Port-au-Prince, Haiti, with officials from USAID and State. USAID defines allocation as the identification and setting aside of resources for a specific program action. To determine the amounts of funding obligated and disbursed from USAID’s supplemental funding, as well as funding from other sources for reconstruction activities, we analyzed data reported by USAID as of March 31, 2013. These data include information on obligations and disbursements of supplemental appropriation funding overall, as well as amounts provided for particular activities within our scope. To assess the reliability of the data on planned allocations, obligations, and disbursements, we conducted follow-up correspondence and interviews with cognizant officials from USAID and State. We asked them standard data reliability questions—including questions about the purposes for which funding data were collected, the use of the data, how the data were collected and generated, and how the agencies ensured that the data were complete and accurate. We determined the data to be sufficiently reliable for the purposes of this report. To describe State’s decision for the U.S. government to support the CIP, we interviewed State officials to determine the rationale for the decision and reviewed portions of the framework agreement laying out the terms of the public-private partnership to be followed by the Haitian government, the Inter-American Development Bank (IDB), and the anchor tenant, a private Korean garment manufacturer, Sae-A Trading Co. Ltd. (Sae-A). We reviewed studies and reports on the Haitian economy and the potential economic impact of the CIP that State officials had reviewed before making this determination. We also reviewed the calculations that State officials had conducted regarding the effect of the CIP on job creation and economic growth in Haiti.
When reviewing these studies, reports, and calculations, we noted the methodologies used and any limitations those methodologies may have placed on the findings. To ascertain the IDB’s progress in building the CIP, we conducted a site visit to the CIP in December 2012, interviewed the CIP’s construction manager, and received a tour of the Sae-A facility. In addition, we received copies of more recent photos and videos of the CIP that were taken by IDB staff in January 2013. To determine the Haitian government’s progress in filling the CIP with tenants, we met with State’s Senior Advisor for Industrial Development in Haiti, who works with the Haitian government to recruit companies to the CIP, to learn about the recruitment process and its progress. From this State official, we also received documents containing summaries of information about new and potential tenants of the CIP. To describe USAID’s progress with the CIP power plant, we reviewed plans for the power plant as outlined in the September 2011 Activity Approval Document (AAD) for the Haitian energy sector and compared these plans with the time frames, costs, and descriptions of the power plant project in award documents and amendments, as well as progress reports from the construction contractor. We also interviewed USAID and State officials in Washington, D.C., and Haiti to determine the reasons for any differences between planned and actual costs and time frames. To describe how USAID assessed the power plant project for its environmental and social impact, we reviewed the June 2011 environmental assessment of the CIP that a USAID contractor had performed. To determine how USAID followed up on mitigation measures suggested in this environmental assessment, we interviewed USAID officials and reviewed contracts for building, overseeing the construction of, and operating and maintaining the power plant, as well as progress reports from the construction and oversight contractors that included updates on the mitigation measures being taken. To determine the planning and progress made regarding electricity distribution from the CIP power plant to residences and businesses outside the CIP, we interviewed USAID officials and reviewed USAID planning documents, the cooperative agreement for initial power distribution outside the CIP, and progress reports from the nongovernmental organization (NGO) responsible for this distribution. To describe USAID’s progress with a new port for the Cap-Haïtien corridor, we reviewed plans for the port, such as those articulated in the most recent draft of the AAD for the Haitian port sector, dated August 2011, in procurement documents for the port feasibility study, and in interviews with USAID and State officials. We then reviewed the port feasibility study and interviewed USAID officials on the process and results of that study to determine USAID’s progress against its initial plans. To describe USAID’s progress constructing permanent houses under its New Settlements program, we reviewed plans as outlined in the August 2011 AAD and compared these plans with the time frames, costs, and descriptions of the New Settlements program in design packages, award documents and amendments, and progress reports from various site preparation and construction contractors. We also interviewed USAID and State officials in Washington, D.C., and Haiti to determine the reasons for any differences between planned and actual costs, time frames, and expected results.
We calculated the weighted average cost of construction per plot and per house by (1) calculating the total cost of plot and house construction for the two sites that had awarded contracts, (2) calculating the total number of plots and houses at both sites, and (3) dividing the first number by the second. For the initial average costs per plot and per house, we used data on costs and numbers of plots and houses at each site obtained from the initial contracts. For the revised average costs per plot and per house, we used data on costs and numbers of plots and houses at each site obtained from modifications to the initial contracts. To discuss the role of NGOs and other partner donors in the New Settlements program, we reviewed various documents related to partners who had planned or committed to building houses on USAID-developed sites. We interviewed a partner organization housed with the Haitian government and funded by the IDB; however, we were unable to interview other potential partner NGOs because negotiations over the terms of agreements were ongoing. In addition, we interviewed the implementers responsible for a cooperative agreement with USAID related to community development and beneficiary selection efforts for the New Settlements program and reviewed the beneficiary selection data they had gathered for the Caracol-EKAM site. To assess the sustainability of the power plant, port, and new settlements, we based our definition of sustainability on that of the Organisation for Economic Co-operation and Development, which defines “sustainability” as “the continuation of benefits from a development intervention (such as assets, skills, facilities, or improved services) after major development assistance has been completed.” We operationalized this definition by specifying that sustainability is the ability of the Haitian government to operate and maintain the USAID-funded power plant, port, and new settlements in the condition required to produce the projected benefits. To determine issues that may affect the sustainability of these three projects, we reviewed reports commissioned by agencies and organizations, such as USAID, State, the International Finance Corporation, the World Bank, and the U.S. Trade and Development Agency, on the Haitian energy, port, and shelter sectors. We reviewed procurement documents, assessments, and progress reports related to these specific projects. We also interviewed USAID officials to understand their key sustainability concerns for these projects. We traveled to Haiti in December 2012 and met with U.S. officials from USAID and State, and representatives from some of USAID’s partners involved in implementing the projects in our review—including the IDB, Sae-A, and construction firms and partner donors involved in the New Settlements program. In the Cap-Haïtien corridor, we visited the CIP, the CIP power plant, one of the sites for the proposed port, and all New Settlement sites under construction or planned for future construction. In the Port-au-Prince corridor, we visited the New Settlement sites under construction and planned for future construction, sites where temporary shelters were built, and sites damaged by the earthquake. We conducted this performance audit from August 2012 through June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
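The three-step weighted-average calculation described at the beginning of this section can be illustrated with a short script. The per-site contract figures below are hypothetical placeholders used only to show the steps; the report does not publish the underlying site-level contract amounts.

# Illustrative sketch of the three-step weighted-average cost calculation (hypothetical inputs).
sites = [
    # name, total site preparation (plot) cost, total house construction cost, plots, houses
    {"name": "Site A", "plot_cost": 5_600_000, "house_cost": 12_000_000, "plots": 750, "houses": 500},
    {"name": "Site B", "plot_cost": 1_200_000, "house_cost": 2_400_000, "plots": 150, "houses": 100},
]

# Step 1: total cost of plot and house construction across both sites.
total_plot_cost = sum(s["plot_cost"] for s in sites)
total_house_cost = sum(s["house_cost"] for s in sites)

# Step 2: total number of plots and houses at both sites.
total_plots = sum(s["plots"] for s in sites)
total_houses = sum(s["houses"] for s in sites)

# Step 3: divide the totals from step 1 by the counts from step 2.
avg_cost_per_plot = total_plot_cost / total_plots
avg_cost_per_house = total_house_cost / total_houses

print(f"Weighted average cost per plot:  ${avg_cost_per_plot:,.0f}")
print(f"Weighted average cost per house: ${avg_cost_per_house:,.0f}")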
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Leslie Holen (Assistant Director), Lynn Cothern, Heather Latta, George Taylor, and Brian Tremblay made key contributions to this report. Ashley Alley, Etana Finkler, Justin Fisher, Courtney LaFountain, and Mary Moutsos provided technical assistance. | On January 12, 2010, an earthquake in Haiti caused about 230,000 deaths, resulted in 300,000 injuries, and displaced about 2 million persons. Following immediate relief efforts, Congress provided $1.14 billion for reconstruction in the Supplemental Appropriations Act, 2010. USAID is responsible for implementing $651 million of this amount, and it has allocated about $268 million of this and other funding to construct a power plant and port to support the CIP in northern Haiti and permanent housing in several locations. The Act required State to report periodically to Congress on funding obligated and disbursed and program outputs and outcomes. GAO was asked to review USAID's efforts in Haiti. This report examines USAID's (1) funding obligations and disbursements and State's reports to Congress on funding and progress; (2) USAID's progress in two CIP-related activities--a power plant and port; and (3) USAID's progress in constructing permanent housing. GAO reviewed documents and interviewed U.S. officials in Washington, D.C., and Haiti, and visited planned and active sites. As of March 31, 2013, the U.S. Agency for International Development (USAID) had obligated $293 million (45 percent) and disbursed $204 million (31 percent) of $651 million in funding for Haiti from the Supplemental Appropriations Act, 2010 (the Act). The Department of State (State) submitted four of five periodic reports to Congress, as required by the Act. The reports included information on funding obligated and disbursed and anecdotal information on outputs and outcomes of some activities, as the Act required. The Senate Appropriations Committee, in its Committee Report accompanying the Act, had also directed State to report more detailed information on funding and sector activities in Haiti, which State did not include in the reports. Although most funds have not been disbursed, State's reporting requirement ended in September 2012. As a result, Congress lacks information on the amounts of funds obligated and disbursed and program-by-program progress of U.S. reconstruction activities. USAID has allocated $170.3 million to construct a power plant and port to support the newly developed Caracol Industrial Park (CIP), with mixed results. According to USAID documents and external studies, the sustainability of the CIP, power plant, and port are interdependent; each must be completed and remain viable for the others to succeed. USAID completed the power plant's first phase with less funding than allocated and in time to supply power to the first CIP tenant. Port construction will not begin until at least 2 years later than originally planned due in part to a lack of USAID expertise in port planning in Haiti. In January 2011, the mission made an unsuccessful attempt to solicit a person to fill a vacant port engineer position but made no additional attempts prior to May 2013 and this position currently remains unfilled. 
As a result, planning has been hindered by (1) unrealistic initial time frames, (2) delays in awarding the contract for a feasibility study, and (3) incomplete information in the feasibility study. According to initial estimates of port construction costs, USAID funding will be insufficient to cover a majority of projected costs. The estimated gap of $117 million to $189 million is larger than initially estimated, and it is unclear whether the Haitian government will be able to find a private sector company willing to finance the remainder of the project. USAID has reduced its permanent housing construction targets in Haiti. USAID initially underestimated the funding needed for its New Settlements housing program. As a result, the agency increased the amount allocated by 65 percent, from $59 million to $97 million, and decreased the projected number of houses to be built by over 80 percent, from 15,000 to 2,649. The estimated number of beneficiaries was reduced from an original estimate of 75,000 to 90,000 people to current estimates of approximately 13,200 to 15,900. Cost increases resulted from inaccurate original estimates that used inappropriate cost comparisons and from the Haitian government’s request for larger houses with improvements such as flush toilets. USAID currently estimates construction will be completed almost 2 years later than initially scheduled. Delays occurred due to the difficulties of securing land titles and coordination issues with partner donors. USAID is attempting to mitigate potential sustainability risks, such as the possible lack of economic opportunities, affordability of housing and services, and community cohesion, but gaps in the support of community development mechanisms may increase these risks. Congress should consider requiring State to provide it with periodic reports on reconstruction progress, funding, and schedules until most funding for each program sector has been disbursed. Also, GAO is recommending that USAID (1) hire a port engineer to oversee port planning and construction and (2) provide timely community support mechanisms for each new settlement to help ensure the sustainability of its permanent housing program. USAID agreed with GAO’s recommendations.
In the midst of the Great Depression, Social Security was enacted to help ensure that the elderly would have adequate retirement incomes and would not have to depend on welfare. The program was designed to provide benefits that workers had earned to some degree through their contributions and those of their employers. The benefit amounts would depend in part on how much the worker had earned and therefore contributed. Today, about 10 percent of the elderly have incomes below the poverty line, compared with 35 percent in 1959. However, for about half of today's elderly, incomes excluding Social Security benefits are below the poverty line. Importantly, Social Security does not just provide benefits to retired workers. In 1939, coverage was extended to the dependents of retired and deceased workers, and in 1956 the Disability Insurance program was added. To restore the long-term solvency and sustainability of the program, reductions in promised benefits and/or increases in program revenues will be needed. Within the program's current structure, possible benefit changes might include increases in the full retirement age, changes to the benefit formula, or reductions in cost-of-living increases, among other options. Revenue increases might include increases in payroll taxes or transfers from the Treasury's general fund. Some proposals would change the structure of the program to incorporate a system of individual retirement savings accounts. Many such proposals would reduce benefits under the current system and make up for those reductions to some degree with income from the individual accounts. Individual account proposals also try to increase revenues, in effect, by providing the potential for higher rates of return on the individual accounts' investments than the trust funds would earn under the current system. Three key distinctions help to identify the differences between Social Security's current structure and one that would use individual accounts.

Insurance versus savings. Social Security is a form of insurance, while individual accounts would be a form of savings. As social insurance, Social Security protects workers and their dependents against a variety of risks such as the inability to earn income due to old age, disability, or death. In contrast, a savings account provides income only from individuals' contributions and any earnings on them; individuals effectively insure themselves under a savings approach.

Defined-benefit versus defined-contribution. Social Security provides a "defined-benefit" pension, while individual accounts would provide a "defined-contribution" pension. Defined-benefit pensions typically determine benefit amounts using a formula that takes into account individuals' earnings and years of earnings. The provider assumes the financial and insurance risk associated with funding those promised benefit levels. Defined-contribution pensions, such as 401(k) plans, determine benefit amounts based on the contributions made to the accounts and any earnings on those contributions. As a result, the individual bears the financial and insurance risks under a defined-contribution plan until retirement.

Pay-as-you-go versus full funding. Social Security is financed largely on a "pay-as-you-go" basis, while individual accounts would be "fully funded." In a pay-as-you-go system, contributions that workers make in a given year fund the payments to beneficiaries in that same year, and the system's trust funds are kept to a relatively small contingency reserve.
In contrast, in a fully funded system, contributions for a given year are put aside to pay for future benefits. The investment earnings on these funds contribute considerable revenues and reduce the size of contributions that would otherwise be required to pay for the benefits. Defined-contribution pensions and individual retirement savings are fully funded by definition. To evaluate reform proposals, we have suggested that policy makers should consider three basic criteria:

1. the extent to which the proposal achieves sustainable solvency and how the proposal would affect the economy and the federal budget;

2. the balance struck between the twin goals of individual equity (rates of return on individual contributions) and income adequacy (level and certainty of benefits); and

3. how readily such changes could be implemented, administered, and explained to the public.

Providing higher replacement rates for lower earners than for higher earners is just one of several aspects of our criterion for balancing adequacy and equity. With regard to adequacy, this criterion also considers the extent to which the proposal changes benefits for current and future retirees; maintains or enhances benefits for low-income workers who are most reliant on Social Security; and maintains benefits for the disabled, dependents, and survivors. In addition, providing higher replacement rates for lower earners than for higher earners does not by itself ensure adequacy. A reform proposal could make replacement rates vary even more by earnings level than under the current system yet provide lower and less adequate benefits. With regard to equity, our criterion for balancing adequacy and equity also considers the extent to which the proposal ensures that those who contribute receive benefits, expands individual choice and control over program contributions, increases returns on investment, and improves intergenerational equity. Moreover, reform proposals should be evaluated as packages that strike a balance among individual reform elements and important interactive effects. The overall evaluation of any particular reform proposal depends on the weight individual policy makers place on each criterion. In 2001, the President created the Commission to Strengthen Social Security to develop reform plans that strengthen Social Security and increase its fiscal sustainability while meeting certain principles: no changes to benefits for retirees or near retirees, dedication of the entire Social Security surplus to Social Security, no increase in Social Security payroll taxes, no government investment of Social Security funds in the stock market, preservation of disability and survivor components, and inclusion of individually controlled voluntary individual retirement accounts. The commission developed three reform models, each of which represented a different approach to including voluntary individual accounts as part of Social Security. Under all three models, individuals could have a portion of their Social Security contributions deposited into individual accounts, and their Social Security defined benefits would be reduced relative to those account contributions. A governing board would administer the accounts in a fashion similar to the Thrift Savings Plan for federal employees. To continue paying benefits while also making deposits to the accounts, funds would need to be transferred from the Treasury's general fund. The models varied in the size of the account contributions.
Models 2 and 3 had additional provisions for reducing certain benefits overall and enhancing benefits for surviving spouses and selected low earners. To assess the extent to which the Social Security program or reform options are progressive (that is, distribute benefits in a way that favors lower earners), researchers first select a number of measures and then compare how different groups of earners fare according to those measures. The choice of measures reflects a particular perspective on the goals of the program. For example, those who analyze Social Security from an adequacy perspective are primarily concerned with the program's role in securing adequate income and consequently tend to use measures of how much income Social Security provides. In contrast, those who view Social Security from an equity perspective focus on whether beneficiaries receive a fair return on their contributions and tend to choose measures balancing lifetime taxes against lifetime benefits. For each perspective, assessing progressivity involves determining how lower earners fare relative to higher earners on appropriate measures. In the context of Social Security reform, those scenarios in which the well-being of lower earners increased proportionally more, or decreased proportionally less, would be considered more progressive. Because of the different kinds of benefits that Social Security provides, many researchers agree that to investigate the distributional effect of the program, aggregating workers and their dependents into households better captures well-being, but doing so poses certain methodological challenges. Since its inception, Social Security's primary goal has been to provide adequate income, upon entitlement, so as to reduce dependency and poverty among its participants. Studies emphasizing this goal reflect the adequacy perspective; they view the program more as a safety net that helps ensure a minimum level of subsistence. Consequently, such studies use measures of how much income Social Security benefits provide. These measures include absolute benefit levels at a point in time and benefit-to-earnings ratios. Benefit levels are useful for estimating whether Social Security offers adequate protection for people covered by the system. Benefit-to-earnings ratios, which reflect how much of past earnings Social Security benefits replace, help gauge the extent to which the program allows people to maintain their past standard of living. One way to assess the distributional effect of the current Social Security program or of various reform options is to look at how these adequacy measures are distributed across earners. Regarding benefit levels, one possibility is to compute the ratio of benefits received by lower earners to benefits received by higher earners, at a particular point in time. Comparing these benefit ratios under different policies helps determine how the well-being of lower earners changes relative to that of higher earners across reform proposals. If, for example, the ratio of benefits collected by individuals in the 20th percentile of the earnings distribution to benefits collected by those in the 80th percentile increased from one Social Security system to the next, the adequacy perspective would conclude that, other things being equal, the second system is more progressive, that is, more tilted toward lower earners.
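For illustration, the minimal sketch below computes this kind of 20th-to-80th percentile benefit ratio under two stylized policies. The earnings distribution and benefit formulas are hypothetical inputs chosen only to show the mechanics of the comparison; they are not estimates from the simulations discussed in this report.

```python
import numpy as np

def benefit_ratio_20_80(earnings, benefits):
    """Mean benefit of workers at or below the 20th percentile of earnings
    divided by the mean benefit of workers at or above the 80th percentile."""
    p20, p80 = np.percentile(earnings, [20, 80])
    return benefits[earnings <= p20].mean() / benefits[earnings >= p80].mean()

# Hypothetical inputs: one earnings distribution and benefits under two
# stylized policies (placeholders, not estimates from this report).
rng = np.random.default_rng(0)
earnings = rng.lognormal(mean=10.5, sigma=0.6, size=10_000)
benefits_policy_a = 5_000 + 0.25 * earnings      # flatter, more redistributive
benefits_policy_b = 2_000 + 0.35 * earnings      # more closely tied to earnings

for name, benefits in (("policy A", benefits_policy_a),
                       ("policy B", benefits_policy_b)):
    print(name, round(benefit_ratio_20_80(earnings, benefits), 2))
# The policy with the higher ratio would be judged more progressive from
# the adequacy perspective, other things being equal.
```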
Alternatively, one could compute the proportion of total benefits various groups of earners receive relative to the proportion the median gets and determine the manner in which these relative proportions change across proposals. For all groups below the median, for instance, an increase in this ratio would indicate a more progressive system. The distribution of replacement rates also helps assess progressivity. The change in the replacement rate of lower earners relative to that of higher earners across reform options shows the extent to which lower earners are able to maintain their pre-entitlement standard of living relative to higher earners. Under the current Social Security system, for instance, the monthly benefit lower earners receive upon entitlement replaces a larger portion of their monthly earnings; from an adequacy perspective, the system is therefore tilted in their favor. A reform proposal that increased the replacement rate of lower earners relative to higher earners would be deemed more progressive than one that did not. By linking benefits to earnings, which link in turn to contributions, Social Security also incorporates the principle of individual equity. Under the current program, people who pay higher taxes generally collect higher benefits upon entitlement, but the higher benefits are not directly proportional to the higher taxes. Studies that reflect the equity perspective focus on whether, over their lifetimes, beneficiaries can expect to receive a fair return on their contributions or get their money's worth from the system. These studies use such measures as lifetime benefit-to-tax ratios, internal rates of return, and net lifetime benefit-to-earnings ratios. The benefit-to-tax ratio measure compares the present value of Social Security lifetime benefits with the present value of lifetime Social Security taxes. The internal rate of return can be thought of as the interest rate individuals effectively receive on their lifetime contributions, given their lifetime Social Security benefits. Net lifetime benefit-to-earnings ratios show lifetime benefits minus lifetime taxes relative to lifetime earnings. This measure, also called the average rate of net taxation, borrows from the public finance literature the idea that equity measures ought to incorporate earnings. From an equity perspective, examining the distribution of these measures helps gauge the distributional effects of Social Security or reform options. Many studies adopting the equity perspective find, for example, that the current program favors lower earners because this group enjoys higher rates of return and benefits whose value is larger relative to the value of their contributions. Other studies confirm this result by observing that the net benefit-to-earnings ratio is higher for low earners. If, under a reform proposal, these measures increased more for lower earners, then that system would be considered more progressive. Reform options that involve general revenue transfers to ensure solvency make it difficult to evaluate progressivity from an equity perspective because they do not typically specify how such transfers are to be financed or who will eventually bear their burden. Yet general revenue transfers implicitly require future tax increases, spending cuts, or a combination of both, all of which have substantial distributional consequences. Such consequences are difficult to evaluate analytically.
Without knowing who will bear the costs of financing these transfers, the equity perspective cannot accurately determine how well lower earners fare relative to higher earners in a given system or across proposed reforms. Even if we knew how the tax burden of general revenues is distributed today, the tax system could change in the future in ways that would alter the distribution. Some proposals with individual account features, for example, involve general revenue transfers. They divert part of existing payroll tax revenues from traditional Social Security benefits and toward individual accounts. Consequently, to remain financially solvent, such proposals typically require additional resources from the Treasury's general fund for several years after implementation. Both the adequacy and the equity perspectives consider families or households, in addition to individuals, in assessing distributional effects. This is particularly relevant in the Social Security context because the program provides not only worker benefits to retired and disabled individuals, but also auxiliary benefits to current and former spouses, children, and surviving spouses. Household analysis has implications for progressivity. Most studies using equity measures find Social Security somewhat less progressive once workers and their dependents are combined in a single unit. This is largely due to the fact that some individuals with little or no earnings, hence "poor" by themselves, end up in high-earning households. The benefits they collect no longer count as transfers to low earners. However, the household approach presents analytical challenges. Multiple divorces and marriages, for example, make it difficult to define "household" on a lifetime basis. Moreover, age differences between spouses, which imply different retirement dates, complicate the calculation of "total household benefit" at a given point in time. Nonetheless, researchers believe that aggregating workers and their dependents into households provides insight by giving a more complete picture of their well-being. Social Security's distributional effects reflect program features, such as its benefit formula, and demographic patterns among its recipients, such as marriage between lower and higher earners. The benefit formula for retired workers favors lower earners by design, replacing a larger proportion of earnings for lower earners than for higher earners. Disability benefits use the same progressive benefit formula, and disability recipients are disproportionately lower lifetime earners. However, the extent to which these features favor lower earners may be offset to some degree by demographic patterns and other program features. Household formation reduces the system's tilt toward lower-income people because some of the lower-earning individuals helped by the program, in fact, live in high-income households. Differences in mortality rates may reduce rates of return for lower earners and increase rates of return for higher earners. In order to help ensure adequate incomes in retirement, Congress designed Social Security's benefit formula for retired workers to favor lower earners. When workers retire, Social Security uses their lifetime earnings records to determine their Primary Insurance Amount (PIA), on which initial monthly benefits are based. The PIA is determined by applying the Social Security benefit formula to a worker's Average Indexed Monthly Earnings (AIME).
The AIME is the monthly average of a worker's 35 best years of earnings, with earnings before age 60 indexed to average wage growth. For workers who become eligible for benefits in 2004, PIA equals 90 percent of the first $612 of AIME plus 32 percent of the next $3,077 of AIME plus 15 percent of AIME above $3,689. Consequently, the benefit formula replaces a higher proportion of pre-retirement earnings for lower lifetime earners than for higher lifetime earners. Figure 1 shows replacement rates for illustrative workers under the current benefit formula. The replacement rate for the low earner is 49 percent, while the rate for the high earner is only around 30 percent. The Disability Insurance (DI) program, which provides benefits to workers who are no longer able to work because of severe long-term disabilities, also favors lower lifetime earners. Disability Insurance not only provides earnings replacement during the pre-retirement years but generally results in beneficiaries receiving higher benefits in retirement than they would have received if they had earned the same amount of money but had not received disability benefits. Disability Insurance favors lower earners because it uses the same progressive benefit formula as retired worker benefits and because DI recipients are more likely to be lower earners. Disability Insurance recipients are disproportionately lower lifetime earners because an inability to continue working is necessary to qualify for benefits. Also, researchers have found that individuals with lower levels of educational attainment are more likely to experience disability. An analysis of lifetime benefits using a microsimulation model illustrates DI's tilt toward lower earners. To examine the distributional impact of DI, we simulated Social Security benefits for individuals born in 1985 under a scenario that pays retirement but not disability benefits and a scenario that pays all categories of Social Security benefits. Because simulations are sensitive to economic and demographic assumptions, it is more appropriate to compare benefits across the scenarios than to focus on the actual estimates themselves. Median lifetime Social Security benefits are 33 percent higher under the scenario that pays all types of Social Security benefits than under the scenario that does not pay disability benefits, with 30 percent of individuals receiving greater lifetime Social Security benefits due to the DI program. According to these simulations, DI increases median lifetime Social Security benefits for workers in the lowest fifth of lifetime earnings by 43 percent while increasing lifetime benefits for the top fifth by 14 percent (see fig. 2). Social Security favors lower earners less when considered from the household perspective. Some of the lower-earning individuals who gain from the benefit formula or disability benefits do not live in low-income households, because they are married to higher earners. The same is often true for lower earners who receive spouse and survivors benefits. Married individuals are eligible for the greater of their own worker benefits or 50 percent of their spouses' benefits. Similarly, widows and widowers are eligible for the larger of their own worker benefits or 100 percent of their deceased spouses' benefits. Because of the nature of spouses' and survivors' benefits, recipients are on average lower lifetime earners; effectively, they must earn less than their spouses to qualify.
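A minimal sketch of the two features just described, the 2004 bend-point formula and the spousal rule, appears below. The AIME values are hypothetical, and details such as actuarial reductions for early retirement are ignored.

```python
def pia_2004(aime):
    """Primary Insurance Amount under the 2004 bend points: 90 percent of
    the first $612 of AIME, 32 percent of the next $3,077, and 15 percent
    of AIME above $3,689."""
    return (0.90 * min(aime, 612)
            + 0.32 * max(0.0, min(aime, 3_689) - 612)
            + 0.15 * max(0.0, aime - 3_689))

def retirement_benefit(own_aime, spouse_aime=None):
    """Worker benefit including the spousal rule: a married individual
    receives the greater of his or her own PIA or 50 percent of the
    spouse's PIA (early-retirement reductions and other details ignored)."""
    own = pia_2004(own_aime)
    if spouse_aime is None:
        return own
    return max(own, 0.5 * pia_2004(spouse_aime))

# Replacement rates fall as AIME rises (hypothetical AIME values).
for aime in (1_000, 3_000, 6_000):
    print(f"AIME {aime}: replacement rate {pia_2004(aime) / aime:.0%}")

# A low earner married to a high earner can receive a benefit well above
# his or her own worker benefit.
print(round(retirement_benefit(own_aime=400, spouse_aime=6_000)))
```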
However, many of the lower-earning individuals that the system favors through spouses' and survivors' benefits actually live at some point in higher-income households because of marriage. Some have suggested that household formation may have less of an impact on the degree to which Social Security favors lower earners in the future. Increased female labor force participation and changing marital patterns suggest there will be smaller earnings differences between spouses in the future as well as fewer people who are married long enough to qualify for spouses' and survivors' benefits. Consequently, there may be fewer instances of the system providing high replacement rates to low-earning spouses from high-income households. An analysis of simulated benefits and taxes illustrates how the system favors lower earners less when considered from the household perspective. For individuals born in 1985, figure 3 depicts the ratio of benefits received to taxes paid for the top and bottom fifths of earnings from an individual perspective and a household perspective. For example, the first bar indicates that individuals in the bottom fifth of earnings receive lifetime benefits that are 1.3 times the lifetime taxes they paid to the program. When analyzed from an individual perspective, individuals are classified by their own lifetime earnings and ratios are calculated for their own taxes and benefits. When analyzed from a household perspective, individuals are classified by household earnings and ratios are calculated for household taxes and benefits. In both cases, benefit-to-tax ratios are higher for the bottom fifth than for the top fifth, suggesting that the system favors lower earners. However, the difference in the benefit-to-tax ratios is smaller when considered from the household perspective. The degree to which the benefit formula and disability benefits favor lower earners may be offset to the extent that lower earners have higher mortality rates than do higher earners. A number of studies suggest that lower earners do not live as long as higher earners. As a result, lower earners are likely to receive retirement benefits for fewer years than higher earners. Researchers have generally found that, to some degree, the relationship between mortality rates and earnings reduces rates of return for lower earners and increases rates of return for higher earners. Social Security taxes are levied on earnings up to a maximum level set each year, and earnings beyond the threshold are not counted when calculating benefits. In 2004, the cap on taxable earnings is $87,900, and in recent years about 6 percent of workers had earnings above the cap. Policy makers often argue that the cap helps higher earners because it results in their paying a smaller percentage of their earnings than do individuals whose earnings do not exceed the cap. Also, while the cap limits both lifetime contributions and benefits, it increases equity measures such as benefit-to-tax ratios and rates of return for high earners. If the cap were repealed, the additional contributions paid by high earners would only be partially reflected in increased benefits, because the benefit formula is weighted toward lower earners. Simulations illustrate that the cap on taxable earnings modestly favors higher earners for individuals born in 1985. We simulate benefits and taxes under a scenario with the cap on taxable earnings and one without the cap.
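The sketch below illustrates, in highly simplified form, how such a comparison might be constructed: present values of lifetime taxes and benefits are calculated with and without a taxable maximum, and the resulting benefit-to-tax ratios are compared. The earnings path, discount rate, and simplifications (no wage indexing, mortality, or spousal benefits, and a cap held fixed over time) are assumptions for illustration, not the assumptions used in the GEMINI simulations described later.

```python
PAYROLL_TAX = 0.124        # combined employer and employee OASDI tax rate
REAL_RATE = 0.03           # real discount rate used for present values

def pia_2004(aime):
    # 2004 bend-point formula: 90/32/15 percent of successive AIME brackets
    return (0.90 * min(aime, 612)
            + 0.32 * max(0.0, min(aime, 3_689) - 612)
            + 0.15 * max(0.0, aime - 3_689))

def benefit_to_tax_ratio(annual_earnings, cap=87_900, retire_years=18):
    """Present value of lifetime benefits divided by present value of
    lifetime payroll taxes for one stylized worker."""
    taxable = [min(e, cap) if cap else e for e in annual_earnings]
    pv_taxes = sum(PAYROLL_TAX * t / (1 + REAL_RATE) ** i
                   for i, t in enumerate(taxable))
    aime = sum(sorted(taxable, reverse=True)[:35]) / 35 / 12
    work_years = len(annual_earnings)
    pv_benefits = sum(12 * pia_2004(aime) / (1 + REAL_RATE) ** (work_years + j)
                      for j in range(retire_years))
    return pv_benefits / pv_taxes

high_earner = [150_000] * 40                          # hypothetical earnings path
print(round(benefit_to_tax_ratio(high_earner), 2))            # with the cap
print(round(benefit_to_tax_ratio(high_earner, cap=None), 2))  # cap repealed
# Repealing the cap raises this worker's lifetime taxes proportionally more
# than it raises benefits, so the benefit-to-tax ratio falls.
```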
Figure 4 shows household benefit-to-tax ratios by top and bottom fifth of earnings and top percentile of earnings. When the cap is removed, the median benefit-to-tax ratio for the bottom fifth remains unchanged and the ratio for the top fifth of earnings decreases from 0.61 to 0.59. Although 83 percent of households in the top fifth are affected by repealing the cap, the increase in median lifetime taxes, 8.9 percent, is almost offset by the increase in median lifetime benefits, 6.5 percent. However, the impact on very high earners is larger. According to these simulations, the median benefit-to-tax ratio for households in the top 1 percent of earnings decreases from 0.52 to 0.45 when the cap is removed, indicating that very high earners gain from the cap; the increase in median lifetime taxes paid by this group, 50.4 percent, is not offset as much by the increase in their median lifetime benefits, 34.4 percent. We analyzed three proposals that illustrate the variation in the potential distributional effects of different approaches to reform. CSSS Model 2 would create a new system of voluntary individual accounts while reducing Social Security's defined benefits overall but increasing them for surviving spouses and lower earners. The Ferrara proposal would create a system of voluntary individual accounts that would ultimately be large enough to completely replace Social Security's old-age benefits for workers and their spouses. The Diamond-Orszag proposal would restore long-term solvency without creating a new system of individual accounts by reducing benefits and increasing revenues while also increasing benefits for surviving spouses and lower earners. Under Model 2 of the President's Commission to Strengthen Social Security:

- For individuals choosing to participate, the Social Security system would redirect 4 percentage points of the payroll tax (up to a $1,000 annual limit) into personal investment accounts. Participating individuals could access their accounts in retirement, but Social Security defined benefits would be reduced to reflect the amount diverted to their individual accounts. On net, benefits would increase for individuals whose accounts earned more than a 2 percent return beyond inflation.

- Social Security defined benefits would be lower than benefits promised under the current benefit formula. Changes to the benefit formula would slow the growth in initial benefits from wage growth to price growth. According to the Social Security Administration's (SSA) Office of the Chief Actuary, these formula changes apply to initial benefits for all types of beneficiaries, including disabled workers.

- Social Security defined benefits would be enhanced for certain surviving spouses and for low earners. When fully implemented, initial benefits for certain low-wage workers with steady work histories could be raised by as much as 40 percent. Beneficiaries who lived longer than their spouses would receive the larger of their own benefit or 75 percent of the benefit that would be received by the couple if both spouses were alive.

We used simulations to examine how Model 2 might affect the distribution of Social Security benefits. We did not examine the distribution of equity measures such as benefit-to-tax ratios or rates of return, because the proposal's individual account feature requires general revenue transfers. General revenue transfers are problematic when calculating equity measures because it is difficult to determine who ultimately pays for the additional financing.
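To make the account-and-offset mechanism described above concrete, the sketch below applies those provisions directly: 4 percentage points of payroll (up to $1,000 a year, with the cap held fixed for simplicity) flow into the account, and the traditional defined benefit is reduced by the value of those contributions compounded at a 2 percent real rate, so a participant gains on net only if the account earns more than 2 percent beyond inflation. The earnings path and account returns are hypothetical.

```python
def model2_account_and_offset(annual_earnings, real_return):
    """Accumulate Model 2 account contributions (4 percent of earnings,
    capped at $1,000 a year) at an assumed real return, alongside the
    benefit offset, which compounds the same contributions at 2 percent."""
    balance = offset = 0.0
    for earnings in annual_earnings:
        contribution = min(0.04 * earnings, 1_000)
        balance = balance * (1 + real_return) + contribution
        offset = offset * 1.02 + contribution
    return balance, offset

earnings_path = [30_000] * 40                 # hypothetical steady earner
for r in (0.01, 0.02, 0.05):
    balance, offset = model2_account_and_offset(earnings_path, r)
    print(f"real return {r:.0%}: account {balance:10,.0f}   "
          f"offset {offset:10,.0f}   net gain {balance - offset:10,.0f}")
# The net gain is positive only when the account's real return exceeds the
# 2 percent rate used for the offset, consistent with the provision above.
```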
Because simulations are sensitive to economic and demographic assumptions, it is more appropriate to compare benefits across the scenarios than to focus on the actual estimates themselves. Since account participation is voluntary, we used two simulations to examine the effects of the Model 2 provisions, one with universal account participation (Model 2-100 percent) and one with no account participation (Model 2-0 percent). We also assumed that all account participants would invest in the same portfolios; consequently we did not capture any distributional effect that might occur if lower earners were to make different account participation or investment decisions than higher earners. We compared benefits under Model 2 with hypothetical benchmark policy scenarios that would achieve 75-year solvency either by only increasing payroll taxes or by only reducing benefits. The tax-increase, or "promised benefits," benchmark scenario pays benefits defined by the current benefit formula and raises payroll taxes to bring the Social Security system into financial balance. The proportional benefit-reduction, or "funded benefits," benchmark scenario maintains current tax rates and achieves financial balance by gradually phasing in proportional benefit reductions. In order to compare Model 2 with the benchmarks, we assumed all account participants convert their account balances at retirement into periodic monthly payments. We did not simulate other sources of retirement income, such as employer pensions or other individual retirement savings, and such sources may interact with Social Security policy. (See app. I for more details on the GEMINI microsimulation model, our benchmark policy scenarios, and our assumptions for CSSS Model 2.) Given our assumptions, our analysis suggests that Model 2 would favor lower earners somewhat more than the benchmark scenarios. Figure 5 shows the share of household lifetime benefits received by the bottom and top fifths of earnings for individuals born in 1985 for both Model 2 and for the promised and funded benefits scenarios. For example, households in the bottom fifth of earnings received about 12.5 percent of all lifetime benefits under both benchmark scenarios. According to our simulations, households in the bottom fifth of earnings would receive greater shares of lifetime benefits under both Model 2 scenarios than under the benchmark scenarios, while households in the top fifth of earnings would receive smaller shares under Model 2 than under the benchmarks. It should be noted that while the simulations suggest that the distribution of benefits under Model 2 is more progressive than under the benchmarks, this does not mean benefit levels are always higher for the bottom fifth under Model 2. (See fig. 6.) According to our simulations, median household lifetime benefits for the bottom fifth under Model 2-0 percent would be 3 percent higher than under the funded benefits scenario but 21 percent lower than under the promised benefits scenario. Median household lifetime benefits for the bottom fifth under Model 2-100 percent would be 26 percent higher than under the funded benefits scenario but 4 percent lower than under the promised benefits scenario. While Model 2 may improve the relative position of lower earners, it may not improve the adequacy of their benefits. To further understand how Model 2 distributes benefits toward lower earners, we examined the distributional effects of each of its core features.
First we simulated a version of Model 2-100 percent that included the individual accounts and the reductions in Social Security defined benefits, but not the $1,000 cap on account contributions or the enhanced benefits for low earners and survivors. Next we simulated a version that included the defined-benefit reductions and the individual accounts with the $1,000 cap on account contributions. Finally, we simulated the complete Model 2-100 percent scenario, which included the enhanced benefits for lower earners and survivors. Our analysis suggests that the effect of the individual accounts and defined benefit reductions, which favor higher earners, would be more than offset by the limit on account contributions and the enhanced benefits for lower earners and survivors. Figure 7 shows the distributional impact of each reform feature. First, we simulated adding the individual accounts and reducing Social Security defined benefits. The share of benefits received by the bottom fifth of earnings falls relative to the benchmarks by as much as a percentage point, and the share received by the top fifth increases by about 1.5 percentage points. Under this scenario, benefits from individual account balances effectively replace some of the benefits calculated from the Social Security benefit formula and the disability program. This shift favors higher earners because, unlike the benefit formula, accounts by themselves do not provide higher replacement rates for lower earners and because DI recipients are more likely to be lower earners. Figure 7 also shows the impact of the cap on contributions and the enhanced benefits for low earners and survivors. Adding the cap on contributions would increase the share of benefits for the lowest fifth of earnings by more than a percentage point and would reduce the top fifth's share by two percentage points. The cap would reduce total benefits more for higher earners than for lower earners because higher earners have a greater proportion of earnings above the limit. As expected, adding the enhanced benefits for low earners and survivors also favors lower earners. The lowest fifth's share of benefits increases by about a percentage point, and the top fifth's share of benefits decreases by almost a percentage point. It should be emphasized that these simulations are only for individuals born in 1985, and the distributional impact of Model 2 could be different for individuals born in later years. For example, under the proposal, initial Social Security defined benefits only grow with prices, while initial benefits from account balances grow with wages. Since wages generally grow faster than prices, Social Security defined benefits will decline as a proportion of total benefits, reducing the importance of the progressive benefit formula, disability benefits, and the enhanced benefits for low earners and survivors. It should also be noted that the account feature of Model 2-100 percent likely exposes recipients to greater financial risk. Greater exposure to risk may not affect the shares of benefits received by the bottom and top fifths of earnings. However, greater risk may be more problematic for lower earners who likely have fewer resources to fall back on if their accounts perform poorly. The "Progressive Proposal for Social Security Personal Accounts," offered by Peter Ferrara, would establish voluntary, progressive individual accounts and reduce the Social Security retirement and aged survivor benefits for those who participate.
A governing board would administer the accounts centrally in a fashion similar to the Thrift Savings Plan for federal employees. Specifically, under the proposal:

- Account contributions would be redirected from the Social Security payroll tax. They would equal 10 percent of the first $10,000 of annual earnings and 5 percent of earnings over $10,000 up to the maximum taxable earnings level, which is $87,900 in 2004. The $10,000 threshold would increase annually according to Social Security's national Average Wage Index.

- Participating workers would be guaranteed that the combined benefits from Social Security's defined benefit and their personal accounts would at least equal the Social Security benefits that current law promises them, as long as they choose the default investment option. The default investment option would have an allocation of 65 percent in broad indexed equity funds and 35 percent in broad indexed corporate bond funds.

- Those who never participate in the personal account option would be provided benefits promised by the current system.

- To continue paying benefits while also making deposits to the accounts, funds would be transferred from the Treasury's general fund.

- The accounts would eventually completely replace Social Security's old-age benefits for workers and their spouses, under the assumptions for investment returns used by Social Security actuaries. Accordingly, the proposal anticipates reductions in the Social Security payroll tax in the long term that would be identical for all workers.

- Social Security benefits for workers who become disabled or who die before retirement would not be affected.

Under the Ferrara proposal, no changes would be made to the Social Security defined benefits scheduled under current law for those who choose not to participate in the accounts or for whom the benefit guarantee would apply. In addition, benefits for disabled workers and those who die before retirement would remain in place, and the distributional effects of these parts of Social Security would remain largely unchanged. Thus, any changes to the distribution of benefits would occur through the individual accounts for those choosing the accounts. All workers would initially continue to pay payroll taxes at the same rate as under current law, which is the same for all earnings up to the maximum taxable earnings. At the same time, lower earners would have proportionally larger contributions made from the payroll tax to their voluntary individual accounts. As a result, holding all else equal, the annuities that lower earners could receive from their accounts would replace a higher share of their pre-retirement earnings than annuities for higher earners. However, without rigorous quantitative analysis, it remains unclear how the distributional effects of the accounts would compare with and interact with the effects of the current system. In particular, actual investment returns could vary depending on individuals' investment choices or on market performance, and in some cases returns may not be high enough to completely replace Social Security benefits, in which case the guarantee would apply. The Ferrara proposal also would have significant distributional effects from an equity perspective due to its revenue provisions. The general revenue transfers needed to cover the transition to individual accounts could have substantial effects on rates of return and other equity measures.
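A minimal sketch of the proposal's progressive contribution schedule, using the 2004 figures given above, is shown below; the earnings levels used to exercise it are hypothetical.

```python
def ferrara_contribution(earnings, threshold=10_000, taxable_max=87_900):
    """Annual account contribution under the proposal: 10 percent of the
    first $10,000 of earnings plus 5 percent of earnings above that, up to
    the maximum taxable earnings level ($87,900 in 2004)."""
    taxable = min(earnings, taxable_max)
    return 0.10 * min(taxable, threshold) + 0.05 * max(0.0, taxable - threshold)

for earnings in (10_000, 30_000, 87_900):     # hypothetical annual earnings
    c = ferrara_contribution(earnings)
    print(f"earnings {earnings:>7,}: contribution {c:8,.0f} "
          f"({c / earnings:.1%} of earnings)")
# The contribution rate falls from 10 percent at $10,000 of earnings to
# roughly 5.6 percent at the taxable maximum, so lower earners have a
# larger share of their earnings redirected into their accounts.
```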
Also, once the transition is complete and it becomes possible under the proposal to reduce payroll taxes, such tax reductions would also affect equity measures and how they are distributed. A proposal offered by Peter Diamond and Peter Orszag would restore Social Security's long-term solvency by increasing revenues and decreasing benefits while also increasing benefits for selected old-age survivors and low earners. Also, provisions in the proposal ensure that benefits in the aggregate are not reduced for workers who become disabled and for the young survivors of workers who die before retirement. Specifically, under the proposal:

- Benefit reductions: Social Security benefits would decrease by having initial benefits grow at a slower rate to reflect expected gains in life expectancy. Benefits would decrease for higher earners through a change to the benefit formula. Benefits would decrease by an additional proportional 0.30 percent beginning in 2023.

- Revenue increases: Payroll taxes would gradually increase by raising the maximum earnings level subject to the payroll tax, which is $87,900 in 2004. Also, Social Security would cover all new state and local government employees. (This would increase revenues from the payroll tax immediately but would not result in additional benefit payments until the newly covered workers became eligible for benefits.) In addition, payroll taxes would increase 3 percentage points (divided equally between employees and employers) for all earnings above the maximum taxable earnings level. Benefit calculations would not reflect the additional earnings taxed under this provision. The tax on earnings above the maximum taxable earnings level would increase by an additional 0.51 percent annually beginning in 2023. Payroll taxes on earnings at or below the maximum taxable earnings level would increase by an additional 0.255 percent annually beginning in 2023.

- Benefit enhancements: Benefits would increase for lower earners through a new benefit formula for qualifying workers. This provision is conceptually similar to the enhanced benefit for lower earners under CSSS Model 2 but uses a different formula. Benefits would increase for old-age surviving spouses to 75 percent of the benefit the married couple would have received if both were still alive. This provision is conceptually similar to the enhanced survivor benefit under CSSS Model 2 but is specified somewhat differently. Benefits for those workers who become disabled and their dependents and for the young survivors of workers who die before retirement would increase under a "Super-COLA" through changes to the formula for calculating initial benefits, which would be recalculated each year benefits are received. This provision is designed so that the other reform provisions do not affect these beneficiaries.

The Diamond-Orszag proposal would make a variety of benefit changes that would affect the distribution of benefits. Reducing benefits to reflect expected gains in life expectancy would be a proportional reduction, decreasing benefits by the same percentage across all earnings levels. The additional reductions beginning in 2023 would also be proportional. Proportional reductions do not, by definition, change the share of benefits received by each segment of the earnings distribution. Still, they represent a downsizing of a redistributive benefit program. As a result, the size of the redistributions would be smaller under these proportional reductions than under the current system, holding all else equal.
However, in addition, the proposal contains another benefit reduction that affects only higher earners, which would reduce their share of total benefits and increase the shares of all other workers not affected by the reduction. Moreover, the proposal would increase benefits for lower earners and surviving aged spouses. The proposal also preserves benefits for workers who become disabled and for the young survivors of workers who die before retirement. These workers tend to be lower earners, so all of the proposal's benefit increases would generally increase the share of total benefits received by lower earners. Finally, the proposal includes a variety of revenue increases, most of which increase the tax burden on higher earners relative to lower earners. As a result, the distribution of rates of return and other equity measures would favor lower earners more and higher earners less than under the current system. By design, Social Security distributes benefits and contributions across workers and their families in a variety of ways. These distributional effects illustrate how the program balances the goal of helping ensure adequate incomes with the goal of giving all workers a fair deal on their contributions. Any changes to Social Security would potentially alter those distributional effects and the balance between those goals. Therefore, policy makers need to understand how to evaluate distributional effects of alternative policies. The various evaluation approaches reflect varying emphases on Social Security's adequacy and equity goals, so the methodological choices are connected inherently to policy choices. Regardless of policy perspectives, methodological issues such as the effects of general revenue transfers muddy distributional analysis. Moreover, greater progressivity is not the same thing as greater adequacy. Under some reform scenarios, Social Security could distribute benefits more progressively than current law yet provide lower, less adequate benefits. At the same time, our analysis shows that reform provisions that favor lower earners can offset other provisions that disfavor them. In addition, greater progressivity may result in less equity. As a result, any evaluations should consider a proposal's provisions taken together as a whole. Moreover, distributional effects are only one of several kinds of effects proposals would have. A comprehensive evaluation is needed that considers a range of effects together. In our criteria for evaluating reform proposals, progressivity is just one of several aspects of balancing adequacy and equity. We provided SSA an opportunity to comment on the draft report. The agency provided us with written comments, which appear in Appendix II. In general, SSA concurred with the methodology, overall findings, and conclusions of the report, noting that our modeling results are consistent with SSA's internal efforts to model the features of Model 2 of the Commission to Strengthen Social Security. Many of SSA's comments, for example those regarding progressivity measures and equity measure methodology, involve clarifying our presentation or conducting additional analyses to provide more consistency with other analyses or to extend the readers' understanding. We revised our draft in response to these suggestions as appropriate, given our time and resource constraints.
SSA agreed with GAO’s discussion of the complications involved in applying equity measures to reform proposals that include general revenue transfers and concurred that a satisfactory resolution of the issue is complex and methodologically troublesome. SSA suggested some additional analysis relying on some simplifying assumptions, for example assuming any general revenue transfer is financed through a payroll tax increase, that one could use to tackle the problem. We agree that despite its methodological complexity, the use of general revenue transfers raises many important distributional issues. However, the analytical difficulties raised by this issue would require thoughtful and deliberate research that was beyond the scope of the current study, given our time and resource constraints. SSA also had suggestions concerning our choice of benchmark policy scenarios against which to compare reform proposals. For example, while SSA is supportive of GAO’s development of standard benchmarks, they note that our benchmarks do not match the sustainable solvency achieved by Model 2 beyond 75 years and that this distinction should be noted in the report. SSA also suggests that a third benchmark be considered that would characterize a scenario where no reform action is taken and the program could only pay benefits equal to incoming payroll tax revenues. As we have noted in the past, we agree that sustainable solvency is an important objective and that the GAO benchmarks do not achieve solvency beyond the 75 year period. We share SSA’s emphasis on the importance of careful and complete annotation and we have clarified our report, where appropriate, to minimize the potential for misinterpretation or misunderstanding on this matter. However, in this case, we did not revise our benchmarks because we recognized (along with SSA actuaries we consulted early in the assignment) that the use of sustainable benchmarks would not have a noticeable effect on an analysis of the shape of the distribution of benefits and taxes. Regarding the use of a “no action” benchmark, we continue to believe that comparing a proposal that starts relatively soon to one that posits that no legislative action is ever taken does not provide the consistent bounds for reform captured by our current benchmarks. Appendix I of our report discusses the construction and rationale for the benchmarks used in this report. In our view, our set of benchmarks provides a fair and objective measuring stick with which to compare alternative proposals. SSA also provided technical and other clarifying comments that we incorporated as appropriate. We will send copies of this report to appropriate congressional committees and other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215, Charles Jeszeck at (202) 512-7036, or Ken Stockbridge at (202) 512-7264, if you have any questions about this report. Other major contributors include Gordon Mermin and Seyda Wentworth. Genuine Microsimulation of Social Security and Accounts (GEMINI) is a microsimulation model developed by the Policy Simulation Group (PSG). GEMINI simulates Social Security benefits and taxes for large representative samples of people born in the same year. GEMINI simulates all types of Social Security benefits including retired workers’, spouses’, survivors’, and disability benefits. 
It can be used to model a variety of Social Security reforms including the introduction of individual accounts. GEMINI uses inputs from two other PSG models, the Social Security and Accounts Simulator (SSASIM), which has been used in numerous GAO reports, and the Pension Simulator (PENSIM), which has been developed for the Department of Labor. GEMINI relies on SSASIM for economic and demographic projections and relies on PENSIM for simulated life histories of large representative samples of people born in the same year and their spouses. Life histories include educational attainment, labor force participation, earnings, job mobility, marriage, disability, childbirth, retirement, and death. Life histories are validated against data from the Survey of Income and Program Participation, the Current Population Survey, Modeling Income in the Near Term (MINT3), and the Panel Study of Income Dynamics. Additionally, any projected statistics (such as life expectancy, employment patterns, and marital status at age 60) are, where possible, consistent with intermediate-cost projections from the Social Security Administration's Office of the Chief Actuary (OCACT). At their best, such models can only provide very rough estimates of future incomes. However, these estimates may be useful for comparing future incomes across alternative policy scenarios and over time. For this report, we used GEMINI to simulate Social Security benefits and taxes for 100,000 individuals born in 1985. Benefits and taxes were simulated under our tax-increase (promised benefits) and proportional benefit-reduction (funded benefits) benchmarks (described below) and under Model 2 of the President's Commission to Strengthen Social Security (CSSS). We also simulated variations of these scenarios to examine the impact of disability benefits, the cap on taxable earnings, each feature of Model 2, and different assumptions on the return to equities. To examine lifetime earnings, benefits, and taxes on a household basis, we chose a "shared" concept that researchers have used with the MINT3 and DYNASIM microsimulation models. In years that individuals are married, we assign them half of their own earnings, benefits, and contributions and half of their spouses' earnings, benefits, and contributions. In years that individuals are single, we assign them their entire earnings, benefits, and contributions. This technique accounts for household dynamics including divorce, remarriage, and widowhood. To facilitate our modeling analysis, we made a variety of assumptions regarding economic and demographic trends and how CSSS Model 2's individual accounts would work. In choosing our assumptions, we focused our analysis on illustrating relevant points about distributional effects and held equal, as much as possible, any variables that either were not relevant to that focus or would unduly complicate it. As a result of these assumptions as well as issues inherent in any modeling effort, our analysis has some key limitations, especially relating to risk, individual account decisions, and changes over time. The simulations are based on economic and demographic assumptions from the 2003 Social Security trustees' report. We used trustees' assumptions for inflation, real wage growth, mortality decline, immigration, labor force participation, and interest rates. The simulations assumed that mortality rates vary by educational attainment and disability status.
In every year, mortality rates implied by the trustees' assumptions are increased for those with lower levels of education and reduced for those with higher levels of education. For example, mortality rates are multiplied by 1.5 for women who do not complete high school, while rates are multiplied by 0.7 for women with four-year college degrees. Adjustment factors for education were chosen to calibrate life expectancy by demographic group with the MINT3 simulation model. Mortality rates are multiplied by a factor of 2 for Disability Insurance (DI) recipients. The adjustment factor for disability was chosen so PENSIM life histories produced aggregate results consistent with the 2003 Social Security Trustees Report. Assuming constant adjustment factors over time does not capture any convergence in mortality rates as a birth cohort ages. Differences in mortality rates across education levels may narrow by the time a birth cohort retires. If that is the case, our simulations overstate differences in life expectancy at retirement. Rather than model account participation, we simulate benefits under two scenarios, one where all individuals participate and another scenario where no one participates. As a result, we do not capture any distributional effects that might result from account participation varying by earnings level. For instance, if lower earners are less likely to participate in the individual accounts, then our simulations may overstate their share of benefits, as account participation is likely to increase benefits. Like the analysis of Model 2 by OCACT, we assume all individuals invest in the same portfolio: 50 percent in equities, 30 percent in corporate bonds, and 20 percent in Treasury bonds. We do not capture any distributional effects that might result if portfolio choice varies by earnings level. For instance, if lower earners were more risk averse and therefore chose more conservative portfolios, our simulations would overstate the share of benefits for lower earners. We use the same assumptions for asset returns as OCACT: In all years real returns are 6.5 percent for equities, 3.5 percent for corporate bonds, and 3 percent for Treasury bonds, with an annual administrative expense of 30 basis points. For sensitivity analysis, we simulated a version of Model 2 that assumed a 4.9 percent real return to equities, a version that assumed an 8.7 percent real return to equities, and a version that assumed the return to equities varied stochastically across individuals and over time. Shares of benefits by earnings quintile were similar under all specifications. However, if portfolio choice or participation in accounts varied by earnings quintile, then shares of benefits might be more sensitive to rates of return. In order to compare account balances with Social Security defined benefits, we follow the assumption of OCACT that individuals fully annuitize their account balances at retirement. We assume individuals purchase inflation-indexed annuities, with married individuals purchasing inflation-indexed joint and two-thirds survivor annuities. The commission proposal, however, also allows participants to access their accounts through regular monthly withdrawals or through lump sum distributions if their monthly benefits (Social Security defined benefits and any annuity payments) are enough to keep them out of poverty.
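The sketch below illustrates how an account balance can be converted into an inflation-indexed life annuity of the kind assumed here: the annual payment equals the balance divided by the present value of survival-weighted payments at the real interest rate. The survival curve and rate shown are placeholders for illustration, not the OCACT mortality or return assumptions used in our simulations.

```python
def real_annuity_payment(balance, survival_probs, real_rate=0.03):
    """Annual payment from an inflation-indexed single-life annuity: the
    account balance divided by the sum of survival-probability-weighted
    real discount factors, one for each potential year of payout."""
    annuity_factor = sum(prob / (1 + real_rate) ** t
                         for t, prob in enumerate(survival_probs))
    return balance / annuity_factor

# Placeholder survival curve from retirement: the probability of being
# alive declines roughly linearly to zero over 35 years.
survival = [max(0.0, 1 - year / 35) for year in range(35)]
print(round(real_annuity_payment(100_000, survival)))
# A joint and two-thirds survivor annuity, as assumed for married
# participants, would produce a smaller annual payment because expected
# payouts continue over two lifetimes.
```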
Given that few defined-contribution pension recipients currently choose to annuitize, it is possible that many retirees under Model 2 would not annuitize their accounts. To the extent that withdrawal decisions vary by earnings level, there may be distributional consequences that our simulations do not capture. For instance, some people may withdraw money too quickly, leaving themselves with inadequate income later in retirement, and such behaviors could vary by earnings level. Our quantitative analysis does not reflect differences in risk across policy scenarios. Because of financial market fluctuations, individual accounts likely expose recipients to greater financial risk. For sensitivity analysis, we simulated a version of Model 2 where the return to equities varied stochastically across individuals and over time. Stochastic rates of return had very little impact on shares of benefits received by earnings quintiles. However, greater risk may be more problematic for lower earners, who likely have fewer resources to fall back on if their accounts perform poorly. Consequently, lower earners may be more risk averse and therefore suffer greater utility loss from increased risk. We simulated benefits for individuals born in 1985 because Model 2's reform features would be almost fully phased in for such workers. However, the distributional effects of Model 2 might change over time. For example, under the proposal, initial Social Security defined benefits grow only with prices, while initial benefits from account balances grow with wages. Since wages generally grow faster than prices, Social Security defined benefits will decline as a proportion of total benefits, reducing the importance of the progressive benefit formula, disability benefits, and the enhanced benefits for low earners and survivors. To capture the distributional impact of pre-retirement mortality, we calculated benefit-to-tax ratios and lifetime benefits for all sample members who survived past age 24. However, our measure of well-being, lifetime earnings, may not be the best way to assess the well-being of those who die before retirement. Some high-wage workers are classified as low lifetime earners simply because they did not live very long, and consequently our analysis overstates the degree to which those who die young are classified as low earners. As a result, our measures underestimate the degree to which Social Security favors lower earners under all of the scenarios we analyze. For sensitivity analysis, we also calculated benefit-to-tax ratios and lifetime benefits only for sample members who lived to age 67 and beyond. While all of the measures of progressivity were lower, the findings were unchanged as the relationships across all of the scenarios remained the same. To assess the reliability of simulated data from GEMINI, we reviewed PSG's published validation checks, examined the data for reasonableness and consistency, performed sensitivity analysis, and compared our results with a study by the actuaries at the Social Security Administration. PSG has published a number of validation checks of its simulated life histories. For example, simulated life expectancy is compared with projections from the Social Security Trustees; simulated benefits at age 62 are compared with administrative data from SSA; and simulated educational attainment, labor force participation rates, and job tenure are compared with values from the Current Population Survey.
We found that simulated statistics for the life histories were reasonably close to the validation targets. For sensitivity analysis, we simulated benefits and taxes for policy scenarios under a number of alternative specifications, including higher and lower returns to equities, stochastic returns to equities, and limiting the sample to those who survive to retirement. Our findings were consistent across all specifications. Finally, we compared our results with those in a memo from the actuaries at the Social Security Administration. Our finding that the lowest earnings quintile receives a greater share of benefits under Model 2 with 100 percent participation than under promised benefits is consistent with the actuaries' projections of benefits for illustrative high- and low-earning couples in 2052. Also, in a previous report we found that GEMINI simulations of promised Social Security benefits were similar to MINT simulations for the 1955 birth cohort. We conclude from our assessment that simulated data from GEMINI are sufficiently reliable for the purposes of this report, particularly since we focus on the differences in simulated measures across scenarios, as opposed to the actual estimates themselves. According to current projections of the Social Security trustees for the next 75 years, revenues will not be adequate to pay full benefits as defined by the current benefit formula. Therefore, estimates of future Social Security benefits should reflect that actuarial deficit and account for the fact that some combination of benefit reductions and revenue increases will be necessary to restore long-term solvency. To illustrate a full range of possible outcomes, we developed hypothetical benchmark policy scenarios that would achieve 75-year solvency either by only increasing payroll taxes or by only reducing benefits. In developing these benchmarks, we identified criteria to guide their design and selection. Our tax-increase-only benchmark simulates "promised benefits," or those benefits promised by the current benefit formula, while our benefit-reduction-only benchmarks simulate "funded benefits," or those benefits for which currently scheduled revenues are projected to be sufficient. Under the latter policy scenarios, the benefit reductions would be phased in between 2005 and 2035 to strike a balance between the size of the incremental reductions each year and the size of the ultimate reduction. At our request, SSA actuaries scored our benchmark policies and determined the parameters for each that would achieve 75-year solvency. Table 1 summarizes our benchmark policy scenarios. For our benefit-reduction scenarios, the actuaries determined these parameters assuming that disabled and survivor benefits would be reduced on the same basis as retired worker and dependent benefits. If disabled and survivor benefits were not reduced at all, reductions in other benefits would be deeper than shown in this analysis. According to our analysis, appropriate benchmark policies should ideally be evaluated against the following criteria:
1. "Distributional neutrality": the benchmark should reflect the current system as closely as possible while still restoring solvency. In particular, it should try to reflect the goals and effects of the current system with respect to redistribution of income. However, there are many possible ways to interpret what this means, such as
a. producing a distribution of benefit levels with a shape similar to the distribution under the current benefit formula (as measured by coefficients of variation, skewness, kurtosis, etc.);
b. maintaining a proportional level of income transfers in dollars;
c. maintaining proportional replacement rates; and
d. maintaining proportional rates of return.
2. Demarcating upper and lower bounds: These would be the bounds within which the effects of alternative proposals would fall. For example, one benchmark would reflect restoring solvency solely by increasing payroll taxes and would therefore maximize benefit levels, while another would solely reduce benefits and therefore minimize payroll tax rates.
3. Ability to model: The benchmark should lend itself to being modeled within the GEMINI model.
4. Plausibility: The benchmark should serve as a reasonable alternative within the current debate; otherwise, the benchmark could be perceived as an invalid basis for comparison.
5. Transparency: The benchmark should be readily explainable to the reader.
Our tax-increase-only benchmark would raise payroll taxes once and immediately by the amount of Social Security's actuarial deficit as a percentage of payroll. It results in the smallest ultimate tax rate of those we considered and spreads the tax burden most evenly across generations; this is the primary basis for our selection. The later that taxes are increased, the higher the ultimate tax rate needed to achieve solvency, and in turn the higher the tax burden on later taxpayers and the lower on earlier taxpayers. Still, any policy scenario that achieves 75-year solvency only by increasing revenues would have the same effect on the adequacy of future benefits in that promised benefits would not be reduced. Nevertheless, alternative approaches to increasing revenues could have very different effects on individual equity. We developed alternative benefit-reduction benchmarks for our analysis. For ease of modeling, all benefit-reduction benchmarks take the form of reductions in the benefit formula factors; they differ in the relative size of those reductions across the three factors, which are 90, 32, and 15 percent under the current formula. Each benchmark has three dimensions of specification: scope, phase-in period, and the factor changes themselves. For our analysis, we apply benefit reductions in our benchmarks very generally to all types of benefits, including disability and survivors' benefits as well as old-age benefits. Our objective is to find policies that achieve solvency while reflecting the distributional effects of the current program as closely as possible. Therefore, it would not be appropriate to reduce some benefits and not others. If disability and survivors' benefits were not reduced at all, reductions in other benefits would be deeper than shown in this analysis. We selected a phase-in period that begins with those reaching age 62 in 2005 and continues for 30 years. We chose this phase-in period to achieve a balance between two competing objectives: (1) minimizing the size of the ultimate benefit reduction and (2) minimizing the size of each year's incremental reduction to avoid "notches," or unduly large incremental reductions. Notches create marked inequities between beneficiaries close in age to each other. It is generally agreed that later birth cohorts already experience lower rates of return on their contributions under the current system.
Therefore, minimizing the size of the ultimate benefit reduction would also minimize further reductions in rates of return for later cohorts. The smaller each year's reduction, the longer it will take for benefit reductions to achieve solvency, and in turn the deeper the eventual reductions will have to be. However, the smallest possible ultimate reduction would be achieved by reducing benefits immediately for all new retirees by over 10 percent; this would create a huge notch. Our analysis shows that a 30-year phase-in should produce incremental annual reductions that would be relatively small and avoid significant notches. In contrast, longer phase-in periods would require deeper ultimate reductions. In addition, we feel it is appropriate to delay the first year of the benefit reductions for a few years because those within a few years of retirement would not have adequate time to adjust their retirement planning if the reductions applied immediately. The Maintain Tax Rates (MTR) benchmark in the 1994-96 Advisory Council Report also provided for a similar delay. Finally, the timing of any policy changes in a benchmark scenario should be consistent with the proposals against which the benchmark is compared. The analysis of any proposal assumes that the proposal is enacted, usually within a few years. Consistency requires that any benchmark also assume enactment of the benchmark policy in the same time frame. Some analysts have suggested using a benchmark scenario in which Congress does not act at all and the trust funds become exhausted. However, such a benchmark assumes that no action is taken while the proposals against which it is compared assume that action is taken, which is inconsistent. It also seems unlikely that a policy enacted over the next few years would wait to reduce benefits until the trust funds are exhausted; such a policy would result in sudden, large benefit reductions and create substantial inequities across generations. When workers retire, become disabled, or die, Social Security uses their lifetime earnings records to determine each worker's PIA, on which the initial benefit and auxiliary benefits are based. The PIA is the result of two elements—the Average Indexed Monthly Earnings (AIME) and the benefit formula. The AIME is determined by taking the lifetime earnings record, indexing it, and taking the average of the highest 35 years of indexed wages. To determine the PIA, the AIME is then applied to a step-like formula, shown here for 2004: PIA = 90% × AIME1 + 32% × AIME2 + 15% × AIME3, where AIME1 is the portion of AIME up to $612, AIME2 is the portion of AIME over $612 and up to $3,689, and AIME3 is the portion of AIME over $3,689. All of our benefit-reduction benchmarks are variations of changes in PIA formula factors. Proportional reduction: Each formula factor is reduced annually by subtracting a constant proportion of that factor's value under current law, resulting in a constant percentage reduction of currently promised benefits for everyone. That is, f(i,t) = f(i,t-1) - x × f(i,CL), where f(i,t) represents the 3 PIA formula factors (i = 1, 2, 3) in year t, f(i,CL) is that factor's value under current law, and x = the constant proportional formula factor reduction. The value of x is calculated to achieve 75-year solvency, given the chosen phase-in period and scope of reductions. The formula for this reduction specifies that the proportional reduction is always taken as a proportion of the current law factors rather than the factors for each preceding year. This maintains a constant rate of benefit reduction from year to year.
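As a concrete illustration of the 2004 formula and the proportional reduction just described, the sketch below applies the 2004 bend points and shows how a cumulative 10 percent proportional cut lowers the three factors by 9.0, 3.2, and 1.5 percentage points. The annual reduction rate x and the example AIME amounts are assumed values used only to show the mechanics; this is not the scoring performed by SSA's actuaries.

```python
# Illustrative sketch of the 2004 PIA formula and the proportional factor
# reduction; the reduction rate and AIME amounts below are hypothetical.

BEND_POINTS = (612, 3689)                 # 2004 monthly bend points
CURRENT_LAW_FACTORS = (0.90, 0.32, 0.15)


def pia(aime, factors=CURRENT_LAW_FACTORS):
    """Apply the formula factors to the portion of AIME in each segment."""
    first, second = BEND_POINTS
    segments = (min(aime, first),
                max(min(aime, second) - first, 0),
                max(aime - second, 0))
    return sum(f * s for f, s in zip(factors, segments))


def proportional_factors(years_phased_in, x):
    """Cut each factor by x times its current-law value for every phase-in
    year, so the cumulative cut is the same percentage of each factor."""
    return tuple(f * (1 - x * years_phased_in) for f in CURRENT_LAW_FACTORS)


# With an assumed x of 1 percent per year, ten phase-in years produce a
# cumulative 10 percent cut: factors of 0.81, 0.288, and 0.135.
reduced = proportional_factors(years_phased_in=10, x=0.01)
for aime in (1_000, 5_000):
    print(aime, round(pia(aime), 2), round(pia(aime, reduced), 2))
```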
In contrast, taking the reduction as a proportion of each preceding year's factors implies a deceleration of the benefit reduction over time because each preceding year's factors get smaller with each reduction. To achieve the same level of 75-year solvency, this would require a greater proportional reduction in earlier years because of the smaller reductions in later years. The proportional reduction hits lower earners hard because the constant x percent of the higher formula factors results in a larger percentage point reduction over the lower earnings segments of the formula. For example, in a year when the cumulative size of the proportional reduction has reached 10 percent, the 90 percent factor would then have been reduced by 9 percentage points, the 32 percent factor by 3.2 percentage points, and the 15 percent factor by 1.5 percentage points. As a result, earnings in the first segment of the benefit formula would be replaced at 9 percentage points less than the current formula, while earnings in the third segment of the formula would be replaced at only 1.5 percentage points less than the current formula. Hypothetical-account reduction: Each formula factor is reduced by annually subtracting a constant amount that is the same for all factors in all years. That is, f(i,t) = f(i,t-1) - y, where y = the constant formula factor reduction. The value of y is calculated to achieve 75-year solvency, given the chosen phase-in period and scope of reductions. This reduction results in equal percentage point reductions in the formula factors, by definition, and subjects earnings across all segments of the PIA formula to the same reduction. Therefore, it avoids hitting lower earners as hard as the proportional reduction. We call this a hypothetical-account reduction because it has the same effect as a benefit reduction based on using a hypothetical account. In fact, we developed this benchmark first using a hypothetical-account approach and then discovered it can be reduced to a simple change in the PIA formula. Hypothetical-account calculations have become a common way to offset benefits under individual account proposals, such as those by the President's Commission to Strengthen Social Security. Such proposals reduce Social Security's defined benefit to reflect the fact that contributions have been diverted from the trust funds into the individual accounts. The account contributions are accumulated in a hypothetical account at a specified rate of return and then converted to an annuity value. We used a hypothetical-account offset in our 1990 analysis of a partial privatization proposal. In that analysis, we were charged with finding a benefit reduction that would leave the redistributive effects of the program unchanged while allowing a diversion of 2 percentage points of contributions into individual accounts. We demonstrated the distributional neutrality of this benefit reduction by showing that if all individuals earned exactly the cohort rate of return on their individual accounts, then their income under the proposal from Social Security and the new accounts would be exactly the same as under the current system. For the purposes of developing a benefit-reduction benchmark, we applied the hypothetical-account approach even though there are no actual individual accounts. From our previous analysis, we realized a hypothetical-account approach may produce distributional effects that might in some sense be more neutral than other reduction approaches and therefore may be worth studying as an alternative.
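Before turning to the algebra in the next paragraph, a short numerical check (with hypothetical values) shows why subtracting the same amount y from every formula factor is equivalent to a benefit reduction of y times the AIME.

```python
# Hypothetical check that an equal subtraction y from each PIA formula factor
# reduces the PIA by exactly y times AIME (the segments of AIME sum to AIME).

BEND_POINTS = (612, 3689)
FACTORS = (0.90, 0.32, 0.15)


def pia(aime, factors=FACTORS):
    first, second = BEND_POINTS
    segments = (min(aime, first),
                max(min(aime, second) - first, 0),
                max(aime - second, 0))
    return sum(f * s for f, s in zip(factors, segments))


y = 0.02  # constant formula factor reduction, assumed only for illustration
reduced_factors = tuple(f - y for f in FACTORS)
for aime in (1_000, 2_500, 5_000):
    assert abs((pia(aime) - pia(aime, reduced_factors)) - y * aime) < 1e-9
```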
In effect, using it to calculate a benefit-reduction benchmark implies calculating an annuity value of the percent of payroll that represents the system's revenue shortage. As it turns out mathematically, the hypothetical-account approach to reducing benefits translates into PIA formula factor changes. Such a benefit reduction is proportional to the AIME, not to the PIA, because the contributions to a hypothetical account are proportional to earnings. Therefore, a benefit reduction based on such an account would also be proportional to earnings; that is, benefit reduction = y × AIME. Therefore, the new PIA would be PIAnew = (90% - y) × AIME1 + (32% - y) × AIME2 + (15% - y) × AIME3, where AIME1, AIME2, and AIME3 are the portions of AIME in each segment of the benefit formula. Thus, the reduction from a hypothetical account can be translated into a change in the PIA formula factors. Because this reduction can be described as subtracting a constant amount from each PIA formula factor, it is reasonably transparent. In our analysis of CSSS Model 2, we found that Model 2 had a benefit distribution that was very close to our hypothetical-account benefit-reduction benchmark. For example, households in the bottom fifth of earnings received about 13.8 percent of all lifetime benefits under Model 2, compared with 13.5 percent under the hypothetical-account benefit-reduction benchmark. In this report, we present the results using the proportional benefit-reduction benchmark because this benefit-reduction approach is more easily understood. Table 2 summarizes the features of our three benchmarks.
Social Security Reform: Analysis of a Trust Fund Exhaustion Scenario. GAO-03-907. Washington, D.C.: July 29, 2003.
Social Security and Minorities: Earnings, Disability Incidence, and Mortality Are Key Factors That Influence Taxes Paid and Benefits Received. GAO-03-387. Washington, D.C.: Apr. 23, 2003.
Social Security Reform: Analysis of Reform Models Developed by the President's Commission to Strengthen Social Security. GAO-03-310. Washington, D.C.: Jan. 15, 2003.
Social Security: Program's Role in Helping Ensure Income Adequacy. GAO-02-62. Washington, D.C.: Nov. 30, 2001.
Social Security Reform: Potential Effects on SSA's Disability Programs and Beneficiaries. GAO-01-35. Washington, D.C.: Jan. 24, 2001.
Social Security Reform: Information on the Archer-Shaw Proposal. GAO/AIMD/HEHS-00-56. Washington, D.C.: Jan. 18, 2000.
Social Security: Evaluating Reform Proposals. GAO/AIMD/HEHS-00-29. Washington, D.C.: Nov. 4, 1999.
Social Security: Issues in Comparing Rates of Return with Market Investments. GAO/HEHS-99-110. Washington, D.C.: Aug. 5, 1999.
Social Security: Criteria for Evaluating Social Security Reform Proposals. GAO/T-HEHS-99-94. Washington, D.C.: Mar. 25, 1999.
Social Security: Different Approaches for Addressing Program Solvency. GAO/HEHS-98-33. Washington, D.C.: July 22, 1998.
Social Security Financing: Implications of Government Stock Investing for the Trust Fund, the Federal Budget, and the Economy. GAO/AIMD/HEHS-98-74. Washington, D.C.: Apr. 22, 1998.
Social Security: Restoring Long-Term Solvency Will Require Difficult Choices. GAO/T-HEHS-98-95. Washington, D.C.: Feb. 10, 1998.
| Under the current Social Security benefit formula, retired workers receive benefits that equal about 50 percent of pre-retirement earnings for a low-wage worker but only about 30 percent for a relatively high-wage worker. Factors other than earnings also influence the distribution of benefits, including the program's provisions for disabled workers, spouses, children, and survivors.
Changes in the program over time also affect the distribution of benefits across generations. Social Security faces a long-term structural financing shortfall. Program changes to address that shortfall could alter the way Social Security's benefits and revenues are distributed across the population and affect the income security of millions of Americans. To gain a better understanding of the distributional effects of potential program changes, the Chairman and Ranking Minority Member of the Senate Special Committee on Aging asked us to address (1) how to define and describe "progressivity," that is, the distribution of benefits and taxes with respect to earnings level, when assessing the current Social Security system or proposed changes to it; (2) what factors influence the distributional effects of the current Social Security program; and (3) what would be the distributional effects of various reform proposals, compared with alternative solvent baselines for the current system. Two distinct perspectives on Social Security's goals suggest different approaches to measuring "progressivity," or the distribution of benefits and taxes with respect to earnings level. Both perspectives provide valuable insights. An adequacy perspective focuses on benefit levels and how well they maintain pre-entitlement living standards. An equity perspective focuses on rates of return and other measures relating lifetime benefits to contributions. Both perspectives examine how their measures are distributed across earnings levels. However, equity measures take all benefits and taxes into account, which is difficult for reform proposals that rely on general revenue transfers because it is unclear who pays for those general revenues. The Social Security program's distributional effects reflect both program features and demographic patterns among its recipients. In addition to the benefit formula, disability benefits favor lower earners because disabled workers are more likely to be lower lifetime earners. In contrast, household patterns reduce the system's tilt toward lower earners, for example, when lower earners have high-earner spouses. The advantage for lower earners is also diminished by the fact that they may not live as long as higher earners and therefore would get benefits for fewer years on average. Proposals to alter the Social Security program would have different distributional effects, depending on their design. Model 2 of the President's Commission to Strengthen Social Security proposes new individual accounts, certain benefit reductions for all beneficiaries, and certain benefit enhancements for selected low earners and survivors. According to our simulations, the combined effect could result in lower earners receiving a greater share of all benefits than promised or funded under the current system if all workers invest in the same portfolio. |
The space shuttle is the world's first reusable space transportation system. It consists of a reusable orbiter with three main engines, two partially reusable solid rocket boosters, and an expendable external fuel tank. Since it is the nation's only launch system capable of carrying people to and from space, the shuttle's viability is important to NASA's other space programs, such as the International Space Station. NASA operates four orbiters in the shuttle fleet. Space systems are inherently risky because of the technology involved and the complexity of their activities. For example, thousands of people perform about 1.2 million separate procedures to prepare a shuttle for flight. NASA has emphasized that the top priority for the shuttle program is safety. The space shuttle's workforce shrank from about 3,000 to about 1,800 full-time equivalent employees from fiscal year 1995 through fiscal year 1999. A major element of this workforce reduction was the transfer of shuttle launch preparation and maintenance responsibilities from the government and multiple contractors to a single private contractor. NASA believed that consolidating shuttle operations under a single contract would allow it to reduce the number of engineers, technicians, and inspectors directly involved in the day-to-day oversight of shuttle processing. However, the agency later concluded that these reductions caused shortages of required personnel to perform in-house activities and maintain adequate oversight of the contractor. Since the shuttle's first flight in 1981, the space shuttle program has developed and incorporated many modifications to improve performance and safety. These include a super lightweight external tank, cockpit display enhancements, and main engine safety and reliability improvements. In 1994, NASA stopped approving additional upgrades, pending the potential replacement of the shuttle with another reusable launch vehicle. NASA now believes that it will have to maintain the current shuttle fleet until at least 2012, and possibly through 2020. Accordingly, it has established a development office to identify and prioritize upgrades to maintain and improve shuttle operational safety. Last year, we reported that several internal studies showed that the shuttle program's workforce had been negatively affected by downsizing. These studies concluded that the existing workforce was stretched thin to the point where many areas critical to shuttle safety—such as mechanical engineering, computer systems, and software assurance engineering—were not sufficiently staffed by qualified workers. (Appendix I identifies all of the key areas that were facing staff shortages.) Moreover, the workforce was showing signs of overwork and fatigue. For example, indicators on forfeited leave, absences from training courses, and stress-related employee assistance visits were all on the rise. Lastly, the program's demographic shape had changed dramatically. Throughout the Office of Space Flight, which includes the shuttle program, there were more than twice as many workers over 60 years old as under 30 years old. This condition clearly jeopardized the program's ability to hand off leadership roles to the next generation. According to NASA's Associate Administrator for the Office of Space Flight, the agency faced significant safety and mission success risks because of workforce issues.
This was reinforced by NASA’s Aerospace Safety Advisory Panel, which concluded that workforce problems could potentially affect flight safety as the shuttle launch rate increased. NASA subsequently recognized the need to revitalize its workforce and began taking actions toward this end. In October 1999, NASA’s Administrator directed the agency’s highest-level managers to consider ways to reduce workplace stress. The Administrator later announced the creation of a new office to increase the agency’s emphasis on health and safety and included improved health monitoring as an objective in its fiscal year 2001 performance plan. Finally, in December 1999, NASA terminated its downsizing plans for the shuttle program and initiated efforts to begin hiring new staff. Following the termination of its downsizing plans, NASA and the Office of Management and Budget conducted an overall workforce review to examine personnel needs, barriers to achieving proper staffing levels and skill mixes, and potential reforms to help address the agency’s long-term requirements. In performing this review, NASA used GAO’s human capital self-assessment checklist. The self-assessment framework provides a systematic approach for identifying and addressing human capital issues and allows agency managers to (1) quickly determine whether their approach to human capital supports their vision of who they are and what they want to accomplish and (2) identify those policies that are in particular need of attention. The checklist follows a five-part framework that includes strategic planning, organizational alignment, leadership, talent, and performance culture. NASA has taken a number of actions this year to regenerate its shuttle program workforce. Significantly, NASA’s current budget request projects an increase of more than 200 full-time equivalent staff for the shuttle program through fiscal year 2002—both new hires and staff transfers. According to NASA, from the beginning of fiscal year 2000 through July 2001, the agency had actually added 191 new hires and 33 transfers to the shuttle program. These new staff are being assigned to areas critical to shuttle safety—such as project engineering, aerospace vehicle design, avionics, and software—according to NASA. As noted earlier, appendix I provides a list of critical skills where NASA is addressing personnel shortages. NASA is also focusing more attention on human capital management in its annual performance plan. The Government Performance and Results Act requires a performance plan that describes how an agency’s goals and objectives are to be achieved. These plans are to include a description of the (1) operational processes, skills, and technology and (2) human, capital and information resources required to meet those goals and objectives. On June 9, 2000, the President directed the heads of all federal executive branch agencies to fully integrate human resources management into agency planning, budget, and mission evaluation processes and to clearly state specific human resources management goals and objectives in their strategic and annual performance plans. In its Fiscal Year 2002 Performance Plan, NASA describes plans to attract and retain a skilled workforce. The specifics include the following: Developing an initiative to enhance NASA’s recruitment capabilities, focusing on college graduates. Cultivating a continued pipeline of talent to meet future science, math, and technology needs. Investing in technical training and career development. 
Supplementing the workforce with nonpermanent civil servants, where it makes sense. Funding more university-level courses and providing training in other core functional areas. Establishing a mentoring network for project managers. We will provide a more detailed assessment of the agency’s progress in achieving its human capital goals as part of our review of NASA’s Fiscal Year 2002 Performance Plan requested by Senator Fred Thompson. Alongside these initiatives, NASA is in the process of responding to a May 2001 directive from the Office of Management and Budget on workforce planning and restructuring. The directive requires executive agencies to determine (1) what skills are vital to accomplishing their missions, (2) how changes expected in the agency’s work will affect human resources, (3) how skill imbalances are being addressed, (4) what challenges impede the agency’s ability to recruit and retain high-quality staff, and (5) what barriers there are to restructuring the workforce. NASA officials told us that they have already made these assessments. The next step is to develop plans specific to the space flight centers that focus on recruitment, retention, training, and succession and career development. If effectively implemented, the actions that NASA has been taking to strengthen the shuttle workforce should enable the agency to carry out its mission more safely. But there are considerable challenges ahead. For example, as noted by the Aerospace Safety Advisory Panel in its most recent annual report, NASA now has the difficult task of training new employees and integrating them into organizations that are highly pressured by the shuttle’s expanded flight rates associated with the International Space Station. As we stressed in our previous testimony, training alone may take as long as 2 years, while workload demands are higher than ever. The panel also emphasized that (1) stress levels among some employees are still a matter of concern; (2) some critical areas, such as information technology and electrical/electronic engineering, are not yet fully staffed; and (3) NASA is still contending with the retirements of senior employees. Officials at Johnson Space Center also cited critical skill shortages as a continuing problem. Furthermore, NASA headquarters officials stated that the stress-related effects of the downsizing remain in the workforce. Addressing these particular challenges, according to the Aerospace Safety Advisory Panel, will require immediate actions, such as expanded training at the Centers, as well as a long-term workforce plan that will focus on retention, recruitment, training, and succession and career development needs. The workforce problems we identified during our review are not unique to NASA. As our January 2001 Performance and Accountability Series reports made clear, serious federal human capital shortfalls are now eroding the ability of many federal agencies—and threatening the ability of others—to economically, efficiently, and effectively perform their missions. 
As the Comptroller General recently stated in testimony, the problem lies not with federal employees themselves, but with the lack of effective leadership and management, along with the lack of a strategic approach to marshaling, managing, and maintaining the human capital needed for government to discharge its responsibilities and deliver on its promises. To highlight the urgency of this governmentwide challenge, in January 2001, we added strategic human capital management to our list of federal programs and operations identified as high risk. Our work has found human capital challenges across the federal government in several key areas. First, high-performing organizations establish a clear set of organizational intents—mission, vision, core values, goals and objectives, and strategies—and then integrate their human capital strategies to support these strategic and programmatic goals. However, under downsizing, budgetary, and other pressures, agencies have not consistently taken a strategic, results-oriented approach to human capital planning. Second, agencies do not have the sustained commitment from leaders and managers needed to implement reforms. Such commitment can be difficult to achieve in the face of cultural barriers to change and high levels of turnover among management ranks. Third, agencies have difficulties replacing the loss of skilled and experienced staff, and in some cases, filling certain mission-critical occupations because of increasing competition in the labor market. Fourth, agencies lack a crucial ingredient found in successful organizations: organizational cultures that promote high performance and accountability. At this time last year, NASA planned to develop and begin equipping the shuttle fleet with a variety of safety and supportability upgrades, at an estimated cost of $2.2 billion. These upgrades would affect every aspect of the shuttle system, including the orbiter, external tank, main engine, and solid rocket booster. Last year, we reported that NASA faced a number of programmatic and technical challenges in making these upgrades. First, several upgrade projects had not been fully approved, creating uncertainty within the program. Second, while NASA had begun to establish a dedicated shuttle safety upgrade workforce, it had not fully determined its needs in this area. Third, the shuttle program was subject to considerable scheduling pressure, which introduced the risk of unexpected cost increases, funding problems, and/or project delays. Specifically, the planned safety upgrade program could require developing and integrating at least nine major improvements in 5 years—possibly making it the most aggressive modification effort ever undertaken by the shuttle program. At the same time, technical requirements for the program were not yet fully defined, and upgrades were planned to coincide with the peak assembly period of the International Space Station. Since then, NASA has made some progress but has only partially addressed the challenges we identified last year. Specifically, NASA has started to define and develop some specific shuttle upgrades. For example, requirements for the cockpit avionics upgrade have been defined. Also, Phase I of the main engine advanced health monitoring system is in development, and friction stir welding on the external tank is being implemented. In addition, according to Shuttle Development Office officials, staffing for the upgrade program is adequate.
Since our last report, these officials told us that the Johnson Space Center has added about 70 people to the upgrade program, while the Marshall Space Flight Center has added another 50 to 60 people. We did not assess the quality or sufficiency of the added staff, but according to the development office officials, the workforce’s skill level has improved to the point where the program has a “good” skill base. Nevertheless, NASA has not yet fully defined its planned upgrades. The studies on particular projects, such as developing a crew escape system, are not expected to be done for some time. Moreover, our previous concerns with the technical maturity and potential cost growth of particular projects have proven to be warranted. For example, the implementation of the electric auxiliary power unit has been delayed indefinitely because of technical uncertainties and cost growth. Also, the estimated cost of Phase II of the main engine advanced health monitoring system has almost doubled, and NASA has canceled the proposed development of a Block III main engine improvement because of technological, cost, and schedule uncertainties. Compounding the challenges that NASA is facing in making its upgrades is the uncertainty surrounding its shuttle program. NASA is attempting to develop alternatives to the space shuttle, but it is not yet clear what these alternatives will be. We recently testified before the Subcommittee on Space and Aeronautics, House Committee on Science on the agency’s Space Launch Initiative. This is a risk reduction effort aimed at enabling NASA and industry to make a decision in the 2006 time frame on whether the full-scale development of a reusable launch vehicle can be undertaken. However, as illustrated by the difficulties NASA experienced with another reusable launch vehicle demonstrator—the Lockheed Martin X-33—an exact time frame for the space shuttle’s replacement cannot be determined at this time. Consequently, shuttle workforce and upgrade issues will need to be considered without fully knowing how the program will evolve over the long run. In conclusion, NASA has made a start at addressing serious workforce problems that could undermine space shuttle safety. It has also begun undertaking the important task of making needed safety and supportability upgrades. Nevertheless, the challenges ahead are significant—particularly because NASA is operating in an environment of uncertainty and it is still contending with the effects of its downsizing effort. As such, it will be exceedingly important that NASA sustain its attention and commitment to making space shuttle operations as safe as possible. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or Members of the Subcommittee may have. For further contact regarding this testimony, please contact Allen Li at (202) 512-4841. Individuals making key contributions to this testimony included Jerry Herley, John Gilchrist, James Beard, Fred Felder, Vijay Barnabas, and Cristina Chaplain. | In August 2000, the National Aeronautics and Space Administration's (NASA) space shuttle program was at a critical juncture. Its workforce had declined significantly since 1995, its flight rate was to double to support the assembly of the International Space Station, and costly safety upgrades were planned to enhance the space shuttle's operation until at least 2012. Workforce reductions were jeopardizing NASA's ability to safely support the shuttle's planned flight rate. 
Recognizing the need to revitalize the shuttle's workforce, NASA ended its downsizing plans for the shuttle program and began to develop and equip the shuttle fleet with various safety and supportability upgrades. NASA is making progress in revitalizing the shuttle program's workforce. NASA's current budget request projects an increase of more than 200 full-time equivalent staff through fiscal year 2002. NASA has also focused more attention on human capital management in its annual performance plan. However, considerable challenges still lie ahead. Because many of the additional staff are new hires, they will need considerable training and will need to be integrated into the shuttle program. Also, NASA still needs to fully staff areas critical to shuttle safety; deal with critical losses due to retirements in the coming years; and, most of all, sustain management attention to human capital reforms. Although NASA is making strides in revitalizing its workforce, its ability to implement safety upgrades in a timely manner is uncertain. |
Many federal agencies, as well as state and local entities, play a part in the U.S. food safety regulatory system. While the federal agencies regulate the food production chain from farms to food manufacturers, state and local agencies primarily regulate food safety in retail food establishments. Table 1 summarizes the food safety responsibilities of the federal agencies. In addition to their established food safety and quality responsibilities, following the events of September 11, 2001, these agencies began to address the potential for deliberate contamination of agriculture and food products, with the Department of Homeland Security providing overall coordination on how to protect the food supply from deliberate contamination. The agencies' food safety authorities stem from 30 principal laws related to food safety. For a listing of the principal food safety laws, see appendix IV. As a result of this division of responsibility, the federal food safety system is fragmented. In some instances, agencies perform nearly identical activities—both USDA and FDA inspect food-processing facilities that produce foods under the regulatory responsibility of each agency, referred to as dual jurisdiction establishments (DJE). DJEs are those that manufacture or process food products that contain ingredients regulated by more than one federal agency. For example, both USDA and FDA inspect facilities that make canned baked beans with 2 percent or more bacon (a USDA-regulated food) and canned baked beans without meat (an FDA-regulated food). While these agencies each perform food safety inspections, the frequency of their inspections varies. Generally, USDA inspectors have a more regular presence at DJEs. As another example, both FDA and NMFS inspect seafood-processing facilities, although NMFS's inspections are conducted at the request of the facility through a contract between the facility and NMFS. Four agencies—USDA, FDA, EPA, and NMFS—are involved in key program functions related to food safety—including inspection and enforcement, research, risk assessment, education and outreach, rulemaking and standard setting, surveillance and monitoring, food security, and administration. Examples of activities under these functions include: inspecting domestic food-processing facilities and imported food items at U.S. ports of entry; researching foodborne chemical and biological contaminants, as well as conducting risk assessments of foodborne physical, chemical, and biological contaminants to inform rulemaking, allocation of agency resources, or risk communication; developing and distributing guidance to consumers and industry related to food safety topics such as appropriate food temperatures; and issuing/promulgating HACCP, sanitation, and good manufacturing practices regulations. In fiscal year 2003, the four federal agencies spent nearly $1.7 billion on food safety-related activities. USDA and FDA are responsible for most federal food safety resources (as fig. 1 shows). These agencies spent about $921 million (55 percent) in fiscal year 2003 on inspection/enforcement functions, including inspections of domestic and imported food (as fig. 2 shows). (USDA's Food Safety and Inspection Service did not provide surveillance expenditure data.) The agencies' expenditures vary by program function (as shown in fig. 3). For example, USDA's inspection/enforcement expenditures made up almost three-quarters of the total spent by these agencies for that program function.
That is, the majority of federal food safety inspection expenditures are directed toward USDA’s programs for ensuring the safety of meat, poultry, and egg products. In contrast, FDA accounts for more than half of the agencies’ expenditures for food safety education/outreach programs. National Research Council and Institute of Medicine, Ensuring Safe Food: From Production to Consumption (Washington, D.C.: 1998). As a result of the multiple laws and regulations governing food safety, several federal agencies conduct activities—inspections of domestic and foreign foods, training, research, risk assessment, education, and rulemaking—that can serve overlapping, if not identical, purposes. As a result, federal agencies spend resources on similar food safety activities. Table 2 illustrates similar activities conducted by the four federal agencies we examined. These activities, such as laboratory analysis and risk assessment, may be product specific. USDA and FDA spent $884 million in fiscal year 2003 on inspection and enforcement activities—roughly 60 percent of their total food safety expenditures. Neither USDA nor FDA has estimated the total costs associated with inspecting jointly regulated facilities. FDA estimated that it spends about $4,000 per inspection. As figure 4 shows, USDA and FDA both inspect 1,451 known DJEs located across the country. USDA and FDA inspect these establishments with different frequencies. For example, USDA inspects a canning facility at least daily if it produces food containing meat and poultry. If the facility also produces canned soups containing beans or seafood, FDA inspects it every 1 to 5 years. Because of their split jurisdiction, each agency is responsible for inspecting different food products at these facilities; but the agencies’ inspections have common key elements, including verifying the facilities’ compliance with sanitation standards (as defined by USDA) or good manufacturing practices (as defined by FDA). For example, both agencies’ inspectors verify that facilities do not have rodent or insect infestations. Figure 5 summarizes some of the common elements of USDA and FDA inspections. At jointly regulated facilities, USDA and FDA inspectors also verify that HACCP systems are in place. In these instances, each agency verifies that the facility has created and implemented a HACCP plan specific to the products that the agency regulates. The regulations require the facility to maintain separate HACCP plans for each product and to develop separate analyses of critical control points and separate strategies to mitigate or eliminate food contaminants. For example, at a facility we visited that produces both crab cakes and breaded chicken, the manager is required to maintain a seafood HACCP plan and a poultry HACCP plan. The manager said that although both plans have similar elements, each agency’s inspectors expect different levels of detail for the plans—something the manager finds confusing and difficult to comply with. USDA and FDA have new tools that could help reduce overlaps in inspections. Under the Bioterrorism Act, FDA could allow USDA inspectors, who are present every day at these jointly regulated facilities, to inspect FDA-regulated food. In doing so, FDA could reduce overlapping inspections and redirect resources to other facilities for which it has sole jurisdiction. 
While they did not disagree in principle with the benefits of such an arrangement, FDA officials said that the savings would be somewhat offset because FDA would likely have to reimburse USDA for the costs of those inspections. FDA officials said that they do not currently plan to pursue this option and have not conducted any analyses of the costs or savings associated with authorizing USDA officials to conduct FDA inspections at these facilities. USDA officials commented that their inspectors are fully occupied and that they would need to be trained before conducting joint inspections. Overlaps also occur at seafood-processing facilities that both FDA and NMFS inspect. NMFS currently inspects approximately 275 domestic seafood facilities that FDA also inspects. NMFS safety and sanitation inspections, as well as other product quality inspections, are conducted on a fee-for-service basis. NMFS inspectors verify sanitation procedures, HACCP compliance, and good manufacturing practices—many of the same components of an FDA inspection. Although NMFS and FDA seafood safety inspections are similar, FDA does not take into account whether NMFS has already inspected a particular facility when determining how frequently its inspectors should visit that same facility. FDA officials said they do not rely on NMFS inspections for two reasons. First, FDA officials believe that NMFS has a potential conflict of interest because companies pay NMFS for these inspections and that, therefore, as a regulatory agency, FDA should not rely on them. NMFS officials disagree with FDA's viewpoint, stating that their fee-for-service structure does not affect their ability to conduct objective inspections. NMFS officials said that, when NMFS inspectors find noncompliance with FDA regulations, they refer companies to FDA and/or to state regulatory authorities. NMFS officials stated that companies that contract with NMFS need the agency's certification in order to satisfy their customers. Second, it is difficult for FDA to determine which facilities NMFS inspects at any given time because NMFS inspection schedules fluctuate often, according to changes in NMFS's contracts with individual companies. If FDA were to recognize NMFS inspection findings in targeting its resources, it could decrease or eliminate inspections at facilities that NMFS inspectors find are in compliance with sanitation and HACCP regulations. Both USDA and FDA maintain inspectors at 18 U.S. ports of entry to inspect imported food products. In fiscal year 2003, USDA spent almost $16 million on these inspections, and FDA spent more than $115 million. FDA spends about 7 times as much on import inspections because the agency is responsible for about 80 percent of the U.S. food supply, including imports from about 250 countries, compared with USDA's responsibility for inspecting imports that come from 34 countries. However, the agencies are not leveraging inspection resources at these ports. USDA officials told us that FDA-regulated imported foods are sometimes stored in USDA-approved inspection facilities at these ports. USDA inspectors have no authority to inspect FDA-regulated products, although USDA inspectors are present at these ports more often than FDA inspectors. As a result, some FDA-regulated products may remain at the facility for some time awaiting inspection. FDA has the authority to commission federal officials to conduct inspections at jointly regulated facilities.
In 2003, FDA exercised this authority by entering into an agreement with the Department of Homeland Security's Customs and Border Protection (CBP) so that CBP officials can help FDA inspect products at ports and other facilities subject to CBP jurisdiction. FDA could also leverage USDA's efforts to ensure the safety of imported food by using information that USDA compiles in its determinations that exporting countries' food safety systems are equivalent to the U.S. system. Under the Meat and Poultry Products Inspection Acts, the Secretary of Agriculture is required to certify that countries exporting meat and poultry to the United States have equivalent food safety systems for producing those products. That information could inform FDA's decision making about which countries to visit for its overseas inspections. Currently, FDA visits foreign countries to inspect individual food-processing firms. In 2004, USDA determined that the food safety systems of 34 countries it evaluated were equivalent to that of the United States. A substantial portion of what USDA evaluates—sanitation procedures and compliance with HACCP rules—could be useful to FDA in deciding what countries to visit when conducting inspections of foreign firms that export products under its jurisdiction. FDA officials told us, however, that the agency does not use that information when deciding which countries to visit. As a result, FDA at times conducts inspections in the same countries that USDA has evaluated. For example, USDA and FDA each visited Brazil, Costa Rica, Germany, Hungary, Mexico, and Canada last year. USDA spent almost $500,000, and FDA almost $5 million, on foreign country visits in fiscal year 2003. USDA and FDA officials said these agencies do not share information from their overseas visits because their different statutory responsibilities make such information of little advantage. That is, USDA's focus during foreign country visits is to evaluate the meat and poultry inspection systems to determine if they are equivalent to that of the United States, whereas FDA focuses its visits on specific companies that produce food under the agency's jurisdiction. USDA and FDA provide similar training to their inspectors. For example, both agencies train inspectors on sanitation requirements, good manufacturing practices, and HACCP. Agency officials agreed that the training programs have a common foundation but pointed out that there are differences, as each agency applies these principles to the specific foods it regulates. USDA spent $7.8 million, while FDA spent about $1.5 million, during fiscal year 2003 to train their food inspection personnel. FDA's comparatively lower training costs reflect a contractual agreement with a private firm that has produced an online curriculum. This curriculum includes over 106 courses that address topics common to both USDA and FDA—ranging from foodborne pathogens, HACCP requirements, and good manufacturing practices, to courses that are specific to FDA's regulations and enforcement authorities. Another agency—NMFS—uses 74 of these online courses to train its own seafood inspectors. The benefits NMFS officials cited include accessibility to training materials at times other than when inspectors are "on duty" and no charge to NMFS for the training materials.
USDA officials said they are exploring the possibility of entering into an agreement with the company that developed FDA's online curriculum to allow USDA inspectors access to some of this training. In addition to the costs associated with developing inspector training programs, USDA estimates that it spends an average of about $900,000 per year on training-related travel, not including other costs related to replacing inspectors in food-processing facilities while they participate in the training. A joint USDA-FDA training program could reduce duplication in developing training materials and in providing instruction, and potentially achieve some savings. Other federal agencies have consolidated training activities that have a common purpose and similar content. For example, in 1970, the Consolidated Federal Law Enforcement Training Center (the Center) brought together the training programs of 75 federal law enforcement agencies that had maintained separate programs. Specifically, the Center provides standardized programs for criminal investigators and uniformed police officers across the federal government. While standardizing basic training, the Center also offers specialized courses for individual agencies to address their particular needs. In addition, according to the Center, the interaction with students from other agencies promotes greater understanding of other agencies' missions and duties, and therefore provides for a more cooperative federal law enforcement system. We identified overlapping activities in the areas of food safety research and risk assessment, consumer and industry education, and rulemaking. USDA and FDA participate in similar food safety research efforts; that is, both agencies collect and analyze food samples for chemical and biological contaminants. During fiscal year 2003, the agencies spent over $245 million on these types of activities. For example, because of the agencies' split jurisdiction, both USDA and FDA maintain separate laboratory capability to sample and analyze the foods that they regulate for chemical contaminants such as pesticides and dioxins. EPA uses USDA and FDA data to inform its risk assessments of human exposure to pesticides. Specifically, in 2003, FDA analyzed 11,331 food samples for pesticides and chemical contaminants to help estimate the dietary intake of pesticide residues. In 2000, the most recent year for which data are available, USDA's Food Safety and Inspection Service (FSIS) analyzed over 33,000 samples of meat, poultry, and egg products. In addition, USDA's Agricultural Marketing Service's (AMS) Pesticide Data Program samples and tests commodities across the food spectrum to help inform EPA in making decisions on acceptable levels of pesticide residues (tolerances). According to EPA officials, USDA data are their primary source of information. However, FDA provides additional information on a greater range of foods and chemicals that EPA also uses to form its decisions. EPA officials said that the overlap in data collection and analysis adds value because USDA's data come from a well-controlled survey of food samples taken at the wholesale level, and FDA's data help fill in the gaps with samples of food at different points in the distribution chain. USDA and FDA also both conduct risk assessments of foodborne pathogens that can contaminate food products under their respective jurisdictions. For example, in 2000, USDA released a draft risk assessment for Escherichia coli (E. coli) O157:H7 in ground beef, and
FDA released a draft risk assessment for Vibrio parahaemolyticus in raw molluscan shellfish. The agencies also conduct joint risk assessments when addressing the same pathogen or the same food product. In the case of eggs, regulatory responsibility shifts as eggs make their way from the farm to the table, with FDA being primarily responsible for the safe production and processing of eggs still in the shell (known as shell eggs), and USDA being responsible for food safety at the processing plants where eggs are broken to produce egg products. In 1996, the agencies began work on a joint risk assessment for salmonella in eggs to evaluate the risk to human health of salmonella in shell eggs and in liquid egg products and to identify potential risk reduction strategies. In 1998, USDA and FDA jointly published an advance notice of proposed rulemaking, based on this risk assessment, to identify farm-to-table actions that would decrease the food safety risks associated with eggs. However, the agencies have not issued a joint rule to help eliminate foodborne illnesses caused by salmonella in eggs. In 2004, FDA issued a proposed rule that would require shell egg producers to implement measures to help prevent salmonella from contaminating eggs on the farm. Although USDA released a new draft risk assessment in 2004, the department has not yet issued a proposed rule to help prevent salmonella from contaminating egg products. Both agencies have also committed personnel resources to a World Health Organization effort to conduct a risk assessment of salmonella in eggs and broiler chickens. USDA, FDA, and EPA conduct education and outreach on food safety and spent more than $107 million on these activities in fiscal year 2003. In some cases, these agencies' efforts overlap. For example, these agencies create and distribute educational materials to consumers, or host hotlines or other forums that address food safety issues. In some cases, these efforts target the same food safety topic. For example, USDA, FDA, and EPA each develop consumer guidance on chemical food contaminants such as pesticides, dioxin, and mercury. In addition, USDA and FDA each develop similar food safety guidance for (1) consumers, on general topics, such as cooking and chilling food, and safe handling practices, such as using a food thermometer, and (2) industry, on their HACCP regulations and sanitation and good manufacturing practices. These overlapping efforts, caused in part by the agencies' divisions in jurisdiction, can be confusing to both consumers and industry representatives. For example, USDA officials said they receive calls to their consumer hotline from consumers and industry representatives about FDA-regulated food products. These agencies have made some efforts to reduce overlaps in their consumer education activities. For example, USDA and FDA developed the "Fight Bac" program to educate the public about safe food handling to help reduce foodborne illness. In addition, for the first time in 2004, FDA and EPA issued a joint consumer advisory about mercury in fish and shellfish for women who might become pregnant, who are pregnant, or who are nursing, as well as for young children. However, the joint advisory recommends different consumption levels, depending on whether the fish is commercially caught (regulated by FDA) or recreationally caught (regulated by EPA). Specifically, for fish purchased from a store, the guidance recommends up to 12 ounces per week.
However, if the fish is recreationally caught, the guidance recommends that women consume up to 6 ounces per week. EPA said that the consumption guidance differs due to different mercury levels in recreationally and commercially caught fish.

As the principal agencies responsible for food safety, both USDA and FDA engage in rulemaking and standard-setting activities under their respective statutes. While the rulemakings USDA and FDA undertake vary under those statutes, there are some similarities. For example, USDA and FDA promulgated separate HACCP regulations for industry, but both agencies' regulations require food processors to incorporate certain sanitation processes into their HACCP systems. The HACCP rules are based on the same model, though applied to the different food products each agency regulates. For example, FDA requires seafood-processing facilities to address contaminants that are likely to be found in seafood, such as Vibrio vulnificus (a bacterium found in raw seafood, particularly in oysters). USDA requires all meat- and poultry-processing facilities to address contaminants, such as E. coli and salmonella, that are likely to be found in these products. While quantifying the resources dedicated to rulemaking activities is difficult for the agencies, the costs are significant. FDA estimates that it spent between $650,000 and almost $1 million to issue its seafood HACCP rule in 1995. NMFS officials said they spent $5 million to support the FDA rule by developing a model seafood surveillance project. USDA was unable to calculate how much it spent to develop its meat and poultry HACCP rule. In some cases, the agencies collaborate in the early stages of rulemaking. For example, USDA and FDA participated in a joint Listeria monocytogenes (listeria) risk analysis of ready-to-eat foods. However, the agencies will promulgate separate listeria rules for the products under their jurisdiction.

The principal food safety agencies—USDA, FDA, EPA, and NMFS—have entered into 71 interagency agreements to coordinate the full range of their food safety activities and to support their mission to protect the public health. About one-third of the agreements include as objectives the coordination of activities, reductions in overlaps, and/or the leveraging of resources. The agencies' ability to take full advantage of these agreements is hampered by the absence of adequate mechanisms for tracking them and, in some cases, by ineffective implementation of the provisions of these agreements. Of the 71 interagency agreements we identified, the largest proportion (43 percent) reflects the agencies' agreement to increase cooperation on inspection and enforcement activities. The agencies also spent the largest share of their food safety resources on these activities (as fig. 2 showed). Other agreements address activities such as education/outreach, food security, and monitoring/surveillance related to food safety and quality (as shown in fig. 6). Furthermore, 24 agreements specifically highlight the need to reduce duplication of effort by clarifying responsibilities, reducing overlaps, and/or making efficient and effective use of resources. Appendix III provides additional information about the 71 agreements. In some instances, the agencies entered into multiple agreements to coordinate and ensure the safety of a single type of food.
For example, we identified seven agreements that focus on seafood inspection and enforcement activities; signatories to one or more of these agreements include USDA, FDA, NMFS, the Department of Defense, and the Interstate Shellfish Sanitation Conference (ISSC). In addition to these formal interagency agreements, the agencies cooperate through other mechanisms such as the Foodborne Diseases Active Surveillance Network (also known as FoodNet) that USDA, FDA, and the Centers for Disease Control and Prevention use to help track the incidence of foodborne illness and track the effectiveness of food safety programs in reducing foodborne illness. Agency officials also noted that they informally cooperate and collaborate with one another on a regular basis. USDA, FDA, EPA, and NMFS do not have adequate mechanisms to track interagency food safety agreements. Consequently, the agencies could not readily identify the agreements that they have entered into, could not determine which agreements are still in effect, and were unable to determine which are still needed. As a result, we could not determine which agreements are currently being used by the agencies. Agency officials did not agree on the number of food safety-related agreements they have entered into, and only 7 of the 71 agreements that we ultimately compiled were identified to us by all signatory agencies. For example, FDA and EPA provided a copy of an agreement they had entered into with USDA about residues in drugs, pesticides, and environmental contaminants in foods; however, USDA officials said the agency is not party to any agreement on residues. Forty-one additional agreements were identified to us by only one of the multiple signatories. In addition, we found three agreements—through our prior work or Internet searches—that none of the agencies had identified to us. Of the agencies we reviewed, only USDA’s Animal and Plant Health Inspection Service maintained a database that allowed it to readily identify all the agreements it was party to. The other agencies did not have such databases. During the course of our review, EPA officials said that, without such tools, they had difficulty identifying the agreements they have entered into. Officials also said they are planning to develop an electronic system to identify and track these agreements. EPA said that, if developed, this system could offer one means of tracking and managing information related to, and contained in, interagency agreements. The weaknesses in tracking agreements may also affect the agencies’ ability to determine whether these agreements are still needed and whether specific provisions are still in effect. First, about one-third of the agreements we identified were created decades ago and may no longer be relevant to current needs. Technological and scientific advances have made some provisions of these agreements obsolete. Second, some agencies that were party to the agreements have ceased to exist because of internal reorganizations; but the agreements have not been modified to reflect such changes, indicating that the agencies may not be actively monitoring the status and relevance of these agreements. For example, a 1978 agreement between USDA and FDA on education programs to assist livestock and poultry producers in using animal drugs has not been modified to reflect the fact that USDA’s Science and Education Administration no longer exists. 
Also, we found that some agreements have not been updated to account for changes in a signatory’s responsibilities since they were signed. For example, NMFS officials said that the ISSC has taken on some responsibilities that once belonged to NMFS and that the 1985 agreement on shellfish-growing waters signed by FDA, EPA, NMFS, and the U.S. Department of the Interior’s Fish and Wildlife Service should be updated to reflect this change. Because the agencies spend most of their food safety resources on inspection and enforcement, we evaluated the implementation of two comprehensive inspection and enforcement agreements: one that pertains to DJEs and one that pertains to inspections of fishery products. These agreements were established to make more efficient and effective use of agency resources through improved coordination and information sharing. Although the agencies are exchanging some information as called for in the agreements, they are generally missing opportunities to make more effective and efficient use of their resources, such as leveraging inspection and enforcement resources. In 1999, USDA and FDA signed an interagency agreement to facilitate the exchange of information between the agencies about food-processing facilities that they both inspect. The agreement stated that the exchange of information will permit more efficient use of both agencies’ resources and contribute to improved public health protection. The agreement was to be the first step toward allowing USDA’s FSIS inspectors to conduct FDA’s inspections at DJEs, according to a former USDA senior food safety official who signed the agreement. The agreement was developed in response to a 1997 report by the President’s Food Safety Council, which recommended increased cooperation among agencies. Specifically, the report recommended that USDA and FDA take steps to ensure that the resources and experience of FDA and USDA’s FSIS be used as efficiently as possible to avoid duplication of effort and that the agencies consider using FSIS inspectors to conduct FDA inspections at DJEs. The report stated that because FSIS inspectors are already in these plants, they could be used to maximize use of federal resources without loss of inspection coverage for FSIS-regulated foods. In 2000, USDA and FDA evaluated the agreement’s implementation and concluded that the experience had been largely successful because the agencies learned about each other’s operations and about ways to cooperate more effectively. However, the evaluation also included recommendations to strengthen, clarify, or otherwise improve the agreement’s implementation. Among other things, the evaluation recommended that FDA provide FSIS with access to FDA’s inspection database, ensure more frequent updates to the list of jointly regulated facilities, and train inspectors on the provisions of the agreement. The evaluation cited the potential for significant resource savings over time as the agreement is implemented, particularly in personnel, administrative, and travel costs. Since the 2000 evaluation, officials at USDA and FDA said they have not again monitored the agreement’s effectiveness, nor have they implemented their own recommendations to realize resource savings. We found that the agencies are not systematically exchanging information about DJEs, as called for in the agreement. First, the agreement called for USDA and FDA to develop, maintain, and annually update a list of such establishments. 
We found that although the agencies created such a list in 1999 when the agreement was signed, they had not updated it until 2004, when we brought the matter to their attention. As a result, the agencies had great difficulty identifying the current number of DJEs to which this agreement pertains. For example, during the course of our review, USDA and FDA provided several different lists of jointly regulated establishments. The number of establishments that were listed ranged from 1,152 to 1,867. In December 2004, FDA headquarters officials provided us a list of 1,451 known DJEs, and the agency is verifying approximately 400 establishments that may be added to the list. Second, the agreement calls for the district offices of each agency to share certain findings with their counterpart district offices and for the agency receiving a finding of noncompliance to track and use that information in its program evaluation, work planning, and consideration of whether action against the facility is warranted. The agreement also calls for the receiving agency to inform the notifying agency of the disposition of the notification, including any actions that it plans or takes, within 30 days. During field visits to three USDA and FDA district offices that, together, are responsible for food safety in 13 states, we found that USDA and FDA field inspection personnel are not routinely communicating these findings of mutual concern, such as sanitation problems at facilities they jointly regulate. Nor have USDA and FDA explored the feasibility of developing a system to track and exchange information when each agency finds instances of noncompliance. As a result, work planning by each agency cannot take advantage of the other agency’s inspection findings. Because FDA inspectors visit DJEs less frequently than USDA inspectors, we believe that FDA staff could benefit from the compliance information that USDA inspectors collect. Generally, problems with a facility’s manufacturing processes or sanitation procedures affect all products produced at the establishment. As a USDA district official told us, “a rodent doesn’t distinguish between FDA-regulated products and USDA-regulated products, so a problem affecting one agency’s product is likely to affect the other agency’s product.” Third, the agreement calls for the agencies to explore the feasibility of granting each other access to appropriate computer-monitoring systems so each agency can track inspection findings. However, the agencies maintain separate databases, and the inspectors with whom we spoke continue to be largely unaware of a facility’s past history of compliance with the other agency’s regulations. Inspectors told us that compliance information might be helpful when inspecting DJEs so that they could focus attention on past violations. Fourth, the agreement calls for the agencies to develop and provide appropriate training in the inspectional techniques and processes of each agency to ensure that the contacts for each agency have an appropriate understanding of the working of the other agency. In addition, the 2000 evaluation found that more training was needed, particularly at the field level, to achieve the results of the agreement. Although USDA and FDA held 28 joint training sessions during the first year of the agreement’s implementation, no additional training has been provided since then. 
Because USDA and FDA on-site inspectors are often the first agency staff to become aware of deficiencies at a plant, effective sharing of this type of information depends upon those inspectors being adequately trained. According to agency officials, the agreement has helped agency coordination and communication, particularly when major public health concerns arise. USDA and FDA headquarters officials identified instances of major enforcement actions resulting from FDA’s notification to USDA of problems with products under its jurisdiction. For example, in one case, FDA investigators learned that a sample of chicken salad tested positive for listeria and alerted USDA, resulting in a voluntary recall. In a second case, USDA and FDA cooperated by exchanging information on a severe rodent infestation at a DJE, resulting in the seizure of millions of pounds of USDA- and FDA-regulated product. In addition, USDA and FDA district managers were able to assist each other during the recall of beef and beef products, after the December 2003 discovery of a cow infected with bovine spongiform encephalopathy (BSE, also known as mad cow disease). However, we found that the stated purpose of the agreement—which is to facilitate an exchange of information permitting more efficient use of both agencies’ resources and contributing to improved public health protection—has not been maximized. USDA and FDA are not making better use of each other’s inspection resources to reduce overlap and duplication of effort, particularly at establishments that both agencies inspect. Depending on the type and layout of the facility, a USDA inspector may have a more regular presence in an area where FDA-regulated products are maintained. For example, at a plant that produces both meat and seafood products, a USDA inspector told us that as part of his daily routine inspections he walks through the seafood processing and storage section of the plant. (See fig. 7). However, because FDA regulates seafood, the USDA inspector does not monitor or inspect the seafood storage section. The inspector noted that, with minimum training on seafood temperature controls, he could inspect this section of the plant as well. USDA officials at headquarters said the agency’s inspectors are capable of taking on FDA’s inspection responsibilities at jointly regulated facilities, given the proper resources and training. A 1974 agreement between FDA and NMFS recognizes the two agencies’ related responsibilities for inspecting seafood facilities and standardization activities, and it details actions the agencies can take to enable each agency to discharge its responsibilities as effectively as possible. The agreement states that these actions should minimize FDA inspections in the approximately 275 domestic seafood facilities that NMFS inspects under contract, as long as FDA’s inspection requirements are followed. Among other items in the agreement: FDA is to (1) request information about NMFS-inspected products when FDA is considering an enforcement action, (2) provide timely notification to NMFS of any products seized from NMFS-inspected plants, (3) inform NMFS of FDA industry guidelines and its standards for establishing compliance action levels, and (4) invite NMFS inspectors to observe FDA inspections of companies under contract to NMFS. 
NMFS is to (1) supply FDA headquarters with a list of all NMFS- inspected processing and packing establishments; (2) apply FDA requirements to NMFS-inspected products and establishments and decline to inspect, grade, or certify products that FDA would consider adulterated or misbranded; (3) upon request, provide FDA with information on NMFS-inspected products when FDA is taking or considering compliance action; and (4) cooperate with FDA in investigations of food poisoning, product recalls, and problems concerning food contamination caused by disasters or other phenomena. FDA and NMFS may meet periodically and, when appropriate, with industry to promote better communication and understanding of regulations, policy, and statutory responsibilities. If either agency believes that a particular violation is occurring in several seafood- processing plants, it may request a meeting with the other agency to consider investigative steps and, when necessary, mutually agreeable remedial action. We found that FDA is not using the agreement to minimize its inspections in seafood plants that NMFS has inspected and certified as meeting FDA’s safety standards. FDA officials said the agency does not recognize the NMFS inspections as aiding FDA in enforcing pertinent statutes. As a result, FDA is missing opportunities to leverage inspection resources and possibly avoid duplication of effort. In addition, we found that FDA is not carrying out provisions in the agreement. For example, FDA rarely provides notification of seizure actions it takes against NMFS-inspected plants, as outlined in the agreement. Furthermore, according to a senior NMFS official, NMFS communicates its inspection results to FDA, but FDA does not share its results with NMFS. FDA officials recently said they do not rely on NMFS’ inspection information for two reasons. First, NMFS conducts a fee-for-service inspection, and therefore, FDA officials believe that NMFS could have a conflict of interest because, as a nonregulatory body, it is paid for its services by the industry that it inspects. Second, FDA does not know which firms NMFS is inspecting at any given time. FDA officials said the list of firms NMFS inspects changes, depending on market fluctuations that affect each company’s need for NMFS’ services. Furthermore, FDA officials said that they already have a risk-based system in place to determine which firms to inspect, and at what frequencies, and that NMFS’ inspections are not a factor in its determination of risk. NMFS officials disagreed with FDA’s reasons for not using NMFS inspection results. First, they pointed out that NMFS’s relationship to industry is similar to USDA’s Agricultural Marketing Service, which also conducts fee-for-service grading and certification of poultry, meat, eggs, and other agricultural commodities. Second, NMFS officials said they maintain an up-to-date list of firms that the agency inspects and post this information on its Web site, most recently revised in January 2005. NMFS readily provided us with a list of the firms it was inspecting when we spoke with officials during the course of our review. Although FDA is not implementing the agreement, the agency has recognized the potential benefits of working with NMFS to leverage resources. 
In a January 2004 letter to the Under Secretary of Commerce for Oceans and Atmosphere, the then-Commissioner of Food and Drugs proposed ways that the two agencies could enhance coordination, including commissioning NMFS inspectors to help FDA meet its public health responsibilities. The Commissioner noted that using NMFS inspectors could be cost effective because the NMFS inspectors may already be on-site and the FDA inspector therefore would not have to travel to conduct an inspection. FDA has not used NMFS inspection resources under the terms of the fishery products agreement, nor has the agency used its authority under the Bioterrorism Act to commission NMFS officials. However, FDA used this authority to commission CBP officers to assist FDA at ports of entry. FDA officials said the agency has not yet considered using the act to enter into similar agreements with other federal agencies. NMFS officials said the agency would be willing to enter into such an agreement with FDA, thereby assisting FDA in reaching its goal of conducting annual inspections at all high-risk facilities. Industry associations, food-processing companies, consumer groups, and academic experts we contacted disagree on the significance of overlapping activities in the federal food safety system. However, most of these stakeholders agree that the laws and regulations governing the system should be modernized so that science and technological advancements can be used to more effectively and efficiently control current and emerging food safety hazards. While we found agreement among the stakeholders about the need for modernization, they differed about whether food safety functions should be consolidated into a single federal agency. The stakeholders we contacted disagree on whether federal agencies’ food safety functions overlap, specifically with regard to inspections. Industry associations that we spoke with, such as the Food Products Association, National Fisheries Institute, American Frozen Food Institute, and Grocery Manufacturers of America, told us that overlaps occur but do not harm the safety of food and therefore are not significant. These overlaps, they noted, occur primarily in dual jurisdiction establishments—those regulated by both USDA and FDA—or facilities inspected by both FDA and NMFS. However, some overlaps occur outside these establishments. For example, although the United Fresh Fruit and Vegetable Association’s (UFFVA) member companies are primarily inspected and regulated by FDA, companies that sell fruit and vegetables to the school meals programs are also inspected by USDA. UFFVA officials pointed out that USDA inspects fruits and vegetables to be included in school lunches; and the companies, already subject to FDA inspections, incur additional expenses for these USDA inspections. UFFVA also cited overlaps in USDA’s and FDA’s sampling and testing for pesticides and microbiological contaminants on fruits and vegetables. Other stakeholders, including the U.S. Tuna Foundation and the American Meat Institute, reported that they do not think the federal agencies’ programs overlap because USDA and FDA have specific, defined areas of responsibility for their industries. The U.S. Poultry and Egg Association added that the regulatory delineations do not always make sense, citing the split jurisdiction between USDA and FDA over the regulation of eggs. 
Specifically, FDA regulates an egg farm as a "food factory" within its area of jurisdiction, and USDA regulates plants that process eggs into products such as powdered eggs. Other stakeholders—generally food companies that are regulated by both USDA and FDA—told us that overlaps can be burdensome. These stakeholders did not see the added value of FDA's once-a-year (or less) inspections, because USDA inspectors already visit their plants daily. For example, managers at these facilities told us the following:

At an egg- and potato-processing company, each agency uses different frequencies for monitoring and ensuring food safety, with USDA inspecting the physical plant, usually daily, while FDA's inspections usually take place annually. According to a senior plant manager, FDA's inspections place more responsibility for food safety on the company. From the manager's perspective, the most effective inspection strategy would be to combine elements of both agencies' inspections into a single inspection program.

At a facility that produces USDA- and FDA-regulated foods on three different production lines, the facility must maintain different sets of paperwork for each food that the company processes in order to meet USDA and FDA HACCP and sanitation requirements. Since the USDA inspector is at the facility every day, the manager said he does not see any value added by FDA's inspection because that inspector examines the same areas of the facility—the processing lines and the refrigerated storage area—which are covered by the USDA inspector.

A facility that cans a variety of soups and bean products experienced contradictory instructions from USDA and FDA during overlapping inspections. USDA inspectors did not want the company to paint its sterilization equipment because they determined that paint chips could contaminate the food. Subsequently, an FDA inspector told the company to paint the same equipment because he determined that it would be easier to identify sanitation problems on lightly painted equipment than on the dark-colored metal. The manager of the facility said the company had to paint and then remove the paint from equipment in order to satisfy both the USDA and FDA inspectors.

In addition, at a seafood-processing plant that is inspected by both NMFS and FDA, the manager said that when FDA collects product samples for testing, it does not report test results in a timely fashion. According to the manager, NMFS inspections are preferable because the agency is able to provide test results more rapidly than FDA, which allows the company to know its products are safe before they enter the market.

A few stakeholders also saw value in some of the overlapping activities. For example:

The American Frozen Food Institute noted that USDA and FDA inspections and their complementary expertise—independent scientific assessment, research, and education—provide value in addressing food safety issues.

The quality assurance manager at a dual jurisdiction establishment with whom we spoke said he liked having a "second pair of eyes" inspecting the facilities for food safety. The company produces smoked salmon—a high-risk food—and the manager noted that having inspections by FDA and NMFS helps to ensure that products are safe and of high quality.

The majority of stakeholders we contacted said they believed that the federal food safety system needs to be modernized, though they did not agree on what direction this modernization should take.
Stakeholders’ views included (1) minor changes to improve coordination among the food safety agencies, (2) statutory changes to make the system more science and risk based, and (3) consolidating federal food safety functions into a single agency. Some large industry associations (e.g., the Grocery Manufacturers of America, the Food Products Association, the American Frozen Food Institute, the National Fisheries Institute, and the United Fresh Fruit and Vegetable Association) saw the need for only minor changes within the existing regulatory framework to enhance communication and coordination among the existing agencies. Some industry officials said that the current food safety system protects consumers, and they cited decreases in illnesses caused by foodborne bacteria, such as salmonella and listeria. The Grocery Manufacturers of America said that the food safety system must be flexible enough to allow resources to be directed toward identifying and addressing serious food safety problems but that this alteration would not require changing the food safety structure. The Grocery Manufacturers of America also reported that the current food safety system could be enhanced, and perhaps made more efficient, through enhanced interagency coordination. Other stakeholders—including representatives from industry associations, academia, consumer groups, public policy organizations, and individual food companies—believe that the system needs to be modernized through statutory changes to make it more science and risk based. According to the Consumer Federation of America, a science- or risk-based system should consider not only the risk posed by the food but also the history of the plant—whether is has a track record for producing high quality, safe food. Resources for the Future stated that the current food safety laws undermine a successful food safety system. That is, the laws do not build prevention into the farm-to-table continuum and divide responsibility and accountability for food safety among federal agencies. Further, the laws prevent risk-based allocation of resources across the federal food safety agencies. Further, USDA’s carcass-by-carcass organoleptic inspections exemplify the outmoded requirements of the current food safety system and cannot identify and control the microbiological hazards associated with meat and poultry products, such as E. coli O157:H7. Additionally, such inspections waste resources because new technologies are more efficient and effective. For example, the manager at a jointly regulated canned- goods company told us that daily inspections of meat and poultry products is wasteful and inefficient for most, if not all, heat-processed meat and poultry products, such as canned chicken or pork, since the canning process kills all the bacteria. Some stakeholders, including the Institute for Food Technologists, believe that modernizing the food safety system could be accomplished by rewriting the food safety statutes. Finally, some of the stakeholders that cited the need for modernization also believe that these changes should be accompanied by consolidation of federal food safety programs into a single agency for the following reasons: Consolidation of functions could allow a single food agency to manage the safety of the whole food chain, not just its parts, according to food safety experts at the Center for Science in the Public Interest, the Consumer Federation of America, and the University of Illinois. 
Consolidation would eliminate overlap between the agencies, especially at DJEs, according to several individual companies, and could generate substantial savings in terms of administrative efficiency and overall consistency in the application of policy, according to the Food Marketing Institute and food safety experts at Kansas State University and the University of Georgia.

The legislative changes needed to accomplish consolidation could also be used as the vehicle for modernizing the food safety statutes or establishing a scientific basis for distributing food safety resources, according to several individual food companies and food safety experts at the Center for Science in the Public Interest and Resources for the Future.

The stakeholders we contacted also identified a number of roadblocks to changing the system, whether or not they supported such consolidation. First, industry is reluctant to change from a familiar regulatory framework to one that is untested. According to the Food Marketing Institute, food companies tend to prefer the inspection process that is known to them. Second, according to some industry associations, a transition to a single agency could create a period of uncertainty, as limited resources are diverted from the existing programs, and could therefore cause vulnerabilities in the food supply. Third, the transition costs to a single agency would be higher in the short term, according to food safety experts at the University of Illinois and the University of California. Fourth, current agency employees would be concerned that a consolidation would adversely change their working lives and that institutional knowledge would be lost. Finally, some stakeholders, including the Consumer Federation of America, said that some congressional committees may be reluctant to lose jurisdiction over food safety functions.

We recognize that current statutory authorities require the food safety agencies to carry out regulatory activities that have resulted in some overlapping or duplicative activities. We have recommended in the past that federal food safety statutes be streamlined and that food safety functions be consolidated into a single agency to ensure the logical and most effective use of government resources and to protect consumers. Even within the current statutory framework, the agencies can take practical steps to reduce overlap and duplication and thereby free resources for more effective oversight of food safety. The Congress has recognized this possibility in the Bioterrorism Act by authorizing FDA to commission other agencies' officials to conduct FDA's inspection activities. Other avenues are open to the agencies as well. For example, the two interagency agreements that we examined in detail could address problems in duplicative inspections if they were more effectively implemented. Other interagency agreements designed to reduce overlap might also prove fruitful. By not effectively implementing these agreements and by not exercising the new authorities under the Bioterrorism Act, the agencies are missing opportunities to make the system more efficient and effective.

We are making seven recommendations designed to reduce or eliminate duplication and overlaps, leverage existing resources, and enhance coordination efforts among the principal federal food safety agencies.
We recommend that the Secretary of Agriculture and the Commissioner of the Food and Drug Administration work together to ensure the implementation of the interagency agreement that calls for, among other things, sharing inspection- and enforcement-related information at food-processing facilities that are under the jurisdiction of both agencies; examine the feasibility of establishing a joint training program for food inspectors; and consider the findings of USDA's foreign country equivalency evaluations when determining which countries to visit.

To better use FDA's limited inspection resources and leverage USDA's resources, we recommend that, if appropriate and cost effective, the Commissioner of the Food and Drug Administration, as authorized under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, enter into an agreement to commission USDA inspectors to carry out FDA's inspection responsibilities for food establishments that are under the jurisdiction of both agencies.

To better use FDA's limited inspection resources and leverage NMFS's resources, we recommend that the Commissioner of the Food and Drug Administration and the Under Secretary of Commerce for Oceans and Atmosphere ensure the implementation of the interagency agreement that calls for FDA to recognize the results of NMFS inspections when determining the frequency of its seafood inspections.

To strengthen management controls and maximize the effectiveness of interagency agreements that are designed to reduce overlap, increase coordination, and leverage resources, we recommend that the Secretary of Agriculture, the Commissioner of the Food and Drug Administration, the Administrator of the Environmental Protection Agency, and the Under Secretary of Commerce for Oceans and Atmosphere identify and inventory all active interagency food safety-related agreements and evaluate the need for these agreements and, where necessary, update the agreements to reflect recent legislative changes, new technological advances, and current needs.

We provided USDA, HHS, EPA, and NOAA with a draft of this report for their review and comment. We received written comments on the report and its recommendations from USDA, HHS, and NOAA. EPA provided minor technical comments.

In commenting on a draft of this report, USDA expressed serious reservations about the report, asserting that it oversimplifies food safety regulatory functions within USDA and FDA and that it exaggerates the extent of regulatory overlap. USDA appears to have misinterpreted the focus of this report, and we disagree with USDA's characterization. This report examines overlapping activities rather than the regulatory framework that allows these activities. Nevertheless, the report contains a clear and accurate acknowledgment that the agencies operate under a statutory framework that gives them different authorities and responsibilities to regulate different segments of the food supply. While we recognize that the agencies operate under different authorities, the activities they perform under these authorities are similar in nature, leading us to question why the federal agencies must continue to spend resources on overlapping, and sometimes duplicative, food safety activities. For example, we disagree with USDA's assertion that the agencies' inspection activities are vastly different.
As we identify in the report, these inspections have sufficiently common features, including the verification of sanitation procedures and good manufacturing practices at food-processing facilities, that make them candidates for consolidation within one agency. USDA's comments and our detailed response are contained in appendix V.

In commenting on a draft of this report, HHS raised concerns about our terminology regarding overlapping activities. We disagree with HHS's comment that our report's title should substitute the word duplication for overlap. That is, our report distinguishes between "overlap," which we define as similar activities being performed by more than one agency, and "duplication," which we define as essentially identical activities performed by more than one agency. Given the definitions we lay out in the report, if we modified the title as HHS suggests, we would risk implying that the agencies are literally duplicating efforts in every instance, even though we are fully aware that, under the current statutory framework, the agencies do not exactly replicate food safety activities. We also disagree with HHS's comment that our report overstates similarities in USDA and FDA inspections because, as our report clearly states, USDA and FDA inspections have similar key elements: sanitation, good manufacturing practices, and HACCP compliance oversight. It also makes it clear that USDA and FDA inspections vary depending on whether the product is a USDA- or FDA-regulated food product. Furthermore, we disagree with HHS's comment that the training programs are vastly different. As our report discusses, FDA's training curriculum includes dozens of courses that address topics common to USDA and FDA.

Overall, HHS agreed with three of the report's seven recommendations, including (1) the usefulness of USDA's foreign country evaluations, (2) identifying and inventorying all interagency agreements, and (3) the need to evaluate and update the agreements. HHS disagreed with our recommendation regarding joint training of USDA and FDA food inspectors. HHS took no position on our recommendation for using the Bioterrorism Act authorities. Finally, HHS partially concurred with two other recommendations dealing with the implementation of two interagency agreements. HHS also provided technical comments, which we incorporated in our report as appropriate. HHS's comments and our detailed response are contained in appendix VI.

EPA did not provide official comments, but it provided minor technical comments that we incorporated as appropriate. The technical comments noted that EPA will consider GAO's recommendation for better tracking of interagency agreements when the agency sets priorities for future investments in information technology.

NOAA provided written comments and agreed with the report's recommendations that pertain to NMFS. NOAA also commented that our report does a fair and thorough job of describing the food safety activities of NMFS. NOAA stated that the positions expressed in GAO's previous work continue to be germane to the issue of coordination with FDA on inspections of seafood. NOAA's comments are contained in appendix VII.

As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date.
We are sending copies of this report to the Secretary of Agriculture, the Acting Commissioner of the Food and Drug Administration, the Acting Administrator of the Environmental Protection Agency, and the Under Secretary of Commerce for Oceans and Atmosphere. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, I can be reached at (202) 512-3841 or robinsonr@gao.gov. Major contributors to this report are listed in appendix VIII.

To identify overlaps that may exist in the federal food safety system, we collected fiscal year 2003 budget data for food safety-related activities from the U.S. Department of Agriculture (USDA), the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the National Marine Fisheries Service (NMFS). We selected these agencies because they have broad food safety-related inspection and enforcement responsibilities, the function to which most federal food safety funding is dedicated. We defined overlaps as similar activities being performed by more than one agency, such as training food inspectors. In contrast, we defined duplication as essentially identical activities performed by more than one agency, such as inspecting the same food-processing facility for compliance with sanitation and good manufacturing practices requirements. We used categories of food safety activities contained in the National Academy of Sciences' 1998 report Ensuring Safe Food to group the agencies' food safety activities. These categories include monitoring/surveillance, inspection/enforcement, education/outreach, research, and risk assessment. We included three additional categories—food security, administration, and rulemaking/standard setting—to capture other relevant activities. We defined these categories of food safety activities as the following program functions:

Monitoring/Surveillance: activities related to the monitoring of foodborne illness or disease, as well as monitoring the agents of illness in the food supply, including the collection of baseline data for contaminants;

Inspection/Enforcement: activities related to ensuring compliance with agency food safety regulations, including premarket application or petition approval;

Education/Outreach: activities related to communicating food safety-related information or guidance to the public, industry, or agencies' other clients;

Research: activities related to the study of food safety-related topics, which support agency policy decisions;

Risk assessment: activities related to evaluation of the likelihood and severity of an adverse event (e.g., illness or death) on the public health as a result of the likelihood of exposure to a particular hazard;

Food security: activities related to preparing for and responding to deliberate attacks on the food supply;

Administration: supporting activities that enable the agencies to perform their food safety responsibilities. Examples of administrative activities include procurement, human resources support, financial management, travel management, and information technology support; and

Rulemaking/Standard setting: activities related to food safety policy decisions, development of regulations, and administration of regulatory review processes.
Specifically, we obtained actual expenditures and staffing level data in full-time equivalents (FTE) from the following USDA units: Agricultural Marketing Service; Agricultural Research Service; Animal and Plant Health Inspection Service; Cooperative State Research, Education, and Extension Service; Economic Research Service; Grain Inspection, Packers and Stockyards Administration; Food Safety and Inspection Service; and National Agricultural Statistics Service. We also obtained data from the following FDA units: Center for Food Safety and Applied Nutrition, Center for Veterinary Medicine, National Center for Toxicological Research, and Office of Regulatory Affairs. EPA's Office of Pesticide Programs and Office of Water, as well as NMFS's Seafood Inspection Program and Office of Sustainable Fisheries, also provided data.

To assess the reliability of the staffing and expenditure data, we questioned knowledgeable agency officials and reviewed existing documentation regarding the data and the systems that produced them. We determined that the data were sufficiently reliable for the purposes of identifying overlaps. After the agencies provided budget data, we contacted agency budget and program officials to determine what activities were linked to the expenditure and staffing data. We categorized expenditures and staffing and asked agency officials if they considered our categorizations to be appropriate. We adjusted our categorization when agency officials told us it was inaccurate. In some cases, the agencies' preferred categorization of the same activity varied. For example, FDA considered inspector training as an education/outreach activity, whereas USDA considered it an inspection-related activity. These discrepancies are noted in appendix II. After we categorized the budget and staffing level data, we identified cases in which more than one agency performed similar activities. In some instances, the agencies estimated budget and staffing levels, or they were unable to separate budget data into specific categories. As a result, some agencies did not provide expenditures and staffing data for categories such as administration, food security, and rulemaking. The agencies' officials explained that these expenditures are distributed among more than one of the other categories.

To examine the extent to which federal food safety agencies are using interagency agreements to leverage existing resources to reduce any such overlaps, we requested that agencies provide copies of all active interagency food safety-related agreements, and we selected two agreements to analyze their implementation. We compared the agreements that the agencies provided to determine where there were differences. In cases where one agency signatory provided us an agreement and one or more other signatories did not, we followed up with those agencies to reconcile the discrepancies. In some cases, we provided the agreement to an agency to obtain confirmation that it was a signatory to the agreement. We asked the agencies to categorize the agreements according to primary program function. We also independently categorized the agreements using information from their introduction and/or background to ensure consistent categorization across the agencies. If the agencies' categorization differed from ours, we considered their rationale and changed the categorization as appropriate.
We selected two inspection-related interagency agreements for in-depth review because the agencies spend most of their resources on inspection activities: one agreement that pertains to dual jurisdiction establishments (DJE) and one that pertains to inspections of fishery products. In addition, these agreements encompassed a broad range of intended coordination efforts between the agencies involved. We conducted site visits to three USDA and FDA field offices to obtain information related to implementation of the agreements. The site visits included USDA district offices in Philadelphia, Pennsylvania, and Boulder, Colorado; and FDA district offices in Philadelphia, Pennsylvania; Denver, Colorado; and Seattle, Washington, which are responsible for food safety in a total of 13 states. We met with USDA and FDA district managers, inspectors, and other staff to discuss the agreements' implementation. We also selected two to four DJEs at each location and visited them to discuss implementation of the agreements with plant managers and, in some cases, with USDA or FDA inspectors assigned to these facilities. In some cases, we accompanied inspectors as they conducted inspections of the facilities.

To obtain the views of regulated industry and other stakeholders regarding opportunities to reduce overlap by consolidating federal food safety functions, we contacted a total of 35 stakeholders from food industry associations, food manufacturers, consumer groups, and academic experts using a structured interview format consisting of 22 questions. In selecting associations, organizations, and experts, we included contacts from our previous reports and testimonies and considered recommendations from USDA, FDA, EPA, and NMFS. In selecting which food manufacturers to interview, we used Food Processing's Top 100 Companies to identify the largest food manufacturers. Of those companies, we selected and contacted food manufacturers that have facilities that produce food regulated by both USDA and FDA. We conducted our review from May 2004 through March 2005 in accordance with generally accepted government auditing standards.

We asked agencies within USDA, FDA, EPA, and NMFS to provide all expenditures and staffing levels, in FTEs, related to food safety activities they conducted in fiscal year 2003. Tables 3 and 4 categorize these expenditures and staffing levels according to program function. Though most agencies were able to identify expenditures and staffing levels related to inspection and enforcement, research, risk assessment, education and outreach, and monitoring and surveillance, many were unable to provide expenditures linked to rulemaking and standard setting, food security, and administration. For this reason, table 4, which contains these data, is included separately. As table 3 shows, inspection and enforcement-related spending accounts for most of these agencies' food safety-related spending in fiscal year 2003.

We solicited all active food safety-related interagency agreements from USDA, FDA, EPA, and NMFS. Table 5 categorizes the 71 agreements by program function. Twenty-four agreements state the need to reduce duplication of effort, reduce or clarify overlaps, or increase the efficient or effective use of resources between agencies. Table 5 also provides the year each agreement became effective and indicates the signatory agencies. The bolded agreements are the two that were analyzed in depth.
As we have noted in this and other reports, the federal framework for food safety is based on a patchwork of numerous laws. Table 6 lists the 30 laws that we have identified as the principal federal laws related to food safety in order of their enactment, along with the agency or agencies that have food safety responsibilities under each law and a brief discussion or example of each law's food safety-related provisions. Included in the table are several laws that primarily deal with health claims or labeling, which we consider to be food safety-related, as well as some laws that are largely amendments of the Federal Food, Drug, and Cosmetic Act. This table does not provide an exhaustive list of all food safety-related laws and amendments, nor does it detail all of the food safety provisions for those laws listed.

The following are GAO's comments on the U.S. Department of Agriculture's letter dated March 10, 2005.

1. We disagree with USDA's assertion that our report overly simplifies the food safety regulatory functions within USDA and FDA and that it exaggerates the extent of regulatory overlap. USDA has misinterpreted the focus of this report. This report examines overlapping activities rather than the regulatory framework that allows these activities. Nevertheless, the report contains a clear and accurate acknowledgment that the agencies operate under a statutory framework that gives them different authorities and responsibilities to regulate different segments of the food supply. While we recognize that the agencies operate under different authorities, the activities they perform under these authorities are similar in nature, leading us to question why the federal agencies must continue to spend resources on overlapping, and sometimes duplicative, food safety activities. We also disagree with USDA's assertion that the agencies' activities are vastly different. For example, we find that the agencies' inspection activities to ensure that food manufacturers comply with regulatory requirements are quite similar. As we document in our report, these inspections have sufficiently common features, such as verifying proper sanitation procedures at food-processing facilities, that make these inspection activities candidates for consolidation under one agency. We further disagree with USDA's comment that the report's recommendations rely upon overly simplistic interpretations of food safety authorities, regulations, inspection requirements, and training needs. To the contrary, our recommendations address specific areas where the agencies could improve coordination to better leverage resources. USDA did not comment directly on these recommendations.

2. We believe that USDA mischaracterizes our report in noting that it states that a "significant" overlap in inspection authorities exists. Specifically, our report examines inspection activities, not inspection authorities, and certainly does not identify "significant" overlaps in authorities. In fact, the report clearly states that, because of the agencies' split jurisdiction, they are each responsible for inspecting different food products at jointly regulated facilities. As a result, both agencies send inspectors into these facilities—USDA on a daily basis and FDA less regularly. While we agree with USDA that the number of jointly regulated establishments may be a relatively small portion of all regulated food establishments, USDA and FDA had great difficulty identifying the current number of jointly regulated facilities.
Consequently, the magnitude of potential savings is difficult to calculate. We continue to believe that, because USDA maintains a daily presence at hundreds of these facilities, FDA could make more effective use of its resources—an average cost of $4,000 per inspection—if it redirected its inspectors to other facilities for which FDA has sole jurisdiction.

3. Contrary to USDA's assertion, our report does not suggest that USDA (FSIS) and FDA have comparable HACCP regulations. Instead, our report clearly distinguishes between the elements of the agencies' HACCP regulations that are comparable and those that are not. For example, the report acknowledges that, given the agencies' different statutory authorities, both require jointly regulated facilities to maintain separate HACCP plans and states that the contents of these plans differ because the agencies regulate different products. However, as USDA itself notes, the two sets of HACCP regulations (USDA and FDA) are quite similar, and as we point out, they have certain features in common, such as certain sanitation and manufacturing processes. Therefore, we continue to believe that USDA and FDA could consolidate HACCP-based inspections at these jointly regulated facilities. To provide further clarification on what commodities are currently subject to HACCP regulations, we modified our report to indicate, as USDA suggests, that FDA currently requires HACCP plans for seafood and juice products only.

4. We disagree that our report oversimplifies or inaccurately describes federal food safety functions at ports of entry. For example, USDA noted that in order for meat, poultry, and egg products to be eligible for import to the United States, foreign food safety regulatory systems must employ equivalent sanitary measures that provide the same level of protection against food safety hazards as is achieved domestically under USDA regulations. We disagree with USDA's comment, because our report clearly and accurately describes the requirement for certification of those countries wishing to export meat and poultry into the United States, including a finding by the Secretary of Agriculture that the countries have equivalent food safety systems. The main point of our report is that the agencies are not leveraging inspection resources at ports of entry, especially regarding FDA-regulated imported foods that, according to USDA officials, are being stored at USDA-approved inspection facilities. Therefore, we continue to believe that there are opportunities to leverage inspection resources, as we are recommending. Furthermore, USDA's comment that it shares the results of its overseas equivalency determinations contradicts what USDA and FDA officials told us during the course of our review. However, we note that USDA is now sharing this information. Indeed, FDA commented that it would consider the results of USDA's foreign country equivalency determinations.

5. Any successful consolidation of inspectors' training would of course require work. However, we continue to believe that, as USDA's comments note, there is merit in examining the feasibility of conducting joint training activities when workable commonalities can be found. Our report identifies more than 100 courses in FDA's inspector training curriculum that include topics common to both USDA and FDA.

6. We disagree with USDA's assertion that implementation of the 1999 interagency agreement has been largely successful.
Our report highlights several deficiencies, even as it gives the agencies credit for improved communication in times of crisis, such as during major recalls. These deficiencies include agencies’ (1) difficulty identifying the establishments to which this agreement pertains, (2) lack of routine communication on inspection findings between agencies’ inspection personnel on such findings of mutual concern as sanitation problems at jointly regulated facilities, and (3) lack of a system to track and exchange information when each agency finds instances of noncompliance. Finally, we also found that the agencies’ efforts to develop and provide training on each other’s inspection techniques and processes did not continue past the first year of the agreement’s implementation. As a result, we continue to believe that the stated purpose of the agreement—to facilitate an exchange of information permitting more efficient use of both agencies’ resources—has not been maximized. We further disagree with USDA’s comment that our report inaccurately characterizes the Bioterrorism Act. As we state in the report, FDA is authorized under the act to enter into an agreement to commission other agency officials, including USDA officials, to carry out inspections on its behalf—for FDA-regulated foods—at establishments under the jurisdiction of both agencies. The following are GAO’s comments on the U.S. Department of Health and Human Services’ (HHS) letter, dated March 11, 2005. 1. We disagree with HHS’s comment that our report’s title should substitute the word duplication for overlap. Given the definitions we lay out in the report, if we modified the title as HHS suggests, we would risk implying that the agencies are undertaking many duplicative efforts, even though we are fully aware that under the current statutory framework the agencies do not exactly replicate food safety activities. That is, our report distinguishes between “overlap”—which we define as those similar activities being performed by more than one agency and “duplication”—which we define as essentially identical activities performed by more than one agency. We also disagree with HHS’s comment that our report overstates similarities in USDA and FDA inspections. First, our report clearly states that USDA and FDA inspections have common key elements: sanitation, good manufacturing practices, and HACCP compliance oversight. Second, the report makes it clear that USDA and FDA inspections vary, depending on whether the product is a USDA- or FDA- regulated food product. Furthermore, we disagree with HHS’s comment that the training programs are vastly different. As our report discusses, FDA’s training curriculum includes dozens of courses that address topics common to USDA and FDA. Despite HHS’s disagreement with our recommendation that the agencies examine the feasibility of establishing a joint training program for food inspectors, we continue to believe that such an examination has merit. In its comments, USDA agreed that there is merit in examining the feasibility of conducting joint training activities when commonalities can be found. 2. We agree with HHS’s comment that, if a single agency were to be responsible for the safety of all food products, different organization units within that agency may need to coordinate their activities. However, we believe that some economies of scale would be derived from combining overlapping activities, including those that our report highlights. 
For example, with a single food safety agency, the federal government would not need to have two separate food inspection workforces or two separate training programs. 3. We acknowledge that some elements of the 1999 interagency agreement on dual jurisdiction establishments have been implemented and that the agreement has enhanced coordination. However, we continue to believe that the agreement could be better implemented. We further disagree that the report does not identify the distinct difference between FDA interagency agreements and memoranda of understanding. We acknowledge that, as used in our report, the term interagency agreement refers generally to memoranda of understanding, memoranda of agreement, and interagency agreements identified by USDA, FDA, EPA, and NMFS. The report includes a footnote to indicate that FDA makes a distinction, which the other agencies do not, between interagency agreements and memoranda of understanding. The footnote explains that, according to FDA, FDA memoranda of understanding do not provide for exchanges of funds. FDA refers to agreements that involve exchanges of funds, personnel, or property as interagency agreements. We did not consider this type of agreement in our analysis. 4. We understand the differences between HACCP principles and plans. Our report acknowledges that while HACCP principles are the same for both FDA and USDA, the HACCP plans are different as they address different risks associated with different products (i.e., seafood, juice, meat, or poultry). Our report’s identification of HACCP requirements as another area of overlap between the two agencies refers to the fact that both agencies have issued HACCP regulations that are based on a similar HACCP model. We have modified our report to indicate that, short of consolidating all inspection functions, consolidating inspections of the similar elements in the agencies’ HACCP plans would reduce overlap. We further note that USDA’s HACCP rule applies to both meat and poultry products, although these products present different hazards. Thus, we believe it is possible to issue broad regulations based on common principles that can then be applied to specific products. We have made minor modifications in the report to avoid confusion regarding HACCP principles and HACCP plans and to indicate that different risks are associated with different food products and, therefore, require different HACCP plans. In addition to those named above Lawrence J. Dyckman, Katheryn Hubbell, Jane Kim, Sara Margraf, Carol Herrnstadt Shulman, Michele Fejfar, Amy Webbink, and Katherine Raheb made key contributions to this report. | GAO has documented many problems resulting from the fragmented nature of the federal food safety system and recommended fundamental restructuring to ensure the effective use of scarce government resources. In this report, GAO (1) identified overlaps in food safety activities at USDA, FDA, EPA, and NMFS; (2) analyzed the extent to which the agencies use interagency agreements to leverage resources; and (3) obtained the views of stakeholders. Several statutes give responsibility for different segments of the food supply to different agencies to ensure that the food supply is safe. The U.S. 
Department of Agriculture (USDA) and the Food and Drug Administration (FDA) within the Department of Health and Human Services (HHS) have the primary responsibility for regulating food safety, with the Environmental Protection Agency (EPA) and the National Marine Fisheries Service (NMFS) also involved. In carrying out their responsibilities, with respect to both domestic and imported food, these agencies spend resources on a number of overlapping activities, such as inspection/enforcement, training, research, or rulemaking. For example, both USDA and FDA conduct similar inspections at 1,451 dual jurisdiction establishments--facilities that produce foods regulated by both agencies. Under authority granted by the Bioterrorism Act of 2002, FDA could authorize USDA inspectors to inspect these facilities, but it has not done so. Furthermore, USDA and FDA maintain separate training programs on similar topics for their inspectors that could be shared. Ultimately, inspection and training resources could be used more efficiently. GAO identified 71 interagency agreements that the agencies entered into to better protect public health and to coordinate their food safety activities. However, the agencies have weak mechanisms for tracking these agreements that, in some cases, lead to ineffective implementation. Specifically, USDA and FDA are not fully implementing an agreement to facilitate the exchange of information about dual jurisdiction establishments, which both agencies inspect. In addition, FDA and NMFS are not implementing an agreement designed to enable each agency to discharge its seafood responsibilities effectively. GAO spoke with selected industry associations, food companies, consumer groups, and academic experts, and they disagree on the extent of overlap and on how best to improve the food safety system. Most of these stakeholders agreed that laws and regulations should be modernized to more effectively and efficiently control food safety hazards, but they differed about whether to consolidate food safety functions into a single agency. |
According to the International Organization for Migration, the February 2006 bombing of the Al-Askari Mosque in Samarra triggered sectarian violence, which increased the number of displaced Iraqis. Although military operations, crime, and general insecurity remained factors, sectarian violence became the primary driver for population displacement. Many Iraqis fled the country to neighboring states, particularly Syria and Jordan. According to the United Nations High Commissioner for Refugees (UNHCR), the 1951 United Nations Convention Relating to the Status of Refugees and its 1967 Protocol provide the foundation for modern refugee protection. According to the Convention, a refugee is someone who, "owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group, or political opinion, is outside the country of his nationality, and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country...." UNHCR is mandated to find solutions to the plight of refugees. According to UNHCR, three solutions are available: First, voluntary repatriation is the preferred solution for the majority of refugees. Most refugees prefer to return home as soon as circumstances permit (generally when a conflict has ended and a degree of stability has been restored). UNHCR promotes, supports, and facilitates voluntary repatriation as the best solution for displaced people, provided it is safe and reintegration is viable. Second, UNHCR may help refugees integrate and settle in the "asylum," or host, country where they reside as refugees. Some refugees cannot return, or are unwilling to, because they would face persecution. According to UNHCR, relatively few host countries allow refugees to settle. Third, UNHCR may assist refugees in permanently resettling in third countries. According to UNHCR, only a small number of nations take part in UNHCR resettlement programs worldwide and accept annual quotas of refugees. According to State, historically, less than 1 percent of registered refugees are resettled in third countries. In 2009, UNHCR referred 75 percent of the Iraqis being resettled in third countries (about 62,000) to the United States. This report focuses on the third solution—those Iraqis resettled in the United States. When Iraqi refugees and SIV holders arrive in the United States, they have access to federal- and state-funded assistance to help them reach self-sufficiency in their new communities. State has primary responsibility for funding and administering initial reception and placement benefits for refugees and SIV holders upon their arrival in the United States. State's PRM has cooperative agreements with 10 resettlement agencies that coordinate with local affiliates across the country to make referrals and to administer resettlement services and other assistance. HHS's ORR administers cash and medical assistance, and employment and other social services, through the states and resettlement agencies that coordinate services for refugees across the country. Regarding federal government employment, individuals are generally employed in the competitive service, the excepted service, or the Senior Executive Service. When hiring for competitive service positions, agencies use a competitive examination process set forth in Title 5 of the U.S. Code.
Some agencies have excepted service positions for which they are not required to follow OPM’s competitive examination process; instead, the agencies have the authority to establish their own hiring procedures. When agencies hire for career senior executive positions—top-level policy, supervisory, and managerial positions—the individual’s executive and managerial qualifications must be reviewed and approved by an OPM-administered Senior Executive Service Qualifications Review Board. According to OPM data, the majority of civil service employees in the United States are in the competitive service. Between fiscal years 2006 and 2009, the United States has admitted 34,470 Iraqi refugees under State’s Refugee Admissions Program. Since fiscal year 2007, State has issued 4,634 SIVs to Iraqis. Resettlement agencies, working under cooperative agreements with State, have resettled Iraqis throughout the United States, but particularly in California and Michigan. These agencies have found that Iraqis arrive in the United States with high levels of trauma, injury, and illness, which contribute to the challenges they face in resettling in a new country. In addition, entry-level jobs normally available to refugees are scarce and more competitive in the current economic downturn. State’s PRM manages the U.S. Refugee Admissions Program (USRAP)— the U.S. government’s program for accepting and processing refugee applications for resettlement in the United States. PRM’s regional refugee coordinator accepts referrals from UNHCR, embassies, and certain nongovernmental organizations (NGO). Certain categories of Iraqis with U.S. affiliations do not need a referral and may apply directly for refugee consideration under a direct access program in Jordan, Egypt, and Iraq. Overseas processing entities (OPE), working under a cooperative agreement with State, prescreen the referrals and prepare application forms by collecting and verifying personal and family information, obtaining details of persecution or feared harm, and initiating security name checks. Once the OPE prescreens the case, it is provided to DHS’s U.S. Citizenship and Immigration Services (USCIS), which makes periodic visits to the region to interview refugees and adjudicate their applications for resettlement in the United States. Once USCIS preliminarily approves cases, they are returned to the OPE, which continues processing medical screenings, sponsorship (i.e., the identification of the U.S.-based resettlement agency that will provide initial resettlement benefits), travel arrangements, and cultural orientation, among other things. The cultural orientation, which is a voluntary course for all refugees over the age of 15, addresses essential topics related to processing, travel, and resettlement, such as the role of the resettlement agency, housing, employment, health, and money management. While the OPE coordinates outprocessing, PRM secures a sponsoring resettlement agency in the United States. From fiscal years 2006 through 2009, the United States admitted 34,470 Iraqi refugees (see table 1). DHS and State’s Bureau of Consular Affairs also have implemented two SIV programs, established by Congress, to further assist qualified Iraqis who worked for or on behalf of the U.S. government and who want to immigrate to the United States. Both programs cover the principal Iraqi applicants and their dependents. Iraqi SIV holders are admitted into the United States as lawful permanent residents. 
The first SIV program, established under section 1059 of the NDAA for fiscal year 2006, targets Iraqi and Afghan translators and their dependents. The second SIV program, established under section 1244 of the NDAA for fiscal year 2008, targets certain Iraqis who had been U.S. government employees, contractors, or subcontractors and their dependents. In January 2008, Congress authorized that up to 5,000 Iraqis per year for the next 5 fiscal years, who had worked for or on behalf of the U.S. government in Iraq and had experienced or were experiencing an ongoing serious threat as a consequence, can receive SIVs. Some Iraqi refugees may also qualify for the SIV programs. To apply for special immigrant status, eligible Iraqis may file a petition, including a favorable recommendation from their U.S. civilian or military supervisor documenting their service. USCIS sends approved petitions to State’s National Visa Center, which contacts applicants to set up an in- person interview at an embassy or a consulate. Consular officials interview applicants, review the submitted documents and security and medical clearances, and issue an immigrant visa if candidates satisfy all criteria. At the end of fiscal year 2009, State had issued 2,389 SIVs to principal Iraqi applicants out of a maximum authorized 11,050 principal- applicant visas. Under the two programs, the United States issued 4,634 Iraqi SIVs from fiscal years 2007 through 2009 (see table 2). It is unclear how many Iraqis with SIVs have entered the United States. USCIS provided us with data on the number of Iraqi and Afghan SIV holders who were admitted into the United States as permanent residents (or green card holders) between fiscal years 2007 and 2009. Iraqi and Afghan SIVs are issued based on an applicant’s nationality. USCIS provided us these data by applicants’ country of birth, but could not provide the data by nationality. Therefore, we report only Iraqi SIV issuance data. Since fiscal year 2006, Iraqi refugees and SIV holders have resettled in communities across the United States. Placement decisions consider the location of an individual’s family members, potential medical needs, and municipal and sponsoring agency capacity to accept and provide for refugees and SIV holders. The largest populations of recently resettled Iraqis are in California, Michigan, Texas, Arizona, Illinois, and Virginia (see fig. 1 and app. II for more information). According to NGOs and resettlement agencies, the U.S. refugee resettlement program has been strained by a growing number of Iraqi and Afghan refugees and the economic downturn in the United States. In June 2009, the International Rescue Committee reported that the high levels of trauma, injury, and illness among Iraqi refugees contribute to the precarious nature of their resettlement. Moreover, unemployment and homelessness threaten Iraqi refugees and other populations recently resettled in the United States, according to NGOs and resettlement agencies. In October 2009, the Georgetown Law School reported that a Michigan resettlement office received funding in 2008 for 300 refugees, but served more than 1,200. Caseworkers, dealing with an average of 120 cases at a time—up from 30 the year before—could not provide what they considered sufficient employment services. According to the International Rescue Committee report and resettlement agency officials we interviewed, some Iraqi refugees face eviction because they cannot pay their rent. 
The present economic downturn has made jobs normally available to refugees, such as entry-level jobs with limited English proficiency, scarce and more competitive. An ORR official stated that, before the current economic recession, refugees could regularly secure such jobs, but since the recession these positions are generally not available. Most of the resettlement agencies stated that it is taking longer than usual—often as long as 6 months, and in some cases, 9 to 10 months—for incoming refugees to find employment. U.S. officials and resettlement agencies stated that without jobs, some refugees are unable to get by on the levels of assistance afforded them by the U.S. refugee resettlement program. Iraqi refugees, in particular, have faced difficulties finding work despite their relatively high levels of education, according to PRM, ORR, and USCIS officials, and representatives from the resettlement agencies. According to an ORR official and resettlement agency officials, the U.S. resettlement program does not take into account refugees’ prior work experience and education in job placements. Rather, the focus of the program is on securing early employment for refugees. PRM data indicate that many Iraqi refugees who were resettled in the United States in fiscal years 2007 through 2009 reported having some secondary education. PRM, ORR, and the resettlement agencies reported that educated Iraqis are struggling to find entry-level employment in the United States, much less employment in their professional field of work. For example, we interviewed three Iraqi refugees about their experience searching for employment in the United States. Two had worked for the U.S. government in Iraq, and one was unable to find an entry-level position requiring no formal education. This individual estimated that he had applied for more than 30 low-skill jobs, such as for a busboy and cleaner, before his former U.S. supervisor in Iraq helped him find a job. Iraqi refugees and SIV holders are eligible for PRM-funded basic needs support and services upon arrival in the United States. In addition, qualified Iraqi refugees and—as a result of December 2009 legislation— qualified Iraqi SIV holders can receive certain assistance generally for up to 7 years through public benefits programs. Prior to December 19, 2009, Iraqi SIV holders’ eligibility for public benefits generally ceased after 8 months. Both groups can receive up to 8 months of ORR-funded cash and medical assistance. According to PRM, its assistance typically lasts for 30 days; however, support may continue for up to 90 days if basic needs have not been met. All refugees automatically receive this assistance, which includes travel arrangements to their assigned resettlement location, basic housing, food allowances, school enrollments, and referrals for medical needs, through the resettlement agencies. As of January 1, 2010, PRM provides the resettlement agencies $1,800 per refugee to cover the direct and administrative costs of the assistance. Prior to January 1, 2010, PRM provided resettlement agencies $900 per refugee. Iraqi SIV holders do not automatically receive these benefits; they must sign up to receive them within 10 days of receiving their visas. SIV holders who do not accept PRM benefits make their own travel arrangements and may resettle anywhere in the United States. 
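The reception and placement payment described above is a flat per-person rate that depends only on the refugee's arrival date. The short Python sketch below illustrates that rule; the function name and date handling are ours for illustration and do not represent any PRM system.

```python
from datetime import date

def reception_and_placement_rate(arrival: date) -> int:
    """Illustrative per-refugee reception and placement payment, in dollars.

    As described above: $900 per refugee before January 1, 2010, and
    $1,800 per refugee on or after that date.
    """
    return 1800 if arrival >= date(2010, 1, 1) else 900

# Example: a family of four arriving in February 2010.
print(4 * reception_and_placement_rate(date(2010, 2, 15)))  # 7200
```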
According to PRM data, 1,995 SIV holders (out of 4,634 total issued visas for these years) have participated in the PRM program since 2007, when Iraqi SIV holders were first authorized to access these benefits. Qualified Iraqi refugees and, as of December 19, 2009, qualified Iraqi SIV holders may be eligible for federal public benefit programs, including Temporary Assistance for Needy Families (TANF), Medicaid and State Children’s Health Insurance Program (SCHIP), Supplemental Security Income (SSI), and Supplemental Nutrition Assistance Program (SNAP, formerly the Food Stamp Program), for generally up to 7 years, depending on the program and the state. Permanent residents (such as Iraqi SIV holders) are generally barred from receiving certain public benefits for their first 5 years in the United States. However, in 2007, Congress passed legislation establishing that Iraqi SIV holders could receive public benefits for up to 6 months. In 2008, Congress extended their allowance to 8 months. The DOD Appropriations Act for fiscal year 2010 included a provision which allows Iraqi SIV holders to be eligible for public benefits to the same extent, and for the same period of time, as refugees. Relevant agencies are in the process of issuing guidance to further define the application of this provision to Iraqi SIV holders. In addition, ORR funds social services, for which Iraqi refugees and SIV holders may be eligible, for up to 5 years. ORR social services, which include job preparation, English language classes, and assistance with job interviews, do not have income requirements and are designed to find refugees employment within 1 year of enrollment. Figure 2 provides information on the types of resettlement assistance available to qualified Iraqi refugees and SIV holders, and the impact of the December 19, 2009, legislation on the duration of time for which they may be eligible for this assistance. As figure 2 also shows, Iraqi refugees and SIV holders who are not eligible for TANF or Medicaid may be eligible for ORR-funded Refugee Cash Assistance (RCA) and Refugee Medical Assistance (RMA) for up to 8 months. According to ORR, most Iraqi refugees and SIV holders who do not qualify for TANF or Medicaid are eligible for RCA and RMA. Refugee resettlement assistance programs, such as cash assistance, ensure that refugees become self-sufficient as quickly as possible after they arrive in the United States. To participate in RCA, qualifying refugees and SIV holders must register for employment services and generally accept the first job offered, unless they can show good cause for not accepting the position. Current requirements make it difficult for qualified Iraqi refugees and SIV holders to obtain U.S. government employment. Specifically, most federal jobs in the United States require U.S. citizenship and background investigations, and Arabic language positions often require security clearances, which noncitizens cannot obtain. Over the course of our work, we identified two institutes at DOD and State that have some flexibility in hiring noncitizens for U.S. positions. Finally, DOD and State have not implemented a program intended to employ SIV holders under authority granted in 2009 legislation. U.S. government hiring requirements limit the extent to which noncitizens—including Iraqi refugees and SIV holders—can be employed in federal government positions in the United States. 
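Before the discussion turns to federal employment, the assistance windows laid out above (and in fig. 2) can be summarized in a simple lookup. This is an illustrative sketch using our own labels, not an agency data structure, and it simplifies rules that vary by program, state, and individual circumstances.

```python
# Maximum assistance windows described above (simplified; actual eligibility
# depends on the program, the state, and individual circumstances).
ASSISTANCE_WINDOWS = {
    "PRM reception and placement": "typically 30 days, up to 90 days",
    "ORR Refugee Cash and Medical Assistance (RCA/RMA)": "up to 8 months",
    "ORR-funded social services": "up to 5 years",
    "Public benefits (TANF, Medicaid/SCHIP, SSI, SNAP)": "generally up to 7 years",
}

# Effect of the December 19, 2009, legislation on Iraqi SIV holders' access
# to public benefits, as described above.
SIV_PUBLIC_BENEFITS = {
    "before December 19, 2009": "generally ceased after 8 months",
    "on or after December 19, 2009": "same extent and duration as refugees",
}

for benefit, window in ASSISTANCE_WINDOWS.items():
    print(f"{benefit}: {window}")
```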
Iraqi refugees and SIV holders seeking federal government employment also face challenges posed by requirements for background investigations, and, for certain positions, security clearances. First, U.S. government agencies are restricted from employing noncitizens in competitive service positions. For example, USCIS reported that it may employ only U.S. citizens and nationals as Arabic language specialists because the positions are in the competitive service. Under a provision passed in the fiscal year 2010 Consolidated Appropriations Act, agencies can use appropriated funds to employ qualifying permanent residents and refugees seeking U.S. citizenship in the excepted service or the Senior Executive Service. Second, a particular agency may have specific legislation that prohibits that agency from employing noncitizens in certain positions. For example, State may employ only U.S. citizens in the Foreign Service, including its overseas positions that require Arabic. Similarly, DHS’s Transportation Security Administration may only employ U.S. citizens as Transportation Security Officers. According to OPM officials, it is difficult to complete background investigations, which are required for all U.S. government employees, on Iraqi refugees and SIV holders. For example, it is difficult to obtain the information necessary to verify Iraqi refugees’ or SIV holders’ employment history and other information required for the investigation. In addition, OPM officials stated that the background checks used to hire Iraqis as part of the U.S. mission in Iraq are not sufficient to substitute for the background investigation required for civil service employment in the United States. In addition, some U.S. government positions may also require security clearances to ensure that national security information is entrusted only to those who have proven reliability and loyalty to the nation; however, noncitizens cannot obtain security clearances. Four of the five agencies we reviewed reported that security clearances are required for most or all of their positions that require or prefer knowledge of Arabic or Iraq; USAID requires security clearances for all direct-hire positions. For example, DOD, DHS, and DOJ have intelligence positions that may require Arabic, but all such positions require a security clearance. Similarly, USAID officials said that, while they have a preference for persons who speak Arabic or have knowledge of Iraq, all civil service and all Foreign Service positions at USAID require security clearances. In addition, officials in the Human Rights Violators and War Crimes Unit in DHS’s Immigrations and Customs Enforcement reported that, as of September 2009, there were 20 open investigations that would benefit from Arabic language skills. However, all staff in the unit must have security clearances. Certain federal positions in the United States at DOD and State are open to noncitizens, including Iraqi refugees and SIV holders. Specifically, as of November 6, 2009, DOD’s Defense Language Institute (DLI) reported having 501 Arabic positions—including 32 open positions; all were available to noncitizens. Similarly, all 21 Arabic positions at State’s Foreign Service Institute (FSI) are available to noncitizens, according to FSI (see table 3). Both DLI and FSI reported that they had previously hired foreign nationals to fill these types of positions. 
DLI and FSI can hire noncitizens, including Iraqi refugees and SIV holders, because their language instructor positions are in the excepted service. Neither DLI nor FSI requires security clearances, because Arabic instructors do not need access to classified information, according to personnel officials at each institute. However, the positions do require background investigations, and they may also require degrees or other educational backgrounds. The NDAA for fiscal year 2009 authorized DOD and State to jointly establish a temporary program to employ Iraqi SIV holders who have resettled in the United States as translators, interpreters, and cultural awareness instructors, but the agencies have not done so. According to OPM officials, DOD and State are authorized to hire Iraqi SIV holders (1) as temporary employees in excepted service positions or (2) as personal services contractors, in which case they are not federal employees. In the committee report for the fiscal year 2010 NDAA, the House Armed Services Committee noted that Iraqi SIV holders' fluency in Arabic and knowledge of Iraq could be useful to the U.S. government. The committee also noted that many of the SIV holders worked on behalf of the United States and coalition forces for years, often at great risk to themselves or their families. Although DOD and State have needs for Arabic speakers, such as language instructors at DLI and FSI, DOD policy officials and State human resource officials stated that the agencies do not plan to establish this program to employ qualified Iraqi SIV holders to fill any unmet needs. A senior DOD policy official stated that DOD's human resources divisions did not have a need for additional Arabic speakers. Moreover, DOD and State officials stated that the departments did not receive any funding for the program. DOD provided written comments on a draft of this report (see appendix III). State, DHS, and HHS provided technical comments, which we incorporated as appropriate. We also sent a draft of this report to DOJ, USAID, and OPM, but they did not provide comments. DOD noted that it is meeting its need for translators, interpreters, and cultural awareness instructors with knowledge of Arabic or Iraq through existing hiring authorities. Therefore, as we state in our report, DOD has not identified a need to establish the temporary employment program for Iraqi SIV holders pursuant to the NDAA for fiscal year 2009. We are sending copies of this report to interested congressional committees and the Secretaries of State, Defense, Health and Human Services, and Homeland Security, as well as the Attorney General, the Administrator of USAID, and the Director of OPM. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In this report, we (1) provide information on the status of resettled Iraqis in the United States and the initial challenges they face, (2) review the benefits afforded to Iraqi refugees and special immigrant visa (SIV) holders, and (3) review the challenges faced by Iraqi refugees and SIV holders in obtaining employment with the federal government.
To provide information on the number and location of resettled Iraqis and the initial challenges they face, we collected and analyzed documentation and interviewed officials from the Department of State’s (State) Bureau of Population, Refugees, and Migration (PRM) and Consular Affairs; the Department of Health and Human Services’ (HHS) Office of Refugee Resettlement (ORR); and the Department of Homeland Security’s (DHS) U.S. Citizenship and Immigration Services (USCIS). In addition, we interviewed representatives from 10 resettlement agencies that work with PRM and ORR to provide benefits and services to Iraqi refugees and SIV holders: Church World Service; Episcopal Migration Ministries; Ethiopian Community Development Council; Hebrew Immigrant Aid Society; Iowa Department of Human Services, Bureau of Refugee Services; International Rescue Committee; Lutheran Immigration and Refugee Service; U.S. Committee for Refugees and Immigrants; U.S. Conference of Catholic Bishops; and World Relief. We also interviewed two nongovernmental organizations (NGO) that work with PRM and ORR to provide technical assistance to resettlement agencies on refugee employment and cultural adjustment issues. We reviewed reports issued by NGOs on the status of Iraqi refugees in the United States and the challenges they face in resettling in this country. We interviewed several Iraqi refugees about their resettlement experiences; their views or experiences may not be representative of other refugees or SIV holders. To determine the reliability of Consular Affairs data on Iraqi SIV issuances, we interviewed the Consular Affairs official who maintains this data. We determined that the data were sufficiently reliable to report on the number of Iraqi SIVs issued between fiscal years 2007 and 2009. USCIS provided us with data on the number of Iraqi and Afghan SIV holders who were admitted into the United States as permanent residents (or green card holders) between fiscal years 2007 and 2009. Iraqi and Afghan SIVs are issued based on an applicant’s nationality. USCIS provided us these data by applicants’ country of birth, but could not provide the data by nationality. As a result, we determined that these data were not sufficiently reliable to indicate how many Iraqi SIV holders were admitted into the United States during this time period. Therefore, we report only Iraqi SIV issuance data. To determine the reliability of PRM data on resettled Iraqi refugees and SIV holders, we interviewed the PRM officials who monitor and use these data. We determined that the data were sufficiently reliable to report on the number, locations, and reported general education levels of resettled Iraqis between fiscal years 2006 and 2009. To review the benefits afforded Iraqi refugees and SIV holders, we collected and analyzed relevant laws, regulations, and agency policies regarding federally and state-funded and administered refugee resettlement programs. We interviewed officials from PRM and ORR to determine the types of benefits available and their eligibility requirements. The majority of our audit work was completed prior to the December 2009 passage of the fiscal year 2010 Department of Defense (DOD) Appropriations Act, which changed Iraqi SIV holders’ eligibility for public benefits. To review the challenges Iraqi refugees and SIV holders face in obtaining employment with the federal government, we analyzed relevant laws, regulations, executive orders, and agency policies on U.S. 
government employment and personnel security requirements. The majority of our audit work was completed prior to the December 2009 passage of the fiscal year 2010 Consolidated Appropriations Act, which changed a long-standing restriction on the federal government's use of appropriated funds to employ noncitizens in the United States. We interviewed officials from the Office of Personnel Management (OPM) regarding requirements for U.S. government employment. We also interviewed program, human resource, and security officials from five key agencies—DOD (specifically, the Army), State, DHS, the Department of Justice, and the U.S. Agency for International Development (USAID)—regarding their employment and personnel security requirements for positions in the United States. We chose these agencies because they have national security missions, ongoing programs in Iraq, and needs for personnel with Arabic language skills; we did not include the intelligence community. We focused on employment in the United States because Iraqi refugees and SIV holders who want to apply for U.S. citizenship generally must reside in the United States for a certain period of time. In addition, refugees' ability to apply for permanent resident status could be delayed if they travel overseas. We did not develop an inventory of the agencies' needs for Arabic language skills or Iraqi expertise. We also interviewed policy officials at DOD and State regarding the temporary program authorized by the fiscal year 2009 Duncan Hunter National Defense Authorization Act to employ Iraqi SIV holders who have resettled in the United States as translators, interpreters, and cultural awareness instructors at the two departments. To assess the reliability of data on Arabic positions at DOD's Defense Language Institute (DLI) and State's Foreign Service Institute (FSI), we interviewed human resource officials at DLI, DOD's U.S. Army Training and Doctrine Command, and FSI. We determined that the data were sufficiently reliable to report on the number of Arabic positions at DLI and FSI. Table 4 provides data on the numbers of Iraqi refugees and special immigrant visa (SIV) holders who were resettled in the United States from fiscal years 2006 through 2009. The six states with the highest numbers in each category are noted with an asterisk. In addition to the contact named above, Tetsuo Miyabara, Assistant Director; Kathryn H. Bernet; Muriel Brown; Lynn Cothern; Martin de Alteriis; Etana Finkler; Corissa Kiyan; Mary Moutsos; Steven Putansu; and Lindsay Read made key contributions to this report.

Since the February 2006 bombing of the Al-Askari Mosque in Samarra that triggered the displacement of thousands of Iraqis, the United States has taken a lead role in resettling the displaced. The administration has indicated its intent to assist those Iraqis who supported the United States in Iraq. In addition, Congress authorized the Departments of Defense (DOD) and State (State) to jointly establish and operate a program to offer temporary employment to Iraqi special immigrant visa (SIV) holders in the United States. This report provides information on the (1) status of resettled Iraqis in the United States and the initial challenges they face, (2) benefits afforded Iraqi refugees and SIV holders, and (3) challenges they face obtaining employment with the federal government. GAO conducted this review under the Comptroller General's authority.
GAO analyzed data on Iraqi refugees and SIV holders in the United States, and laws and regulations on the benefits afforded to them. GAO also analyzed U.S. government employment and personnel security requirements. GAO interviewed officials from five key agencies regarding these requirements. This report does not contain recommendations. DOD provided official comments. State and the Departments of Homeland Security and Health and Human Services (HHS) provided technical comments. GAO incorporated these comments, as appropriate. Between fiscal years 2006 and 2009, the United States admitted 34,470 Iraqi refugees under State's Refugee Admissions Program. In addition, State issued 4,634 SIVs to Iraqis pursuant to two programs, established by Congress to help Iraqis who previously worked for the U.S. government in Iraq. Resettlement agencies, working under cooperative agreements with State, have resettled Iraqis throughout the United States but particularly in California and Michigan. These agencies have found that Iraqis arrive in the United States with high levels of trauma, injury, and illness, which contribute to the challenges they face in resettling in a new country. In addition, entry-level jobs normally available to refugees are scarce and more competitive in the current economic downturn. Iraqi refugees generally have high levels of education, according to U.S. officials and representatives from the resettlement agencies. Nevertheless, Iraqis have struggled to find entry-level employment in the United States. Iraqi refugees and SIV holders are eligible for resettlement assistance and public benefits upon arrival in the United States. State provides resettlement agencies $1,800 per person to cover basic housing, food, and assistance for accessing services during their first 30 days in the United States; however, support may continue for up to 90 days if basic needs have not been met. Refugees automatically receive these benefits; Iraqi SIV holders must elect to receive them within 10 days of receiving their visas. In addition, qualified Iraqi refugees and, as a result of December 2009 legislation, qualified SIV holders can receive certain assistance for up to 7 years through public benefits programs. Prior to December 19, 2009, Iraqi SIV holders' eligibility for public benefits generally ceased after 8 months. Both groups can also receive up to 8 months of cash and medical assistance from HHS if they do not qualify for public benefits. In addition, HHS funds social services, including job preparation, English language classes, and assistance with job interviews, for which Iraqi refugees and SIV holders may be eligible for up to 5 years. Iraqi refugees and SIV holders, including those who acted as interpreters and linguists for civilian agencies and military commands in Iraq, have limited opportunities for federal employment. Most federal positions in the United States require U.S. citizenship and background investigations; certain positions, including most positions related to Arabic or Iraq, also require security clearances, which noncitizens cannot obtain. However, GAO did identify positions at DOD's Defense Language Institute and State's Foreign Service Institute open to qualified noncitizens. Finally, State and DOD have not established the temporary program intended to offer employment to Iraqi SIV holders under authority granted the agencies in fiscal year 2009 legislation. 
Although both agencies have positions requiring Arabic language skills, neither identified any unfilled needs that could be met by employing Iraqi SIV holders through this joint program.
Reserve components participate in military conflicts and peacekeeping missions in areas such as Bosnia, Kosovo, and southwest Asia, and assist in homeland security. From fiscal year 1996 through fiscal year 2001, an average of about 11,000, or 1 percent, of the roughly 900,000 reservists were mobilized each year. The length of mobilizations can be as long as 2 years with the mean length of mobilizations for the 6-year period we reviewed being 117 days. As of April 2002, about 80,000, or 8 percent, of reservists had been mobilized for 1 year for operations related to September 11, 2001. At the same time, additional reserve personnel continued to be deployed throughout the world on various peacekeeping and humanitarian missions. The rights of mobilized personnel of the reserve components are protected under the Soldiers’ and Sailors’ Civil Relief Act of 1940 (SSCRA), as amended, and by the Uniformed Services Employment and Reemployment Rights Act of 1994 (USERRA), as amended. Included in these acts are protections related to health care coverage. For example, SSCRA provides protections for reservists who have individual health coverage. Specifically, for individually covered reservists returning from active duty, SSCRA requires private insurance companies to reinstate coverage at the premium rate they would have been paying had they never left. Under SSCRA, the insurance company cannot refuse to cover most preexisting conditions. During military service, USERRA protects reservists’ employer-provided health benefits. Specifically, for absences of 30 days or less (training periods typically last 2 weeks or less), health benefits continue as if the employee had not been absent. For absences of 31 days or more, coverage stops unless (1) the employee elects to pay for the coverage, including the employer contributions, or (2) the employer voluntarily agrees to continue coverage. Under USERRA, employers must reinstate reservists’ health coverage the day they apply to be reinstated in their civilian positions—even if the employers cannot put the employees back to work immediately. Reservists mobilized under federal authorities are covered by TRICARE, DOD's health care system. If they are ordered to active duty for 31 days or more, reservists are enrolled in Prime, TRICARE’s managed care option, and—like other active duty personnel—are required to receive care through TRICARE, either through 1 of 580 MTFs worldwide, or through TRICARE’s network of civilian providers. When reservists’ mobilization orders are for 31 to 178 days, their dependents are eligible for the Standard and Extra options—TRICARE’s fee-for-service and preferred provider options, respectively. Once eligible for TRICARE, reservists and their dependents also become eligible for prescription drug benefits. When reservists’ orders are for 179 days or more, dependents are eligible for health care under Prime. Under TRICARE, active duty personnel, including mobilized reservists, do not pay premiums for their health care coverage; however, depending on the option chosen, they may be responsible for copayments, deductibles, and enrollment requirements for their dependents. (For an overview of these benefits, see table 1.) Mobilized reservists are eligible for dental care through the military health care system. However, like active duty dependents, mobilized reservists’ dependents are only eligible for dental care if they participate in DOD’s voluntary dental insurance program, which requires enrollment and has monthly premiums. 
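The TRICARE eligibility rules summarized in table 1 key off the length of a reservist's active duty orders. The following minimal Python sketch captures those thresholds as described above; it assumes mobilization under federal authorities, and the function name and labels are ours, not DOD's.

```python
def tricare_coverage(order_days: int) -> dict:
    """Illustrative mapping from order length to the coverage described above."""
    if order_days < 31:
        # Orders of 30 days or less are not addressed by the rules summarized here.
        return {"reservist": "see service rules", "dependents": "see service rules"}
    if order_days <= 178:
        return {"reservist": "enrolled in TRICARE Prime",
                "dependents": "eligible for TRICARE Standard or Extra"}
    # Orders of 179 days or more.
    return {"reservist": "enrolled in TRICARE Prime",
            "dependents": "eligible for TRICARE Prime"}

print(tricare_coverage(120))
```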
Because mobilized reservists’ dependents could be liable for two health coverage deductibles in 1 year—their civilian insurers’ deductible prior to mobilization and the TRICARE Standard or Extra deductible once mobilized—DOD has used authorities included in the National Defense Authorization Acts for 2000 and 2001 to provide financial assistance through several demonstration programs. For example, the Reserves Component Family Member Demonstration Project—available for those currently mobilized under DOD’s Operation Noble Eagle and Operation Enduring Freedom—eliminates the TRICARE deductible and the requirement that dependents obtain statements that inpatient care is not available in an MTF before obtaining nonemergency treatment from a civilian hospital. In addition, DOD may pay non-network physicians up to 15 percent more than the TRICARE rate for treating dependents of mobilized reservists—a cost that otherwise would be borne by dependents if physicians required this additional payment. Until recently, DOD had administered a transitional benefit program that provided demobilized reservists and their dependents 30 days of additional TRICARE coverage as they returned to their civilian health care. The 2002 NDAA extended the transitional period during which reservists may receive TRICARE coverage from 30 days to 60--120 days, depending on the length of active duty service. This change more closely reflects the 90 days that USERRA provides reservists to apply for civilian reemployment when they are mobilized for more than 181 days, and the change will provide health care coverage if they elect to delay return to their employment subsequent to demobilization. However, the 2002 NDAA did not provide any transitional benefit for dependents. Overall, the percentage of reservists with health care coverage when they are not mobilized is similar to that found in the general population—and, like the general population, most reservists have coverage through their employers. According to DOD’s 2000 Survey of Reserve Component Personnel, nearly 80 percent of reservists reported having health care coverage. In the general population, 81 percent of 18 to 65 year olds have health care coverage. Officers and senior enlisted personnel were more likely than junior enlisted personnel to have coverage. Only 60 percent of junior enlisted personnel, about 90 percent of whom are under age 35, had coverage—lower than the similarly aged group in the general population. Of reservists with dependents, about 86 percent reported having coverage. Of reservists without dependents, about 63 percent reported having coverage. More than three-quarters of reservists were provided health care coverage by their civilian employers’ health plans or their spouses’ health plans. (See fig. 1.) Some reservists were covered by more than one health plan. Most reservists maintained their civilian coverage when mobilized. Reservists generally maintained this coverage to better ensure continuity of health benefits and care for their dependents, sometimes at an additional cost. However, some reservists who dropped their civilian insurance to use TRICARE reported that their dependents had problems finding providers, establishing eligibility, understanding TRICARE’s benefits, and obtaining assistance when questions or problems arose. We found that such problems could be ameliorated through additional education and assistance targeted to reservists and their dependents. 
Because most reservists maintained their civilian coverage when mobilized, few dependents experienced disruptions in coverage. According to DOD’s 2000 survey, about 87 percent of reservists who had been mobilized at least once reported having civilian insurance at the time they were mobilized. The remaining 13 percent did not have civilian coverage. Of those who had civilian coverage, about 90 percent maintained it while mobilized. According to DOD officials and reservists we interviewed, many reservists maintained their civilian coverage to avoid disruptions associated with a change to TRICARE and to ensure that their dependents could continue seeing their current providers—who may not accept TRICARE reimbursements, either as network providers or under the Standard option. Preserving provider relationships was especially important to reservists whose dependents with special needs had specialists familiar with their care or to dependents who had long-standing relationships with civilian providers. Reservists we contacted reported varying financial arrangements for covering the costs of their civilian premiums while they were mobilized. USERRA does not require employers to continue paying their share of health insurance premiums when mobilizations extend beyond 30 days. However, employers continued to pay at least their portion of health insurance premiums beyond this 30-day period for about 80 percent of the reservists we contacted who maintained their employer-sponsored coverage. Sometimes, these employers paid all costs, both their own and the employee portion, while in other instances reservists continued to pay the employee portion of the premium. The remaining reservists paid the total insurance premium while mobilized. In the general population in 2001, the average employer-sponsored premium for a family plan was $588 per month with the employee generally paying about 26 percent of this premium. Mobilized reservists who used TRICARE reported a variety of problems that they and their dependents experienced when they tried to access the system. However, when DOD provided information and assistance targeted toward the situations reservists and their dependents face, these types of problems were more likely to be averted. The most common problems that reservists reported were difficulties they and their dependents had moving into the system—finding TRICARE providers, establishing eligibility, understanding TRICARE’s benefits, and obtaining assistance when questions or problems arose. While similar problems have been reported by other active duty personnel, reservists and their dependents are more likely to experience such problems because they often live in areas distant from MTFs, and their active duty service is brief and episodic. Of the 360 reservists with recent mobilization experience that we contacted, about 38 percent reported some kind of problem with TRICARE. One problem, constituting about a quarter of the reported problems, was finding a TRICARE provider. Mobilized reservists and their dependents can have more difficulty finding TRICARE providers because many do not live in areas where the network is robust. Compared to 5 percent of active duty personnel, about 70 percent of reservists live and work more than 50 miles (or an hour’s drive) from an MTF—areas DOD has designated as remote. Because DOD’s civilian contractors are generally not required to establish TRICARE civilian networks in these areas, a network of providers may not exist. 
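DOD's "remote" designation discussed above turns on distance, or drive time, from the nearest MTF. The following is a minimal sketch of that threshold, assuming a straightforward 50-mile or 60-minute cutoff as described in the text.

```python
def is_remote(miles_to_nearest_mtf: float, drive_minutes: float) -> bool:
    """Illustrative check of the 'remote' designation described above:
    more than 50 miles, or more than an hour's drive, from an MTF."""
    return miles_to_nearest_mtf > 50 or drive_minutes > 60

# Per the report, about 70 percent of reservists live and work in such areas,
# compared with about 5 percent of active duty personnel.
print(is_remote(miles_to_nearest_mtf=62, drive_minutes=75))  # True
```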
Where networks do exist, provider choice may be limited. TRICARE Prime Remote (TPR) and TPR for Active Duty Family Members were established to help improve access to care in remote areas for active duty and mobilized reservists and their dependents. However, dependent eligibility is statutorily based on residing with a service member who both lives and works in a remote area. As a result, because mobilized reservists are most often assigned to work in a location near an MTF or deployed overseas, few dependents of reservists who are mobilized for 179 days or more are eligible for these programs. About 17 percent of reported problems involved documenting and establishing eligibility. For example, reservists had problems with DOD not providing identification cards acknowledging that they and their dependents were TRICARE beneficiaries. They also had difficulties with the accuracy of information in the Defense Enrollment Eligibility Reporting System (DEERS), DOD’s database that maintains benefit eligibility status. In order to ensure TRICARE eligibility, any status changes must be reported to DEERS, and according to a DOD civilian contractor, the services do not always send these changes to DEERS promptly. Reservists reported a variety of situations in which DEERS inaccuracies created problems. DEERS did not reflect that some reservists were on active duty; therefore, they and their dependents appeared to be ineligible for services and were denied care or medications. Further, in instances in which DEERS failed to reflect Prime enrollment for a dependent, claims were paid under Extra, resulting in charges for copayments that should not have been required. Also, mobilized reservists married to active duty personnel reported problems ensuring that DEERS accurately reflected their mobilized status so that they were eligible for active duty, rather than dependent, benefits and access privileges. Active duty families also have problems with DEERS, but, according to a TRICARE adviser at one site we visited, DEERS problems are accentuated for reservists because they move in and out of the system. However, determining the extent of such DEERS problems was beyond the scope of our work. Finally, about 40 percent of the problems reservists reported related to understanding TRICARE’s benefits and obtaining assistance when questions or problems arose. According to DOD officials, mobilized reservists have greater difficulty understanding and navigating TRICARE than other active duty personnel. First, reservists have less incentive to become familiar with TRICARE because mobilizations are for a limited period and because TRICARE only becomes important to them and their dependents if they are mobilized. Further, when first mobilized, reservists must accomplish many tasks in a compressed period. For example, they must prepare for an extended absence from home, make arrangements to be away from their civilian employment, obtain military physical examinations, and ensure that their families are registered in DEERS. DOD officials told us that learning about TRICARE may be a low priority for reservists when they are mobilizing. According to interviews with reservists and support personnel at sites we visited, problems with TRICARE could be reduced if education and administrative assistance were available and information was targeted to the needs of reservists. 
In addition, when beneficiaries, especially reservists’ dependents, were provided assistance with using the TRICARE system—identifying contact points and understanding TRICARE benefits and how to use them—they generally were able to obtain appropriate, timely health care through TRICARE. At one site we visited, assistance had been lacking or inadequate, and reservists were experiencing numerous difficulties with TRICARE. Here, 1,100 personnel, who were mobilized beginning in late September 2001 under Operation Noble Eagle and Operation Enduring Freedom, initially had no on-base MTF or TRICARE assistance. As a result, when questions arose, these mobilized reservists and their dependents sometimes obtained and passed along inaccurate information. In other instances they contacted TRICARE’s civilian contractor directly, sometimes waiting for over an hour on hold trying to obtain information. In November 2001, two administrative personnel were assigned, including a health benefits expert, and at the time of our visit in February 2002, progress was being made to resolve reservists’ and their dependents’ health care questions. However, because this assistance was initially delayed, two staff members were insufficient to address the volume of misinformation and problems that existed on site. Beneficiaries told us they were still confused about TRICARE regulations at the time we visited. Some mobilized reservists still did not understand that they had to select a TRICARE primary care manager and were continuing to use their non-network providers, even though regulations require active duty personnel to participate in Prime. Likewise, their dependents were continuing to have problems, such as determining whether they could continue to see their civilian providers under TRICARE. At another site we visited, which had an MTF and better on-base assistance, we observed that reservists and their dependents generally were not experiencing problems with TRICARE. In this location DOD had a mobilization team on site to help explain the benefits and had a staff on base to offer assistance when needed. To help ensure that reservists and dependents understood the various TRICARE options, the mobilization team presented general information on TRICARE and tailored benefits discussions to beneficiaries’ specific circumstances. For example, the mobilization team tailored TRICARE information depending on whether reservists’ dependents lived in areas with established networks or in areas where TRICARE networks were minimal or nonexistent. For the latter, the mobilization team discussed how TRICARE’s Standard option could permit dependents to continue relationships with civilian physicians by paying copayments similar to those required by many civilian insurers. The mobilization team members also referred reservists to TRICARE offices, Internet Web links, and toll-free information lines, and provided backup telephone numbers, including their own, to handle additional questions. The 2002 NDAA directed us to evaluate several health coverage options through TRICARE, FEHBP, or civilian insurance as possible mechanisms for ensuring continuity in benefits for reservists and their dependents. Some of the options would provide coverage on a continuous basis during the entire enlistment period, regardless of reservists’ mobilization status, while others would provide additional or alternative coverage only during or following periods of mobilization. 
Cost estimates for these options, which were provided by CBO, range from a low of about $89 million to a high of about $19.7 billion over a 5-year period. (See app. II for estimate assumptions.) For 2003 through 2007, the estimated cost to DOD for providing reservists and their dependents continuous health care coverage, regardless of reservists’ mobilization status, would range from about $4 billion to $19.7 billion for the 5-year period, depending on how the benefit was provided. CBO estimates that providing the benefit through TRICARE with no premium for reservists would cost DOD about $10.4 billion. (See table 2.) DOD’s cost would be reduced to about $7 billion if reservists paid a premium similar to that paid by active duty retirees under age 65 or to about $4 billion if reservists paid a premium share similar to that paid by federal employees for FEHBP. Providing insurance through FEHBP would be more expensive to DOD because CBO estimated the premium would be based on the existing FEHBP pool—an older population using more health care services. (See table 3.) While CBO estimates that the actual cost of providing health care for reservists and their dependents under FEHBP would be about $10.9 billion, similar to the cost of providing the TRICARE benefit, it estimates the DOD health insurance premium costs for FEHBP to be about $19.7 billion. If reservists paid the typical FEHBP employee portion of the premium, CBO estimates that DOD premium costs would be reduced to about $10.2 billion. The cost for options providing health care coverage only during mobilizations or for expanding the benefit after mobilizations would be from $89 million to $1.8 billion over the 5-year period, according to CBO estimates. (See table 4.) For example, in lieu of a TRICARE benefit, DOD might assume the costs of reservists’ civilian coverage during mobilization. The value of this benefit would vary from reservist to reservist depending on (1) the cost of the reservist’s portion of the premium, (2) the extent of employer coverage, and (3) whether the employer continued to pay the premium during the reservist’s mobilization. CBO estimates that if each year 80,000 reservists, the approximate number mobilized in April 2002, were mobilized for a 1-year period, the cost to fully pay for civilian health coverage for the 5-year period would be about $1.8 billion. The cost of DOD allowing dependents with civilian insurance the choice of TRICARE or a monetary voucher equivalent to the estimated value of the TRICARE benefit would be about $1.1 billion over 5 years, according to CBO’s estimate. Although the amount of this voucher would be based on the average cost of the TRICARE benefit for which the dependent is eligible, this option would increase DOD’s costs because historically many dependents of mobilized reservists have relied on their civilian coverage and have not used their TRICARE benefit. Revising the transitional period that DOD has provided so that demobilized reservists retain their TRICARE benefits for an additional 30 days and their dependents retain benefits for a 90-day period would cost $89 million for the 5-year period, according to CBO’s estimate. Because most reservists have civilian insurance and maintain it while mobilized, few of their dependents experience problems with disruptions to their health care, such as being forced to change providers, learn new health care plan requirements, and adjust to different benefit packages. 
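To make the arithmetic behind these 5-year estimates concrete, the sketch below assembles a rough projection for the TRICARE (no premium) option from the enrollment and per-person cost assumptions listed in appendix II. It is an illustration only: the force size, take-up rate, dependent share, per-person costs, and inflation rates come from this report, but the phase-in fractions are assumed and CBO's actual model includes adjustments (such as second-payer costs) that are omitted here, so the result only approximates the $10.4 billion estimate.

```python
# Simplified sketch of a 5-year cost projection in the spirit of the CBO
# estimates discussed above. Inputs come from this report and appendix II;
# the phase-in fractions are assumptions, and CBO's model includes details
# (e.g., second-payer adjustments) omitted here.
RESERVISTS = 865_000          # total reserve force assumed by CBO
FEHBP_SHARE = 0.14            # federal employees excluded (presumed FEHBP coverage)
TAKE_UP = 0.90                # share expected to use a no-premium benefit
WITH_DEPENDENTS = 0.5042      # share of the force with dependents
COST_SINGLE_2003 = 1_513      # 2003 cost per single reservist
COST_FAMILY_2003 = 5_173      # 2003 cost per reservist with a family
INFLATION = [0.085, 0.075, 0.065, 0.065, 0.065]   # 2003-2007
PHASE_IN = [1 / 3, 2 / 3, 1.0, 1.0, 1.0]          # assumed 3-year phase-in

def five_year_cost() -> float:
    eligible = RESERVISTS * (1 - FEHBP_SHARE) * TAKE_UP
    families = eligible * WITH_DEPENDENTS
    singles = eligible - families
    single_cost, family_cost = COST_SINGLE_2003, COST_FAMILY_2003
    total = 0.0
    for year, (inflate, phase) in enumerate(zip(INFLATION, PHASE_IN)):
        if year > 0:                      # hold 2003 costs; inflate later years
            single_cost *= 1 + inflate
            family_cost *= 1 + inflate
        total += phase * (singles * single_cost + families * family_cost)
    return total

print(f"Rough 5-year cost: ${five_year_cost() / 1e9:.1f} billion")
```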
However, when using TRICARE some dependents of mobilized reservists have experienced certain problems—in part, because they do not adequately understand how the plan works. Problems that reservists and their dependents face with health coverage during mobilizations could be mitigated if DOD improved the information and assistance provided them. Reservists are confronted with choices and circumstances that are more complex than those faced by active duty personnel. Their decisions about health care are affected by a variety of factors—length of orders, where they and their dependents live, whether they or their spouses have civilian health coverage, and the amount of support civilian employers would be willing to provide with health care premiums. In addition, reservists must determine whether their existing civilian providers would be willing to accept TRICARE while they are mobilized since their desire not to disrupt these relationships during a temporary mobilization may outweigh other considerations. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to ensure that reservists, as part of their ongoing readiness training, receive information and training on health care coverage available to them and their dependents when mobilized and provide TRICARE assistance during mobilizations targeted to the needs of reservists and their dependents. DOD reviewed and commented on a draft of this report. It concurred with the report’s recommendation and generally agreed with its findings. DOD stated that it recognized the importance of a well-informed TRICARE beneficiary population and to that end has already taken a number of steps to ensure that reservists understand their health care benefits. For example, the TRICARE Management Activity website and the Reserve Affairs portion of the Department of Defense website provide information about the health benefits available for reservists. Further, DOD stated it will continue to emphasize the importance of health care education and, as problem areas are identified, will immediately take steps to correct them. DOD’s comments are reprinted in appendix III. DOD provided additional comments from the Department of the Army and technical comments from the TRICARE Management Activity and from the Office of the Assistant Secretary of Defense for Reserve Affairs. The Army took exception to some of the information presented in the report that was obtained from DOD’s 2000 Survey of Reserve Component Personnel. The Army stated that the number of reservists who continued to retain their civilian health care coverage “seems exceptionally high” although they could provide no basis to support this claim. Nevertheless, because of their concern, we subsequently contacted DOD officials at the Defense Manpower Data Center, who were responsible for the survey, to reconfirm the information they provided. After we explained the Army’s position to them, they reaffirmed that the data from the survey instrument were correct. They stated that for the period covered by this survey prior to the 2001 partial mobilization there was no reason to question the accuracy of the estimate. The Army also asked for other analyses, such as a cost- benefit analysis of various TRICARE demonstration programs that were beyond the scope of our work. Technical corrections and clarifications have been incorporated into the text as appropriate. 
We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. Copies will also be made available to others on request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7101. Other contacts and major contributors are listed in appendix IV. To determine whether reservists had health coverage when not on active duty and the source of any civilian coverage, we obtained analyses from the Department of Defense’s (DOD) 2000 Survey of Reserve Component Personnel. Although all survey questions had not been analyzed, we obtained information from DOD on selected questions for which survey processing had been completed. Because DOD had not yet completed processing for all questions, we were unable to obtain a more thorough DOD analysis or to obtain data for our own analyses. Using the analyses DOD provided, we were able to do limited checks for consistency of results, but, for the most part, we were not able to verify the accuracy of DOD’s data. To learn about the type of civilian health care coverage reservists and their dependents have and the extent to which mobilizations caused disruptions in coverage, we obtained information from 286 mobilized, or recently mobilized, reservists from three judgmentally selected reserve units— representing the Army, Navy, and Air Force. We selected these units with the help of DOD personnel using two criteria: (1) the unit consisted of at least 50 reservists and (2) at the time of our audit work, the unit was mobilized or had recently completed a mobilization and was drilling. We visited these sites and administered a questionnaire to identify the types and volume of problems that reservists and their dependents were experiencing with health care coverage. Sometimes we used the questionnaire as a structured interview guide and administered it to individuals; more frequently, reservists completed the questionnaires in a group and spoke with us individually afterwards if they had issues they wanted to discuss. During these visits, we also interviewed unit commanders, personnel responsible for mobilization activities, TRICARE personnel, and medical staff, when available. We also used our questionnaire as a guide in conducting a telephone survey of an additional 74 reservists or their family members. We obtained a randomized list of reservists who had been mobilized during the period July 2000 through December 2001, along with the sampled reservists’ home addresses and telephone numbers, from DOD’s Defense Manpower Data Center. We first excluded from the sample those reservists whose records lacked both addresses and telephone numbers; then proceeded in order from the first name on the list, either calling the telephone number provided or attempting to locate a telephone number using the name and address. When we were not able to obtain a telephone number or when the telephone number given to us had been disconnected or was determined to be inaccurate, we also excluded that reservist. Of 100 reservists whom we were able to contact or leave messages for, we ultimately completed an interview with 74 reservists or family members. The remaining 26 reservists either did not return our calls or refused to participate in our survey. 
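As an illustration of the winnowing steps described above, the following sketch tallies a randomized contact list using the same exclusion rules. The record fields and tally categories are illustrative, not the actual Defense Manpower Data Center data layout.

```python
# A minimal sketch of the telephone-survey winnowing described above.
# Records and field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReservistRecord:
    name: str
    address: Optional[str]
    phone: Optional[str]
    reachable: bool = False          # number found and in service (illustrative flag)
    completed_interview: bool = False

def survey_tally(randomized_list: list) -> dict:
    """Apply the exclusion and contact rules described above, in list order."""
    tally = {"excluded": 0, "contacted": 0, "completed": 0, "no_response_or_refused": 0}
    for record in randomized_list:
        if not record.address and not record.phone:
            tally["excluded"] += 1       # record lacked both an address and a phone number
            continue
        if not record.reachable:
            tally["excluded"] += 1       # number missing, disconnected, or inaccurate
            continue
        tally["contacted"] += 1
        if record.completed_interview:
            tally["completed"] += 1
        else:
            tally["no_response_or_refused"] += 1
    return tally

example = [
    ReservistRecord("A", "123 Main St", "555-0100", reachable=True, completed_interview=True),
    ReservistRecord("B", None, None),
]
print(survey_tally(example))
```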
Finally, we interviewed officials in the offices of the Assistant Secretary of Defense for Reserve Affairs and the Assistant Secretary of Defense for Health Affairs; the TRICARE Management Activity; the National Guard Bureau; the Department of Labor; representatives of the Army, Navy, and Air Force Reserve Components; and reservist advocacy groups, including the Enlisted Association of the National Guard of the United States, the National Guard Association of the United States, the National Military Family Association, the Ohio Air National Guard, the Reserve Officers Association, the Retired Officers Association, and the Retired Enlisted Association. We also reviewed our prior work on reservists and military health care. The Congressional Budget Office (CBO) calculated costs associated with options specified in the 2002 NDAA for providing coverage for reservists. We did not independently verify data used to calculate the cost estimates. See appendix II for CBO’s assumptions.

In calculating the cost estimates specified in the National Defense Authorization Act for Fiscal Year 2002 for providing health care coverage to reservists, CBO used the following basic assumptions: The estimates were based on 865,000 reservists, unless otherwise indicated. The benefit would start in January 2003. The percentage of the reserve force with dependents is 50.42 percent. Reservists with dependents each have about 2.17 dependents. Inflation would be 8.5 percent in 2003, 7.5 percent in 2004, and 6.5 percent in the remaining years. The 14 percent of reservists who were federal employees were excluded from the estimates because they presumably have health insurance coverage under the Federal Employees Health Benefits Program (FEHBP). The specific assumptions used to develop each benefit option are discussed below.

TRICARE (no premium): Ninety percent of reservists would take advantage of this option. Reservists and their dependents would use TRICARE-approved civilian physicians with little use of military treatment facilities (MTF). TRICARE costs were weighted from FEHBP costs, assuming that reservists cost about 40 percent of the FEHBP premium and families cost about 60 percent. TRICARE costs were estimated at $1,513 for a single reservist and $5,173 for a family during 2003. Costs of TRICARE Prime and TRICARE Standard are the same. Some beneficiaries would use TRICARE as second-payer insurance. (The 14 percent of reservists who presumably were enrolled in FEHBP was used as a proxy for this purpose.) Second-payer costs were 25 percent of the regular TRICARE costs. Reservists would enroll over a 3-year phase-in period.

TRICARE with premium similar to that paid by active duty retirees under age 65: Premium consists of $230 per year for individuals and $460 per year for families. Seventy percent of reservists would enroll in TRICARE under these conditions. Reservists and their dependents would use TRICARE-approved civilian physicians with little use of MTFs. TRICARE costs were weighted from FEHBP costs, assuming that reservists would cost about 40 percent of the FEHBP premium and families would cost about 60 percent of the FEHBP premium. TRICARE costs were estimated at $1,513 for an individual and $5,173 for a family during 2003. Costs of TRICARE Prime and TRICARE Standard are the same. No second-payer costs exist. Reservists would enroll over a 3-year phase-in period.

TRICARE with premium-share equal to that of FEHBP: Reservists would pay 28 percent of premium costs, which is similar to the percentage of FEHBP premiums paid by civilian federal employees.
Fifty percent of reservists would enroll in TRICARE under these conditions. Reservists and their dependents would use TRICARE-approved civilian physicians with little use of MTFs. TRICARE costs were weighted from FEHBP costs, assuming that reservists cost about 40 percent of the FEHBP premium and families cost about 60 percent. Cost for an individual would be $1,513 and cost for a family would be $5,173 during 2003. Costs of TRICARE Prime and TRICARE Standard are the same. No second-payer costs exist. Reservists would enroll over a 3-year phase-in period.

FEHBP (no premium): Ninety percent of reservists would enroll in this program. DOD would pay the employee’s share of the premium for the 14 percent of reservists who presumably were enrolled in FEHBP. Blue Cross/Blue Shield and Kaiser Permanente premiums were used to calculate costs. The estimated average annual cost was $3,760 for individuals and $8,718 for families during 2003. Reservists would enroll over a 3-year phase-in period.

FEHBP (regular premium): Seventy percent of reservists would enroll in FEHBP if they had to pay the employee’s share of the premium. No cost was included for the 14 percent of reservists who presumably are enrolled in FEHBP. Average premiums for individuals and families were based on data provided by FEHBP actuaries. During 2003, the estimated cost for an individual would be $3,670 with DOD paying about 71 percent, and cost for a family would be $8,635 with DOD paying about 73 percent. Reservists would enroll over a 3-year phase-in period.

Pay for civilian health coverage during mobilization: Costs are based on 80,000 reservists—the approximate number mobilized in April 2002. No cost was included for the 14 percent of reservists who presumably are enrolled in FEHBP. Ninety percent of reservists would enroll in the program. Average costs of the employee premium and the employer’s share were based on Kaiser Family Foundation data. During 2003, cost for an individual would be $2,877 with DOD paying 86 percent, and cost for a family would be $7,656 with DOD paying 74 percent. There is no phase-in period.

Provide voucher for civilian insurance: Costs are based on 80,000 reservists—the approximate number mobilized in April 2002. The voucher could be used to pay for any current health insurance coverage, including both the employee’s and employer’s share. FEHBP enrollees would not receive vouchers. Ninety percent of reservists would use vouchers. Voucher costs were based on 2003 estimated TRICARE costs of $1,513 for individuals and $5,173 for families. (TRICARE costs were weighted from FEHBP costs, assuming reservists would cost about 40 percent of the FEHBP premium and families would cost about 60 percent.) The voucher could not be used to cover the cost of second-payer insurance—it covers only primary insurance. There is no phase-in period.

Extend/offer transition period following demobilization: Costs are based on 80,000 reservists—the approximate number mobilized in April 2002. Forty percent of demobilized reservists would use this option. No cost was included for the 14 percent of reservists who presumably were enrolled in FEHBP. Reservists would use TRICARE-approved civilian physicians with little use of MTFs. TRICARE costs were weighted from FEHBP costs (assuming reservists would cost about 40 percent of the FEHBP premium and families would cost about 60 percent of the FEHBP premium). All reservists were eligible regardless of existing insurance coverage. The benefit for the reservist is only 30 days, since the first 60 days are currently covered. Dependents would be covered for 90 days.
There is no phase-in period. In addition to those named above, the following staff members made key contributions to this report: Aditi Archer, Richard Wade, Julianna Williams, Mary W. Reich, and Karen Sloan.

To expand the capabilities of the nation’s active duty forces, the Department of Defense (DOD) relies on the 1.2 million men and women of the Reserve and National Guard. Currently, reserve components constitute nearly half of the total armed forces. Although DOD requires reservists to use TRICARE, DOD’s health care system, for their own health care, using TRICARE is an option for their dependents. Nearly 80 percent of reservists had health care coverage when they were not on active duty, according to a GAO survey. The most frequently cited sources of coverage were civilian employer health plans and spouses’ employer health plans. Few dependents of mobilized reservists experience disruptions in their health coverage—primarily because most maintained civilian health coverage while reservists were mobilized. Ninety percent of the reservists with civilian health coverage maintained that coverage. The 5-year costs of the coverage options delineated in the 2002 National Defense Authorization Act range from $89 million, for expanding the transition benefit following mobilizations, to $19.7 billion, for continuous coverage under the Federal Employees Health Benefits Program, as estimated by the Congressional Budget Office.
VA’s Office of National Veterans Sports Programs and Special Events’ mission is to motivate, encourage, and sustain participation and competition in adaptive sports among veterans and members of the Armed Forces with disabilities. This is to be accomplished through collaboration with VA clinical personnel as well as national and community-based adaptive sports programs. This office is responsible for the VA Paralympics program’s administration, including the grant award process, grant oversight, distribution of any monthly assistance allowances to eligible athletes, and program outreach. For fiscal years 2010 through 2012, the federal law authorizes appropriations of $2 million for monthly assistance allowances for competitive athletes in training and $8 million for grants to USOC. The grant program will need to be reauthorized to continue in fiscal year 2014. VA officials stated that, in the first years of the Paralympics program—fiscal years 2010 and 2011—$10 million in federal funds was made available for each year. Prior to receiving this initial funding, the office had a troubled beginning. In February 2010, VA’s Inspector General (IG) found that the office’s initial director had abused agency resources and obstructed the IG investigation. Since then, VA officials reported that they hired a new office director, restructured the office housing the program, and addressed these issues of malfeasance.

The USOC is a non-profit organization that serves as the National Olympic and Paralympic Committees and, as such, is responsible for training, entering, and funding U.S. teams for the Olympic and Paralympic Games. Furthermore, the organization has a well-established history of providing adaptive sports opportunities to people with disabilities. To reach veterans and servicemembers throughout the United States, USOC subgrants VA funding to national and community organizations that provide adaptive sports opportunities. (See figure 1 for the VA Paralympics program’s organizational chart, including external organizations and individuals who receive program funds.) The categories of subgrantees are:

National Partners: national organizations that offer camps, clinics, and on-going programs for veterans and servicemembers with disabilities through local chapters. Individual annual subgrant amounts range from $100,000 to $500,000.

Athlete Development subgrant recipients: organizations that conduct a national network of camps and clinics to provide opportunities for veterans and servicemembers with disabilities to receive sport-specific instruction and assessment. These opportunities help participants meet U.S. Paralympics standards of performance for emerging athletes. Individual annual subgrant amounts range from $15,000 to $300,000.

Model Community Partners: community organizations that provide leadership in various geographic regions for promoting adaptive sports and to help increase regional capacity for Paralympic sports. These organizations are allowed to further subgrant funds to other local organizations to provide direct services. Individual annual subgrant amounts range from $2,500 to $175,000.

Olympic Opportunity Fund recipients: community organizations that aim to bring adaptive sports opportunities to their local communities. Individual annual subgrant amounts range from $5,000 to $45,000.

As a condition of receiving these funds, USOC must permit VA to conduct the oversight VA determines is appropriate.
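The four subgrant categories and their annual award ranges described above can be summarized as a simple lookup with a range check. This is purely an illustration; the data layout and function name are not USOC's.

```python
# The four USOC subgrant categories and annual award ranges described above,
# with a simple range check. Category names and amounts come from this report;
# the code structure itself is only illustrative.
SUBGRANT_RANGES = {
    "National Partner": (100_000, 500_000),
    "Athlete Development": (15_000, 300_000),
    "Model Community Partner": (2_500, 175_000),
    "Olympic Opportunity Fund": (5_000, 45_000),
}

def award_in_range(category: str, amount: int) -> bool:
    low, high = SUBGRANT_RANGES[category]
    return low <= amount <= high

# Example: a $35,000 Olympic Opportunity Fund award falls within its range.
print(award_in_range("Olympic Opportunity Fund", 35_000))
```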
Furthermore, the federal law requires USOC to submit to the Secretary of VA an annual report detailing its use of grant funds. The reports are to include the number of veterans who participated in the adaptive sports program and the administrative expenses. USOC provided this first annual report to VA in November 2011. In turn, VA is required to report annually to Congress on the use of program funds for each year the Secretary makes grants to USOC. Additionally, VA and USOC officials agreed that USOC would submit quarterly progress reports throughout the year. During fiscal year 2011, USOC provided quarterly reports that included descriptions of activities conducted by subgrantees, the number of veterans and servicemembers served in the activities, and anecdotal information on how participants benefited from activities. Internal controls are defined as an integral component of an organization’s management that provides reasonable assurance that the following objectives are being achieved: effectiveness and efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. Internal control, which is synonymous with management control, helps government program managers achieve desired results through effective stewardship of public resources. For more information about GAO and OMB’s internal control frameworks, see Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1 (Washington, D.C., November 1999) and OMB Circular A-123 Revised. In the first 2 years of the Paralympics program, VA granted most of its available funds for the program to USOC, but inconsistent with federal internal controls standards for reporting relevant and reliable information, we found weaknesses in VA’s administrative and personnel expenditure reporting. In fiscal year 2010, VA allotted $10 million for the Paralympics program. VA granted $7.5 million to USOC and obligated about $400,000 to contract with a consulting firm to design an outreach strategy for informing eligible participants about the program. It is not clear, however, how much money VA spent, in total, on administrative and personnel costs associated with this program in fiscal year 2010, primarily because VA did not closely track these costs until midway through fiscal year 2011. VA officials told us that the Office of National Veterans Sports Programs and Special Events did not have a full-time program director and was not fully operational until about midway through fiscal year 2011, and as a result, VA did not establish accounting codes for the Paralympics program until that time. VA officials also said that some administrative and personnel costs were charged to other VA programs as general expenses, and therefore cannot be traced back to the Paralympics program. In addition, VA officials said they were unable to obligate the full amount of fiscal year 2010 funds available to athletes’ monthly assistance allowances before the end of the fiscal year, due to the delays in establishing the program. In fiscal year 2011, VA once again allotted $10 million for the Paralympics program. VA obligated a total of about $8.9 million, of which $7.5 million was granted to USOC. The remainder was spent on athletes’ monthly assistance allowances as well as agency administrative and personnel costs. (See table 1.) VA planned to spend about $1.1 million in 2011 on monthly assistance allowances to assist competitive athletes with their training, but ultimately spent about $675,900. 
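The planned-versus-actual figures above (for example, about $1.1 million planned versus about $675,900 spent on monthly assistance allowances) illustrate the kind of variance tracking that dedicated accounting codes make possible. The sketch below uses figures cited in this report, but the line-item layout is assumed and is not VA's actual account structure.

```python
# A minimal sketch of planned-versus-actual tracking for fiscal year 2011
# line items cited in this report; the layout is illustrative only.
fy2011_plan_vs_actual = {
    # line item: (planned, actual), in dollars
    "Grant to USOC": (7_500_000, 7_500_000),
    "Monthly assistance allowances": (1_100_000, 675_900),
    "Administrative and personnel": (334_500, None),  # planned figure discussed below; actuals were not fully tracked
}

def variance_report(items: dict) -> None:
    for name, (planned, actual) in items.items():
        if actual is None:
            print(f"{name}: planned ${planned:,}; actual not yet reconciled")
            continue
        diff = planned - actual
        pct = 100 * actual / planned
        print(f"{name}: planned ${planned:,}, actual ${actual:,} "
              f"({pct:.0f}% executed, ${diff:,} unspent)")

variance_report(fy2011_plan_vs_actual)
```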
According to VA officials, fewer athletes than expected were able to apply for the allowance, so the Paralympics program returned some portion of the remaining funds to VA’s Office of Public and Intergovernmental Affairs general expense account. USOC officials explained that the monthly information an athlete must submit to obtain an allowance, such as a detailed training log, can be burdensome. USOC is working with VA to develop a new online reporting tool to help ease the burden of this monthly reporting requirement in an effort to encourage greater participation. VA also planned to spend about $334,500 on fiscal year 2011 administrative and personnel costs associated with the Paralympics program, the majority of which was for contracted services. Specifically, within its administrative costs, VA contracted with a consulting firm to provide grants management services, including assistance with developing grant agreements, providing technical assistance, and developing performance measures for grantees. The contractor did not use all of the funding obligated for this contract, and returned to VA approximately $98,500. According to VA, $11,753 in fiscal year 2011 funds was spent on personnel for the Paralympics program. However, the salaries for the Director of the Paralympics program and other personnel who contributed to establishing the program are not fully reflected in these personnel costs; those salaries were funded through other VA programs because separate accounting codes for the Paralympics program were not established until midway through fiscal year 2011. Indeed, the Director of the Paralympics program was paid out of funds from the Office of National and Special Events. As a result, only those expenses that were incurred after the codes were established were reported on VA’s budget for the Paralympics program. In fiscal year 2012, VA’s total personnel and administrative costs are projected to increase to about $2.2 million as the fiscal year 2012 budget will now reflect activities and personnel from the Office of National and Special Events, which has been consolidated into the Office of National Veterans Sports Programs and Special Events. Specifically, in addition to the Paralympics activities, the Office of National Veterans Sports Programs and Special Events now funds five additional staff who travel to and administer six separate national events, as well as develop related outreach literature for these events. USOC was awarded a 1-year grant of $7.5 million by VA in fiscal year 2010, to be used during fiscal year 2011. USOC was also awarded a 1-year grant of $7.5 million in fiscal year 2011 to be used during fiscal year 2012. In fiscal year 2011, USOC subgranted approximately $4.4 million to organizations to provide adaptive sports opportunities and used the remaining $3.1 million for its operations and personnel and administrative costs. Some of USOC’s subgrantees did not use all of the funding they received for their adaptive sports programs, so they collectively returned about $50,000 to USOC. VA officials reported that USOC returned these remaining funds to VA. In fiscal year 2011, about half of the $1.5 million USOC spent on operations went towards outreach and awareness efforts as well as program support for the Paralympics sport programs, while the remaining half was used to provide training and technical assistance.
For example, USOC works with VA medical centers and local organizations to help them develop relationships and expand opportunities for veterans and servicemembers with disabilities to engage in adaptive sports. USOC’s program budget shows that operations costs are projected to decrease to $1.1 million in fiscal year 2012. Fiscal year 2011 was the first year that USOC implemented a VA grant program. USOC officials told us the decrease in 2012 operations costs reflects the fact that, in fiscal year 2011, there were significant upfront, one-time costs to build the foundation of the program, such as designing outreach materials and regional training. USOC officials also said “lessons learned” from their experiences during that first year will allow them to plan more effectively going forward. USOC reports personnel costs separately from administrative costs. In fiscal year 2011, USOC spent about $1.3 million on personnel costs. Specifically, this funding went toward salaries, Social Security taxes, Medicare withholdings, and benefits for 17 program staff and additional temporary staff. The salaries of program staff ranged from about $20,000 to $175,000 and covered positions including administrative assistants, coaches, grant managers, and the program director, among others. Further, in addition to having staff who are dedicated to administering the program, USOC has staff dedicated to the outreach and technical assistance efforts described above; they are responsible for designing and implementing USOC’s outreach materials and facilitating conferences, regional meetings/trainings, and other training and education activities for veterans. In fiscal year 2012, USOC projects spending about $1.9 million to pay for the salaries of 12 program staff. As a percent of its budget, USOC’s personnel costs are projected to increase from 18 percent in fiscal year 2011 to 26 percent in fiscal year 2012. USOC officials said that the increase is due to its fiscal year 2011 grant with VA spanning a 17-month period of performance. This differs from the first grant USOC received, which was for a 12-month period of performance. In contrast, USOC’s allocated administrative costs are projected to decrease from $253,000 in fiscal year 2011 to $0 in fiscal year 2012. Specifically, in fiscal year 2011 these funds went toward indirect costs such as rental expenses, supplies, event expenses, and utilities. A USOC official told us that they chose not to allocate any administrative costs in fiscal year 2012 because they want to allocate as much funding as possible to programming that directly serves veterans. USOC reported that it plans to pay for administrative costs associated with the VA program through other funding sources. Subgrantees reported using funds to provide opportunities in a range of activities—through camps, practice/trainings, and competitions—across 29 adaptive sports. Cycling/handcycling and skiing were the most common activities. (See figure 2.) Some subgrantees received more than one type of grant depending on the nature of the adaptive sports programming they planned to offer. For example, one subgrantee reported using Olympic Opportunity funds to provide cycling and sled hockey activities, such as weekly hand cycling clinics and training sessions on adaptive equipment. Another subgrantee reported using Olympic Opportunity funds to hold a weekly power-lifting group, an indoor kayaking clinic, and a judo clinic.
The remaining 22 subgrants generally were larger, ranging from $15,000 to $500,000, and were provided to Athlete Development organizations and National and Model Community Partners. These organizations provided a wider range of activity types through national networks, local chapters, or community organizations in various geographic regions. For example, one Athlete Development subgrantee reported holding an outreach clinic at a nearby VA hospital where they educated staff and potential participants about adaptive sports options and hosted a ski camp which included a sit-ski clinic, a race, and strength and conditioning training for a cross-country skiing marathon. In addition, a National Partner subgrantee reported using its funds to train and educate staff and program leaders on adaptive and Paralympic sports, and to conduct outreach and recruitment campaigns. This organization also awarded and administered subgrants to some of its local chapters to hold archery, cycling, fencing, wheelchair basketball, and swimming activities, among others. All subgrantees’ grant agreements required them to report how VA funds were used to cover program expenses. As part of their agreements, subgrantees provided a projected budget detailing plans to spend program funds in six categories: personnel, operations, equipment, supplies, travel, and administrative costs (see appendix II for USOC’s cost definitions for subgrantees’ use of funds). Although subgrantees could not spend more than 10 percent of the total amount granted on administrative costs, no other category had spending limits. Of the approximately $4.4 million USOC awarded for fiscal year 2011, subgrantees projected using over half, or about $2.6 million, on operations and personnel costs. Subgrantees projected that these costs would remain about the same for fiscal year 2012. (See figure 3.) Inconsistent with federal internal control standards, we found that during the first program year, USOC did not have reporting requirements or electronic reporting systems in place for subgrantees to provide information on how VA funds were used separate from other sources. As a result, there was a lack of reliable information on actual expenditures. Our review of a sample of subgrantees’ files raised concerns with how subgrantees reported their actual expenditures. In 7 of the 21 subgrantees’ files we reviewed, it was difficult to determine how the subgrantees spent their grant funding, and 4 out of these 7 files reported spending more than they were actually granted by VA to provide adaptive sports activities. For example, in one case, a subgrantee received a grant for $35,000 but reported expenditures to USOC in excess of $89,000. In reviewing these files, we found that USOC had not determined what portion of the subgrantees’ program costs were funded with VA dollars. While USOC provided a template to each subgrantee to report quarterly expenditures, the template did not explicitly request that a subgrantee only provide information on how VA funds were spent. When asked if they reconciled a subgrantee’s reported expenditures with planned expenditures, USOC officials told us they generally did not have enough time between the receipt of a subgrantee’s quarterly budget and the deadline to submit a quarterly report to VA to ensure that the budget information was accurate. 
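The reporting problems described above suggest two checks that could be automated when quarterly reports are received: reported spending should not exceed the award, and administrative costs should stay within the 10 percent cap. The sketch below is illustrative only; the category and field names are not USOC's.

```python
# A sketch of two automated checks suggested by the findings above: reported
# spending should not exceed the award, and administrative costs should stay
# within 10 percent of the award. Field names are illustrative, not USOC's.
AWARD_CATEGORIES = ("personnel", "operations", "equipment", "supplies",
                    "travel", "administrative")
ADMIN_CAP = 0.10   # administrative costs limited to 10 percent of the award

def check_quarterly_report(award: float, reported: dict) -> list:
    """Return flags for a subgrantee's cumulative reported VA-funded expenditures."""
    flags = []
    unknown = [c for c in reported if c not in AWARD_CATEGORIES]
    if unknown:
        flags.append(f"unrecognized categories: {unknown}")
    total = sum(reported.values())
    if total > award:
        flags.append(f"reported ${total:,.0f} against a ${award:,.0f} award")
    if reported.get("administrative", 0) > ADMIN_CAP * award:
        flags.append("administrative costs exceed the 10 percent cap")
    return flags

# Example modeled on a case cited above: roughly $89,000 reported against a
# $35,000 award (the category split shown here is hypothetical).
print(check_quarterly_report(35_000, {"operations": 60_000, "personnel": 29_000}))
```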
This lack of follow-up is inconsistent with OMB internal control guidance, which provides that management should regularly reconcile and compare data within the normal course of business. USOC officials said they were aware of reporting problems and, at the beginning of fiscal year 2012, were developing and implementing an electronic system that will allow subgrantees to report quarterly expenditures online, among other things. USOC officials told us such a system should help them process the reports more quickly. However, they acknowledged the system will not include controls to ensure subgrantees report only those costs specific to the VA grant. VA officials told us that they are aware that this electronic system has limitations and they have directed USOC to make the necessary improvements. In addition, VA reported that it provided guidance on how USOC could improve data processes, and USOC has agreed to send its grant management staff to training, both in an effort to enhance USOC’s data reporting capabilities. VA lacked information on how USOC and subgrantees used funds due to its reliance on self-reported, unverified quarterly reports from USOC. Inconsistent with federal internal control standards, VA did not independently review or verify how grant funds were used; VA officials attributed this to a lack of staff to oversee the program in fiscal year 2011. In fact, VA did not hire staff dedicated to managing the Paralympics program until it had already granted funds to USOC. Specifically, in September 2010, VA and USOC established a memorandum of agreement for the grant, but a Paralympics program director was not hired until February 2011. The director, with the assistance of interns, reported spending the rest of the fiscal year finalizing the office’s outreach campaign, administering the monthly assistance allowance program, and processing USOC’s fiscal year 2011 grant application. According to VA officials, another VA Paralympics staff person was hired in September 2011. Furthermore, with the establishment of the Paralympics program, agency officials stated that grant management became a new administrative responsibility for the VA Office of Public and Intergovernmental Affairs as well as USOC and the subgrantees, and all of these program stakeholders needed time to learn about appropriate oversight mechanisms. USOC officials told us their quarterly reports were primarily based on the quarterly reports they obtained from subgrantees, and therefore, the available information VA had for oversight may not have been accurate. USOC officials managing the grant program did not conduct any separate reviews to verify the information provided to them by subgrantees, and as mentioned earlier, did not make efforts to reconcile expenditure data in these quarterly reports as they were submitted. To gain additional information about how subgrantees were managing funds in the first program year, USOC’s Audit Division selected 2 of the 65 subgrantees, based on risk-related criteria, for review in the fall and winter of 2012, and grant management officials plan to use the information from those audits to develop future plans for oversight. Our review of a sample of USOC’s files on subgrantees showed that USOC officials were not holding subgrantees accountable for meeting the terms of their subgrant agreements—a grant management problem that VA was not in a position to know about, given its lack of oversight.
We found that many subgrantee files lacked information on the status of their grant expenditures and the activities the subgrantees agreed to conduct. USOC reported using a process in which quarterly reports are checked against subgrantees’ agreements to ensure completeness of agreed-upon activities. However, in 12 of the 21 subgrant files we reviewed, we did not find evidence that the subgrantees conducted all agreed-upon activities. For example, one National Partner had agreed to develop 27 programs related to handcycling, bowling, and trapshooting, but the reports we found mentioned that only 18 programs had been completed. Another National Partner agreed to conduct 10 activities related to outreach, introducing adaptive sports at VA clinics, and identifying athletes for higher levels of competition, but the case file indicated only 4 of these activities had been completed. Furthermore, in 11 of the 12 files, we did not find any documentation explaining why the planned activities did not occur, nor did we find written permission from USOC to change the scope of agreed-upon activities. In 5 of the 21 files we reviewed, we found that subgrantees transferred more than 20 percent of funds from one budget category to another without the written permission of USOC, as required by their grant agreements. For 2 of these files, we identified significant issues with the subgrantees’ financial management and reporting. Specifically, 1 file belonged to a subgrantee that received a $400,000 grant and was one of the organizations subjected to an audit by USOC’s Audit Division. The division officials found that, in addition to making unallowed transfers, the subgrantee had instances of non-compliance with OMB Circular A-122’s Cost Principles for Non-Profit Organizations (including unexplained personnel and administrative charges by five employees), did not consistently document and communicate requirements and responsibility related to the VA funds it subgranted to its member chapters, and did not clearly and formally document its methodology for determining and allocating administrative costs to the VA grant. Another subgrantee who made unallowed budget transfers also reported purchasing a van without the written permission of USOC, which is required by the grant agreement prior to making equipment purchases exceeding $5,000. This same subgrantee also received another $35,000 VA-funded grant for which it did not submit required expenditure reports. VA officials recognized that their grant oversight has been limited and report that improvements are under development. In December 2011, VA established a monitoring plan that identifies the specific information USOC should report and requires USOC to establish a similar plan to oversee subgrantees. Specifically, VA’s plan requires USOC to submit quarterly reports and an annual report that include summary data on the activities provided, number of veterans served, levels of expended and unexpended funds, and available assets, among other information. VA’s plan does not, however, require on-site or remote evaluations of USOC and the subgrantees, nor a review of USOC’s monitoring outcomes for subgrantees. When asked why such monitoring was not included in the plan, VA officials stated that they recognize that a lack of separate evaluations is a gap in their oversight and are working to address this limitation. For example, in fiscal year 2012, VA officials reported conducting on-site visits of USOC to discuss various aspects of grant management.
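Monitoring of the kind described above could also flag the two grant-agreement terms we found violated in the files: budget transfers of more than 20 percent between categories and equipment purchases over $5,000, both of which require USOC's written permission. The sketch below is illustrative; treating the 20 percent threshold as a share of the originating category's budgeted amount is an assumption, since the report does not state the base explicitly.

```python
# A sketch of monitoring checks for two grant-agreement terms cited above.
# Treating the 20 percent threshold as a share of the originating category's
# budget is an assumption; the $5,000 equipment threshold comes from the report.
TRANSFER_THRESHOLD = 0.20
EQUIPMENT_THRESHOLD = 5_000

def flag_transfer(category_budget: float, amount_moved: float, approved: bool) -> bool:
    """True if a budget transfer needed written permission but none is on file."""
    return amount_moved > TRANSFER_THRESHOLD * category_budget and not approved

def flag_equipment_purchase(price: float, approved: bool) -> bool:
    """True if an equipment purchase needed prior written permission but none is on file."""
    return price > EQUIPMENT_THRESHOLD and not approved

# Example: an unapproved van purchase, like the one cited above, would be flagged.
# The price shown is hypothetical.
print(flag_equipment_purchase(price=28_000, approved=False))
```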
VA plans to conduct additional on-site reviews of USOC and selected subgrantees later in the year. Furthermore, officials are expecting that information from USOC’s subgrantee reviews will eventually be incorporated into subgrantee application packages, which they will review before finalizing future grant agreements. Also, after reviewing the first quarter reporting for fiscal year 2011 funds in January 2012, VA officials reported asking USOC officials to provide more information about whether subgrantees were providing deliverables within the agreed-upon timeframes. With input from VA, USOC finalized a monitoring plan of subgrantees in early 2012 that, in addition to reviewing subgrantee reports, will require USOC officials to audit financial data for subgrantees selected on risk- based criteria. USOC’s plan includes a checklist for reviewing and verifying information in these quarterly reports; the checklist mentions comparing the reports to the agreed-upon activities and conducting remote and on-site reviews. USOC aims to conduct enough site visits to review the use of half of all fiscal year 2011 funds. Furthermore, USOC plans to use risk-based criteria to select subgrantees for remote audits of financial data; these criteria will include the size of the subgrant award, additional granting of VA funds to other entities by the subgrantee, and the absence of a current audit report. The remote audits will include reviewing the subgrantees’ ledgers and comparing them to what was submitted in the quarterly reports and reviewing documentation that supports selected transactions to ensure that they are compliant with OMB guidance. However, given that we found that USOC was not holding subgrantees accountable—despite having an oversight process in place—VA will need to ensure its monitoring efforts include overseeing the implementation of USOC’s plans. We found that many subgrantees and participants reported benefits from VA’s Paralympics program. Subgrantees primarily reported anecdotal information on program benefits in their quarterly reports to USOC, and USOC then provided some of these examples in their quarterly reports to VA. This anecdotal information included participant success stories, testimonials, and related news articles, and was consistently positive with regards to the program’s value in the first year. (See figure 4 for examples.) During our site visit to an adaptive sports event in Chicago, veterans we spoke with also told us how adaptive sports programs have improved their mental and physical health. All six veterans in one group interview agreed that the greatest benefit was to their mental health; they believe that adaptive sports are a tool that helps them deal with depression. They also said that participating in group activities with other veterans with disabilities made them feel less isolated in their challenges. Other veterans said that they had experienced social benefits, including a boost to their self-esteem; one veteran described how he developed long-term friendships during the competitions, and another described how these events show veterans that they can be physically active despite their disabilities. Veterans also told us that competitions motivated them to stay active on an on-going basis and improved their overall physical health. For example, some veterans said regular participation in athletic activities made them physically stronger in their remaining limbs and had improved their balance and dexterity. 
One veteran in particular told us that he had lost 68 pounds in 4 months due to his regular participation in Paralympic program activities. While VA requires USOC and its subgrantees to count the number of adaptive sports activities conducted and the number of participants served, these measurements are not always accurate. In its fiscal year 2011 annual report to VA, USOC stated that over 10,000 veterans and servicemembers participated in nearly 2,000 activities. However, VA officials acknowledged that the participation and activity data are flawed. Indeed, in USOC’s quarterly reports to VA, USOC stated there is some double counting of unique veterans/servicemembers and activities due to partnerships and collaboration among the Paralympic community. For example, a veteran might attend activities sponsored by different subgrantees, and each subgrantee might then include that same veteran in their separate count. The extent of this double counting is unknown due to a lack of a systematic review of the activity and participant counts. Although VA is required by law to report annually to Congress on the number of veterans who participated in adaptive sports activities and the administrative expenses, it has yet to do so. Additionally, in our review of a sample of 21 subgrantee reports, we found some inconsistencies with how subgrantees count program participants and activities, further diminishing the reliability of these data. It was difficult to determine, in fact, how 16 out of the 21 were counting their participants or activities as many organizations had different interpretations of what qualified as a participant or activity. For example, 8 out of the 21 subgrantee reports counted activities that did not have veteran or military participants—some of them even counted purchases of equipment as activities. Also, 6 out of the 21 subgrantees administered more than one type of a VA Paralympics grant and submitted reports where it was difficult to determine which grant corresponded with which counts of activities and participants. While subgrantees and program participants reported program benefits, VA has not yet systematically measured how adaptive sports activities specifically benefitted the health and well-being of veterans and servicemembers. A couple of USOC’s subgrantees conducted surveys asking for feedback on specific events or activities, but VA has not conducted a program-wide survey or study to collect information about the various events and their benefits. Absent this measurement, VA largely relies on the anecdotal information supplied by subgrantees and program participants. Moreover, VA officials recognize that participant and activity counts do not comprehensively measure how participation in adaptive sports can improve a person with disabilities’ quality of life, including improved physical health, enhanced confidence and self- esteem, reduction in depression and improved relationships with family members and other members of the community. VA wants to improve its measurement of Paralympics activity benefits. VA and USOC have, in turn, taken the initiative to hire a contractor to conduct a study on the effects of adaptive sports on rehabilitation and reintegration of veterans and servicemembers into the community, including five life domains (self-care, mobility skills, communication with family and friends, participation in society, and acceptance of disability) and the psychosocial outcomes, including self-esteem and quality of life. 
The study will include a survey of participants in VA adaptive sports activities, with questions focusing on uncovering key life and goal-setting concerns of participants as well as employment and educational goals and opportunities. VA and USOC are expecting the contractor to provide a preliminary report to Model Community Partners by the end of September 2012. They have also planned for the final results of this study to be shared with internal and external audiences, including government agencies, the research community, and the general public. In addition, VA and USOC have tasked the contractor to conduct an assessment of the VA Paralympics program that will include identification of issues, trends, obstacles, and barriers, which will assist USOC and subgrantees with managing expectations and program performance. VA and USOC have required the contractor to provide an annual report on this assessment by November 2012. VA officials told us they have also been assisting the Paralympic Research and Sport Science Consortium with facilitating research in Paralympic and adaptive sports. VA officials stated that this research is focused on activities that would both enhance Paralympic sports and capabilities to provide rehabilitative opportunities to Veterans and members of the Armed Forces with disabilities. In addition, VA officials stated that they are seeking feedback from Paralympic and adaptive sport communities, academia, research institutions, and other entities to try to develop metrics to measure effectiveness. VA officials stated that the goals of its adaptive sports programming have changed in the past few years with the establishment of the Paralympics program. Prior to the Paralympics program and its current leadership, the Office of Public and Intergovernmental Affairs focused on engaging veterans and servicemembers with disabilities in a few Paralympic sport competitions it sponsored once a year. However, with the Paralympics program and the Office of National Veterans Sports Programs and Special Events in place, VA has expanded its goals to include veteran participation in local and community adaptive sports programs throughout the year and for on-going sports participation to have an impact on the veterans’ overall physical and emotional well-being. Further, VA is working with other VA entities to incorporate Paralympic and adaptive sports into rehabilitative whole-life programs for Veterans and members of the Armed Forces with disabilities. For example, VA officials stated that they worked with some of their subgrantees to develop adaptive sports program-related training webinars and other support materials for VA entities such as recreation therapists, centers for blind and visually impaired, and Community Living Centers. After veterans and servicemembers face life altering disabilities resulting from their service in the Armed Forces, the VA Paralympics program works to empower them to move forward in their next phase of life. In partnership with USOC and its subgrantees, VA has been able to introduce numerous participants to a variety of sports adapted for their physical conditions. Beyond providing access to recreational opportunities, veteran participants told us that adaptive sports have changed the way they think about their disabilities and provided them with opportunities to improve their physical health. As this program matures, it has the potential to provide greater access to adaptive sports and garner a wider range of benefits for participants. 
VA must, however, improve the program’s oversight and reporting to help ensure that program funds are used efficiently and effectively. Although USOC is planning various oversight initiatives and is implementing an electronic reporting system for subgrantees, we found that USOC’s past efforts at financial accounting, subgrantee oversight, and reporting on participation and activities were weak, resulting in gaps in program knowledge about how program funds were actually spent, whether all promised activities occurred, and how many people benefited from the activities. Without this information, VA and policymakers will struggle to make informed decisions about the program’s future. VA officials report that they are building a stronger oversight structure, but to the extent USOC’s weaknesses remain, VA may miss opportunities to better use program resources to motivate, encourage, and sustain participation and competition in adaptive sports among veterans and servicemembers with disabilities. To improve oversight within the VA Paralympics grant program, we recommend that the Secretary of Veterans Affairs direct the National Director of the Office of National Veterans Sports Programs and Special Events to take the following three actions:
1. Require USOC to modify reporting requirements that will:
   a. Direct subgrantees to include only VA Paralympics program funds in expenditure reports; and
   b. Provide a consistent methodology for how subgrantees should count their program activities and participants, including explicit instruction on what should and should not be counted as an activity or participant.
2. Ensure USOC adds controls to its electronic reporting system that will require subgrantees to identify how VA grant funds were used separately from other funding sources subgrantees use to support adaptive sports activities.
3. Review the implementation of USOC’s monitoring plan after a reasonable period to ensure planned efforts were conducted.
VA provided us with comments on a draft of this report, which we have reprinted in appendix III. In its comments, VA agreed with our recommendations and reported that efforts were underway to address each of them. Specifically, VA reported that USOC has already agreed to direct subgrantees to include only information on VA Paralympics program funds in expenditure reports. Furthermore, USOC has agreed to send its grant management staff to training in an effort to improve its data reporting. Regarding USOC’s electronic reporting system, VA reported that USOC has included the requirement that subgrantees identify how VA funds were used separately from other funding sources, and VA will review the system before it goes online during the fourth quarter of 2012. VA indicated that USOC also plans to provide training to subgrantees on how to appropriately report on grant funds during this quarter. In response to our recommendation on following up on USOC’s monitoring plan, VA reported that, in April 2012, it began meeting with USOC to improve USOC’s subgrant monitoring program, which now includes weekly conference calls with VA. VA also provided technical comments, which were incorporated into the report as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of the Department of Veterans Affairs, the Chief of Paralympics, USOC, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this report were to (1) review how VA and its grantee and subgrantees used program funds to provide adaptive sports opportunities to veterans and servicemembers; (2) assess how VA is overseeing grantees’ and subgrantees’ use of funds; and (3) describe how veterans and servicemembers have benefited from VA Paralympics activities. The mandated requirement to include a description of how the United States Paralympics, Inc. (which was superseded by the Paralympic Division of the United States Olympic Committee (USOC)) used grant funds from the Department of Veterans Affairs (VA) is addressed under the first research objective. The other mandated requirements, namely the number of veterans with disabilities who benefited from such grants and how such veterans benefited, were addressed under the third research objective. To address the three objectives, we analyzed both planned and final program budget expenditure data; reviewed program reports, guidance, and relevant federal laws and regulations; and interviewed VA and USOC officials and other program stakeholders about VA’s Paralympics program activities funded with fiscal years 2010, 2011, and 2012 appropriations. Specifically, to determine how VA and its grantees used program funds, we reviewed information on planned and actual program expenditures provided by VA and USOC and interviewed VA and USOC officials to better understand the purposes for which funds were used. VA started spending Paralympics program funds in fiscal year 2010 but did not have complete final expenditure information for that first program year. As a result, we discuss these incomplete expenditure data in our report findings. We obtained VA’s complete planned and final expenditure budget information on fiscal year 2011 funds, but we could report only planned expenditures for fiscal year 2012 because actual expenditures for that year had not yet been finalized at the time this report was written. USOC provided us data on subgrantees’ planned and final expenditures. Subgrantees’ planned expenditures were based on subgrant agreements made between USOC and the subgrantee, and final expenditures were based on data from quarterly reports submitted by each subgrantee to USOC. To determine the reliability of VA and USOC data on planned and actual program expenditures, we interviewed VA and USOC officials about their procedures for collecting and maintaining these data. In addition, we reviewed a nonprobability sample of 21 subgrant files to verify the accuracy of data reported to USOC and to better understand how USOC maintains these data. The sample included all types of USOC subgrants mentioned in the background of this report. Specifically, all of USOC’s National Partner subgrants were included, and if these National Partners were also awarded Olympic Opportunity Fund subgrants, those subgrants were also included. The sample also included Model Community Partners and other Olympic Opportunity Fund subgrantees, which were selected randomly after being stratified according to their subgrant type and USOC-designated geographic region.
Due to errors in the original list of subgrantees by subgrant type provided by USOC, our sample also included two Athlete Development subgrants. In total, the sample files contained information on 56 percent of the funds USOC provided in subgrants using fiscal year 2010 dollars. Due to problems we found with subgrantee reporting, we did not report information on subgrantees’ actual expenditures. (See the body of the report for more information.) We reported only those expenditure data we believe were sufficiently reliable for the purposes of our study. To determine how VA oversaw grantees’ use of funds, we interviewed VA and USOC officials and obtained and reviewed quarterly progress reports from USOC, examples of progress reports from subgrantees, and VA’s and USOC’s monitoring plans. Furthermore, we reviewed the same nonprobability sample of 21 subgrant files to obtain information on whether their activities were documented as required under USOC policies (mentioned in subgrant agreements, subgrant applications, and interviews with the organization’s officials) and as promoted by GAO’s guidelines for internal controls and the Domestic Working Group’s Grant Accountability Project’s promising practices. To determine how veterans and servicemembers have benefited from VA Paralympics program activities, we reviewed participant and activity counts in the same nonprobability sample of 21 subgrantee files maintained by USOC. We found issues with double counting of activities and participants, as well as issues with counting activities and participants not involving veterans or servicemembers. However, to be responsive to our mandate, we provided USOC’s total counts of activities and participants in the report, along with a discussion of why these numbers are not reliable. We reviewed USOC quarterly and annual program reports to VA. We also interviewed VA and USOC officials as well as subgrantees, regional stakeholders, and veteran program participants at a site visit to a VA adaptive sports program event in Chicago, Illinois, in August 2011. In addition to the contact named above, the following staff members made important contributions to this report: Brett Fallavollita, Assistant Director; Danielle Giese, Analyst-in-Charge; Kristy Kennedy; Nisha Hazra; and Juliann Gorse. Also, Shana Wallace provided guidance on the study’s methodology; Craig Winslow provided legal advice; James Bennett assisted with report graphics; and Susannah Compton provided writing assistance. Related GAO products: VA Mental Health: Number of Veterans Receiving Care, Barriers Faced, and Efforts to Increase Access. GAO-12-12. Washington, D.C.: October 14, 2011. VA Education Benefits: Actions Taken, but Outreach and Oversight Could Be Improved. GAO-11-256. Washington, D.C.: February 28, 2011. VA Health Care: Spending for and Provision of Prosthetic Items. GAO-10-935. Washington, D.C.: September 30, 2010. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1, 1999. | The Veterans Benefits Improvement Act of 2008 established VA’s Paralympics Program to promote the lifelong health of disabled veterans and members of the Armed Forces through physical activity and sports. Additionally, the act authorized VA to provide a grant to USOC’s Paralympics Division and allowed USOC to enter into subgrant agreements to provide adaptive sports activities to veterans and service members. The act also mandated GAO to report on the VA Paralympics program.
GAO is required to (1) review how VA and its grantee and subgrantees used program funds to provide adaptive sports opportunities to veterans and service members; (2) assess how VA is overseeing its grantees’ and subgrantees’ use of funds; and (3) describe how veterans and service members have benefited from VA Paralympics activities. To do this, GAO reviewed relevant federal laws, regulations, guidance, agency reports, and a nonprobability sample of 21 of 76 subgrant files, consisting of data on about 56 percent of funds subgranted. GAO also conducted site visits to two states and interviewed veterans as well as agency and grantee officials. The Department of Veterans Affairs (VA) and the U.S. Olympic Committee (USOC) primarily awarded program funds through subgrants to 65 national and community organizations that support adaptive sports opportunities. However, their respective program expenditure reporting was not consistent with federal internal control standards, making it difficult to know fully how program funds were spent. VA’s reporting of first-year program funding was problematic because it did not closely track costs until midway through the fiscal year. During the second fiscal year (2011), VA granted $7.5 million to USOC, which, in turn, awarded $4.4 million to subgrantees and spent the remainder primarily on operations and personnel. Subgrantees reported using funds for activities such as training and camps. GAO found, however, that USOC did not have sufficient reporting requirements in place for subgrantees to provide information on how VA funds were used separately from other sources of funding. VA relied upon self-reported, unverified information to oversee the grant program but is planning to make improvements. In fiscal year 2011, VA did not conduct any on-site or remote monitoring to verify how funds were used. Thus, VA lacked information on how well USOC and subgrantees managed grant funds, potentially exposing itself to paying for services not delivered. In 12 of 21 subgrant files selected, USOC was not holding subgrantees accountable for meeting the terms of their agreements. For example, one subgrantee agreed to conduct 10 activities, but the file indicated only 4 were conducted. VA reported that it has plans to improve its oversight, including conducting on-site monitoring of grantees’ and subgrantees’ use of funds and having USOC verify financial reports for at-risk subgrantees, such as those with large subgrants. While program benefits were reported by subgrantees and participants, VA has not yet systematically measured how adaptive sports activities benefit the health and well-being of veterans and service members. Subgrantees primarily report anecdotal information on program benefits, such as individual success stories. VA collects information on the number of activities and participants from USOC. In 2011, over 10,000 participants were served through nearly 2,000 activities. However, these metrics are flawed due to double counting and other measurement issues. VA officials also recognize that the metrics do not comprehensively measure program benefits. Thus, VA and USOC have hired a contractor to conduct a study on the effects of adaptive sports on rehabilitation and reintegration of veterans and service members into the community. GAO recommends that VA take additional actions to improve grantee and subgrantee reporting of expenditures, activities, and participants, as well as USOC’s monitoring of subgrantees.
In commenting upon a draft of this report, VA agreed with these recommendations and reported that it was taking steps to implement them. |
DOE has about 50 major sites around the country where the department carries out its missions, including developing, maintaining, and securing the nation’s nuclear weapons capability; cleaning up the nuclear and hazardous wastes resulting from more than 50 years of weapons production; and conducting basic energy and scientific research, such as mapping the human genome. This mission work is carried out under the direction of NNSA and DOE’s program offices. With a workforce of 16,000 federal employees and more than 100,000 contractor employees, DOE relies primarily on contractors to manage and operate its facilities and to accomplish its missions. In addition to accomplishing DOE’s core mission work, managing and operating the sites involves a broad range of support activities, such as information technology, safety, security, and purchase of products and services. The Small Business Act, as amended by the Small Business Reauthorization Act of 1997, directed the President to establish the goal that not less than 23 percent of the federal government’s prime contracting dollars would be directed to small businesses each fiscal year. SBA is charged with working with federal agencies to establish agency small business contracting goals that, in the aggregate, meet or exceed the 23 percent government-wide goal. SBA negotiates an annual goal with each agency based on the overall amount of contracting in the agency (contracting base) and the agency’s past achievements. SBA guidelines for setting individual agency goals specify that certain types of federal spending should not be included in the contracting base. These exclusions include items such as grants, purchases from mandatory sources, or contracts for work done internationally for which U.S. small businesses would not be competing. For fiscal year 2003, excluding such items resulted in a DOE contracting base of about $21 billion subject to the small business prime contracting goal. As figure 1 shows, facility management contracts account for more than 80 percent of this amount. DOE’s Small Business Office negotiates annual small business contracting goals with SBA, coordinates outreach efforts with the small business community, and works with NNSA and DOE’s program offices to establish and monitor annual goals for small business contracting. DOE’s Office of Procurement and Assistance Management and NNSA’s Office of Acquisition and Supply Management establish policies and guidance for conducting procurements according to federal and departmental regulations, and maintain the information systems on the department’s prime contracts, including annual dollars provided to each contract. NNSA and DOE’s program offices, such as EM and Science, are responsible for identifying opportunities for small business contracting and providing program oversight and direction to the contractors. Since the 1999 federal policy change, DOE can no longer include subcontracts of its facility management contractors when calculating the department’s small business prime contracting goals. As a result, to achieve even its near-term small business prime contracting goals, DOE will have to direct more prime contracting dollars to small businesses than it ever has in the past. Further, meeting a long-term goal of 23 percent small business prime contracting would represent an achievement far beyond what DOE has ever reached—about 6 times the $847 million that it directed to small businesses in fiscal year 2003. 
Now that DOE’s facility management subcontracts can no longer be counted toward achieving its small business prime contracting goals, achieving its near-term goals for fiscal years 2004 and 2005 will require DOE to expand the amount of prime contracting dollars it provides directly to small businesses. The department has a goal of directing to small business prime contracts 5.06 percent of its contracting base in fiscal year 2004 and 5.50 percent of its contracting base in fiscal year 2005. These goals surpass any of DOE’s small business prime contracting achievements prior to fiscal year 2004. As figure 2 shows, the percentage of prime contracting dollars DOE directed to small businesses in any year since 1996 ranged from 2.68 percent to 3.99 percent. During 1991 through 1999, when DOE could include in its achievements those dollars going to small business subcontractors of facility management contractors, as well as dollars going directly to small business prime contractors, DOE’s reported percentages of prime contracting dollars awarded to small businesses ranged from 15.7 percent to 19.9 percent. However, most of the reported achievements during those years came from facility management subcontracting dollars going to small businesses. The remainder of the reported achievements came from prime contracts to small businesses for work not associated with facility management contracts. Meeting the small business prime contracting goals in fiscal years 2004 and 2005 will require DOE to achieve a substantial increase over the $847 million in prime contracting dollars that DOE provided directly to small businesses in fiscal year 2003. To meet its fiscal year 2004 goal, DOE will need to direct an additional $226 million, or 26.7 percent, over the 2003 amount. Meeting the department’s 2005 goal will require directing $319 million more than in 2003, an increase of 37.7 percent over 2003 levels. Although achieving DOE’s near-term small business prime contracting goals for fiscal years 2004 and 2005 will not be easy, the long-term goal of 23 percent would require an achievement far beyond what DOE has accomplished in the past. SBA expects DOE to achieve a small business prime contracting goal at least on par with the federal goal of 23 percent. DOE’s response has been to formulate a plan for gradual compliance. In 2002, DOE’s Small Business Office submitted a plan to SBA to achieve the 23 percent goal in 20 years, by the year 2022. According to this 20-year plan, DOE would increase its level of small business prime contracting by about 1 percentage point per year to achieve the 23 percent goal by 2022. To achieve this goal, the department would need to increase its small business prime contracting to about $5 billion, or 6 times its 2003 achievement. Put in terms of DOE’s current contracting base, the additional amount of contracting dollars necessary to achieve the 23 percent goal approximately equals the combined annual budgets of the facility management contracts for the two largest laboratories, Los Alamos and Sandia National Laboratories. Meeting the 23 percent goal under DOE’s current contracting approach means that a substantial portion of dollars now included in facility management contracts would have to be redirected to small business prime contracts, resulting in more prime contracts for DOE to manage.
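These figures can be checked with a rough calculation. Assuming a contracting base of about $21.2 billion, similar to the fiscal year 2003 base described earlier (the actual bases negotiated for fiscal years 2004 and 2005 may differ somewhat), the near-term goals imply:

\[
0.0506 \times \$21.2\ \text{billion} \approx \$1{,}073\ \text{million}, \qquad \$1{,}073\ \text{million} - \$847\ \text{million} \approx \$226\ \text{million} \approx 26.7\% \times \$847\ \text{million}
\]

\[
0.0550 \times \$21.2\ \text{billion} \approx \$1{,}166\ \text{million}, \qquad \$1{,}166\ \text{million} - \$847\ \text{million} \approx \$319\ \text{million} \approx 37.7\% \times \$847\ \text{million}
\]

By the same logic, the long-term 23 percent goal corresponds to roughly \(0.23 \times \$21.2\ \text{billion} \approx \$4.9\ \text{billion}\), or close to 6 times the $847 million directed to small businesses in fiscal year 2003.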
Redirecting these dollars would be necessary because prime contracts not associated with facility management generally account for less than 20 percent of DOE’s total prime contract dollars. Therefore, even if all the dollars not associated with facility management contracts were directed to small businesses, the total amount would be insufficient to meet the 23 percent small business prime contracting goal. Although DOE has an agreed-upon organizational strategy to achieve its near-term small business prime contracting goals, a consistent view does not prevail within the department on whether or how to reach the eventual goal of directing 23 percent of prime contracting dollars to small businesses. To achieve the near-term goals of directing 5.06 percent of prime contracting dollars to small businesses in fiscal year 2004 and 5.50 percent in fiscal year 2005, DOE has focused primarily on improving outreach to the small business community, directing more of the dollars not associated with facility management contracts toward small businesses, and beginning to redirect selected facility management contract activities to small business prime contracts. It is less clear, however, how DOE intends to achieve the eventual long-term goal of 23 percent small business prime contracting. DOE’s Small Business Office’s 20-year plan calls for redirecting about 20 percent of facility management contract dollars to small business prime contracts but provides no details as to how NNSA and the program offices, such as EM and Science, would implement the plan. Officials in these offices have differing views as to how much of the work done by their facility management contractors can be redirected to small businesses without jeopardizing critical agency missions. DOE’s plan for achieving its near-term small business prime contracting goals focuses primarily on directing more of the dollars not associated with facility management contracts to small businesses. To increase the percentage of such dollars going to small businesses, DOE has expanded its outreach to the small business community, notifying small businesses of contracting opportunities and preparing them to compete for these contracts. DOE’s Small Business Office has developed a variety of outreach and capacity-building activities designed to assist small businesses in competing for DOE prime contracts. For example, DOE’s Small Business Office fosters mentor-protégé relationships between small businesses and DOE’s large prime contractors to help the small businesses expand their expertise. In addition to these department-wide efforts, offices such as NNSA and EM have also developed outreach activities, generally related to specific prime contract opportunities (see table 1 for examples). In addition to its outreach efforts, DOE has taken steps in two other major areas. First, it has established internal requirements that it believes will help make progress toward achieving its small business prime contracting goals. These internal requirements were part of a 14-item plan of action included in the 20-year plan. The plan of action includes reviews of upcoming contracts to identify work activities that could potentially be awarded to small businesses, and regular monitoring of DOE program-level and agency-wide achievements toward DOE’s annual goals. For example, each year DOE’s Small Business Office requires each program office to develop a small business plan that reflects the program’s goals for increasing prime contracts with small businesses.
These program plans are used to develop DOE’s overall small business contracting goals, and DOE’s Small Business Office tracks progress toward these goals quarterly. Second, DOE has modified some of its procurement processes to eliminate certain barriers for small businesses, such as bonding requirements, and to help small businesses minimize the cost of developing proposals. For example, DOE has limited the amount of documentation that small businesses are required to submit in response to a request for proposals to 50 pages instead of volumes of supporting documentation. To achieve the near-term small business prime contracting goals in fiscal years 2004 and 2005, DOE is concentrating primarily on contracts not associated with facility management, because doing so does not involve significant changes in the way the department does business. For contracts not associated with facility management, as new work is identified or existing contracts come up for renewal, DOE sets them aside for small businesses and awards them as small business prime contracts whenever possible. For example, the information technology support contract for DOE headquarters came up for renewal in January 2002. DOE determined that this contract, which was held by a large business, could be carried out by a small business. The new contract, for a 5-year term with a total value of $409 million, was awarded in January 2003 to a team that included a consortium of 10 small businesses. NNSA and the program offices have also focused primarily on procurements not associated with their facility management contracts. NNSA, EM, and Science officials issued policy letters stressing the importance of directing contracts for activities not associated with facility management to small businesses to the maximum extent possible. For example, for any upcoming contract not associated with facility management, program office personnel must first conduct market research to determine if any small businesses are capable of performing all or parts of the work and have the necessary qualifications to do so. If the program office finds at least two small businesses capable of doing the work, the policy requires the contract or parts of the contract to be “set aside” from unrestricted competition and instead generally be made available for a more restricted competition among small businesses. Any exceptions to this policy must be approved by the head of the program office. Although in the near term DOE is concentrating primarily on contracts not associated with facility management, it has also begun to look at certain facility management contracts as they come up for renewal to identify potential work that could be made available to small businesses. DOE’s Offices of EM and Fossil Energy have identified several specific activities that had been within a facility management contractor’s scope of work and have set those activities aside for small business prime contracts. (See table 2 for examples.) Of the examples shown in table 2, the procurement at the Strategic Petroleum Reserve in Louisiana is the only one that DOE has completed so far. According to DOE officials with the Office of Fossil Energy, when the facility management contract was nearing the end of its term, DOE’s Small Business Office asked the program office to look for opportunities for small business prime contracts.
DOE officials at the Strategic Petroleum Reserve said they identified a number of construction projects that could be performed by small businesses, and awarded several prime contracts to small businesses for this work. DOE officials then decided to remove all the construction management work from the facility management contract for the site so that a new small business prime contractor for construction management could then award and manage subcontracts for individual construction projects. According to DOE’s contracting officer at the Strategic Petroleum Reserve, having the new prime contractor responsible for awarding and managing the contracts will reduce the amount of additional work required by DOE procurement and program personnel. The prime contract was awarded in November 2003. While DOE’s Small Business Office and the three largest offices have a consistent approach to their near-term goals—primarily focusing on increasing small business prime contracting by using dollars not associated with facility management contracts—a consistent view does not prevail in the department on whether or how to achieve the eventual goal of directing 23 percent of prime contracting dollars to small businesses. DOE’s Small Business Office’s plan to achieve the long-term small business prime contracting goals has two main components. The first is to continue increasing the small business share of contract dollars not associated with facility management contracts. For any new contracts not associated with facility management, DOE has a stated preference to set aside those contracts for small businesses where possible. The three largest offices have been consistent in their efforts to do so. However, even this portion of DOE’s contracting base (about 20 percent of total contract dollars) is not immediately available for small business prime contracts. For example, many of the contracts not associated with facility management cover multiple years, so only a portion of these contracts are up for award or renewal in a given year. In addition, some contracts for work not associated with facility management may not be available for award to small businesses, for example, if market research determines that there are not at least two small businesses capable of performing all or parts of the work in an upcoming procurement. Because of the limited amount of contracting dollars for work not associated with facility management, the second component of DOE’s Small Business Office’s long-term plan is to redirect dollars now going to facility management contracts to small business prime contracts. DOE’s 20-year plan calls for increasing dollars redirected from facility management contracts to small business prime contracts from less than 1 percent in 2003 to about 20 percent by 2022 (see figure 3). Nevertheless, DOE does not have a consistent strategy in place to accomplish its plan for redirecting dollars from its facility management contracts to small business prime contracts. Officials in NNSA, EM, and Science have considerably different views about the feasibility of redirecting significant amounts of funding from their facility management contracts to small businesses. For example: Both NNSA and Science officials are very concerned about the implications of setting aside for small businesses significant portions of the dollars now going to facility management contractors that operate the weapons and research laboratories. 
NNSA and Science officials’ concerns stem from the large scale of laboratory operations, the integrated nature of the mission and mission support work, and the complexity and critical importance of the laboratory missions. These officials said that fragmenting mission activities among several contractors at the research laboratories, whether the contractors were large or small businesses, was inadvisable. Therefore, according to NNSA’s Director of Acquisition and Supply Management and Science’s Director of Grants and Contracts, NNSA and Science may never achieve a 23 percent small business prime contracting level because doing so would be inconsistent with accomplishing their missions safely, securely, and effectively. Despite the reluctance to fragment core mission activities, NNSA and Science officials said they would explore opportunities to contract separately with small businesses for mission support functions at the laboratories if those mission support functions were not closely integrated with the laboratories’ core missions. For example, NNSA is analyzing its own purchases of goods and services, such as computer hardware, software, and staffing services, as well as similar purchases by its facility management contractors. NNSA is assessing the feasibility of purchasing these items in bulk under a prime contract, rather than multiple separate contracts. An NNSA official said that NNSA is not trying to increase its small business prime contracting numbers by becoming a purchasing agent for its facility management contractors, but rather is combining similar requirements as a way to possibly increase NNSA’s level of prime contracting to small businesses. On the basis of this analysis, NNSA is pursuing three potential opportunities, valued at about $80 million, involving technical services and temporary staffing services, and is exploring other opportunities. By contrast, EM officials were more optimistic about the potential role of small businesses in accomplishing EM’s core missions. The Assistant Secretary for EM said that part of EM’s initiative to accelerate the cleanup of DOE sites involves greater use of alternatives to traditional facility management contracts, including removing work from facility management contracts and setting that work aside for small businesses. The Assistant Secretary said that these small business procurements are part of EM’s overall strategy to clean up sites more quickly and at a lower cost to the government, not just to increase the amount of small business prime contracting. EM is also developing a complex-wide contracting arrangement, called indefinite delivery/indefinite quantity, which will result in prime contracts with both large and small businesses for smaller-scale cleanup activities. According to EM’s Director of Acquisition Management, the multiple contracts awarded under this initiative will allow EM sites nationwide to quickly purchase cleanup services from small and large businesses without having to conduct a separate procurement, which can take months to complete. Instead, either EM or the facility management contractor will be able to simply write a task order against these existing contracts. Finally, it is unclear to what extent EM can expand its use of small business prime contracts to accomplish its core missions. According to the Assistant Secretary, the main constraint is the ability of EM staff to effectively oversee those contracts, not the availability of qualified small businesses to perform the work.
The Assistant Secretary said that EM is proceeding carefully to ensure that effective management and oversight will occur; that cost, schedule, and technical standards are met; and that safety and security issues are adequately addressed. Since DOE is in the early stages of implementing a long-term strategy to redirect facility management contracting dollars to small businesses, the implications of increased small business prime contracting are still relatively uncertain. However, the implications depend heavily on the extent to which DOE agrees, in its negotiations with SBA, to meet the 23 percent small business prime contracting goal. Given the differences we heard in the approaches of the three largest offices, it is not clear if DOE will commit to the incremental increases that would eventually lead to a 23 percent rate of prime contracting to small businesses, as detailed in the 20-year schedule prepared by DOE’s Small Business Office. Absent more specific direction from Congress or the executive branch, DOE’s eventual commitment to a particular small business prime contracting goal appears to rest heavily on whether the department will be willing to change its approach to contracting for activities at the science and weapons laboratories, its environmental cleanup work, or both. Regardless of the extent to which DOE directs more prime contracting dollars to small businesses, efforts to increase small business prime contracting involve potential benefits as well as potential risks. An overarching benefit of increasing small business prime contracting is that DOE would be helping to carry out the President’s small business agenda and would be contributing to the federal government’s overall goal of directing 23 percent of prime contracting dollars to small businesses. Beyond contributing to this overall effort, DOE’s Small Business Office and procurement officials explained that the benefits include increased competition, greater innovation, and enhanced small business capacity. One example of increased competition can be seen in EM’s program. DOE’s efforts to increase small business contracting have resulted in new procurements with narrower scopes. In the past, EM has been concerned about the limited pool of potential contractors for large cleanup projects, sometimes receiving only two proposals on multibillion-dollar procurements. By structuring the cleanup work into smaller contracts and opening them to individual small businesses or small business teams, EM expects to attract more potential bidders. One of EM’s current procurements is for cleanup work at the Fast Flux Test Facility at the Hanford site in Washington state. This work is currently included in a facility management contract, and EM is in the process of redirecting it as a small business set-aside. EM officials said that in response to the request for proposals for this project, which has an estimated contract amount of $46 million per year for up to 8 years, DOE received proposals from several small business teams. According to EM officials, increased competition from a larger pool of potential contractors could result in better prices for the government. However, since the contracts for the current small business procurements have not yet been awarded, it is too soon to tell whether better prices will be realized. In addition to increased competition, DOE procurement and program office officials believe that small businesses may bring new ideas and innovative approaches to the work.
For example, as part of its accelerated cleanup strategy, EM has been looking for better and faster ways to accomplish cleanup at its sites and facilities. According to EM officials, expanding the pool of potential contractors for cleanup projects may increase the potential for new technology and ideas. Increasing small business prime contracting can also provide small businesses with the experience necessary to compete for other federal prime contracts. According to small business associations and advocacy groups that we contacted, a direct contracting relationship with DOE provides small businesses with more challenging work and better opportunities to grow and expand their businesses. The use of mentor-protégé arrangements or teaming with other small or large businesses also provides opportunities for growth and economic development. For example, an owner of a small construction company in New Mexico told us that his business had successfully teamed with a large construction company for several projects and that his small company was now the senior member of that team and was competing for DOE prime contracts. DOE’s long-term strategy for achieving a 23 percent small business prime contracting goal includes redirecting a substantial amount of facility management contract dollars to small business prime contracts. DOE procurement and program officials acknowledge that doing so would significantly increase the number of prime contracts DOE would have to manage. Increasing DOE’s number of prime contracts, whether these are with small or large businesses, could create problems both with integrating and coordinating the efforts of more contractors at a site and with contract management and oversight. In addition, DOE’s efforts to increase small business prime contracting could inadvertently reduce the amount of small business subcontracting directed to local and regional small businesses. Increasing the number of prime contracts at a site raises concerns about integration, coordination, and accountability. If a facility management contractor has primary responsibility for accomplishing work at the site, that contractor is also accountable for integrating the efforts of multiple subcontractors to ensure that the mission work is accomplished. In addition, the facility management contractor has the responsibility for ensuring that all contractor and subcontractor employees at the site comply with DOE safety and security standards. If the work done by the facility management contractor becomes fragmented and spread among multiple prime contracts, DOE may need to carry out these integration functions, which places more oversight responsibilities on federal program and project management personnel. If the number of prime contractors at a site increases significantly, the challenges associated with integrating and coordinating the activities also increase. Both DOE and facility management contractor officials have expressed concerns about successfully integrating and coordinating the efforts of an increased number of prime contractors at a site. Ensuring that all work is performed in accordance with DOE safety and security standards is a significant concern, especially given the continuing challenges that the department faces in these two areas.
To begin to address the constraint of having a limited number of federal employees to perform coordination and integration functions, DOE is considering awarding small business prime contracts but then having the facility management contractors at the sites manage and oversee the work. As some facility management contracts are extended or awarded, DOE includes a provision that specifically allows the department to identify and redirect work within the facility management contract to a small business prime contract. The provision also allows DOE to request the facility management contractor to manage and oversee the work. Since the work that DOE would redirect is generally already being done by a facility management subcontractor, the only actual change is the contractual relationship. In fiscal year 2003, NNSA started using this arrangement for facilities and infrastructure restoration projects at the Sandia National Laboratory in New Mexico. NNSA awarded prime contracts to small businesses for some of these projects, totaling $100,000 in fiscal year 2003 and an estimated $3 million in fiscal year 2004. Although it is too soon to fully assess the implications of this arrangement, facility management contractor officials at the Sandia laboratory have expressed concern that it could confuse the lines of authority and accountability at the site, because the contractual relationship is not consistent with the daily management and oversight of the activities being performed. In prior work, we have also expressed concerns about confusing the lines of authority, which can make it difficult to hold contractors accountable for performance. Regarding contract management and oversight, increasing the number of prime contracts with DOE could place further strain on DOE’s procurement and program oversight personnel. DOE’s reliance on contractors to operate its facilities and carry out its missions, coupled with the department’s history of inadequate contractor management and oversight, led us in 1990 to designate DOE contract management as a high-risk area vulnerable to fraud, waste, abuse, and mismanagement. This high-risk designation is still in effect. GAO and others have stated that one of the contributing factors to DOE’s inadequate oversight of its contractors has been a shortage of personnel with the right skills to perform these functions. Although DOE has over the past several years made progress in training and certifying its procurement and project management personnel, DOE procurement and program officials said that the overall number of available personnel has not grown and has significantly decreased in NNSA. More prime contracts would create additional work for federal employees in two phases: managing the procurement process by requesting and evaluating proposals to award a contract, and overseeing the work of the contractor to ensure that performance is acceptable. DOE officials at headquarters and at the sites we visited expressed concerns that significantly increasing the number of prime contracts could reduce the ability to adequately oversee and evaluate contractor performance. While headquarters and site office officials in the EM program acknowledge the potential risks that additional prime contracts can create, both in integrating work activities at a site and in contract management and oversight, they are pursuing ways to mitigate those risks.
To address concerns about sitewide integration of safety and security, DOE officials at Hanford plan to use contract language and incentives to encourage the site’s new small business prime contractors and the facility management contractors to work together. To earn potential incentive fees under this proposed arrangement, for example, all prime contractors will have to cooperate in such areas as safety and security. However, since these are new approaches and the small business prime contracts have yet to be awarded, the extent to which these steps will mitigate the potential risks is unknown. To lessen the impact of additional prime contracts on procurement and program personnel, EM officials said they intend to use a contract for small business procurements that has a well-defined statement of work and that ties incentive fees to accomplishing the contract’s stated final goal rather than to interim steps. According to EM’s Director of Acquisition Management, administering such contracts generally may require less federal involvement, although EM will also have to train its staff on the most effective way to manage these contracts. In addition to the potential risks discussed above, DOE and contractor officials, as well as representatives of small business advocacy groups, raised concerns about DOE’s efforts to increase small business prime contracting. One concern expressed was that such efforts could inadvertently result in fewer total contracting dollars being directed to the small business community. Procurement regulations require that all facility management contractors have a small business subcontracting plan, and facility management contractors must generally negotiate annual small business subcontracting goals with the department. However, if work is removed from a facility management contract, the facility management contractor may negotiate lower subcontracting goals with the department and then subcontract less of the remaining work to small businesses. Since the effort to redirect facility management contract dollars to small businesses is in its early stages, no data are yet available to validate this concern. A related concern is that if DOE removes work from a facility management contract and sets that work aside for a small business procurement, there may be fewer contracting dollars available to local and regional small businesses. This could occur because DOE’s facility management contractors generally are not required to follow federal regulations in their procurements, but instead comply with “best business practices.” In doing so, a facility management contractor can restrict a competition for its subcontracts to the local small business community. In contrast, DOE must generally open up its procurements to nationwide competition, which may result in fewer contracts going to local and regional small businesses. Again, no data are yet available to validate this concern. Finally, representatives of some small business advocacy groups told us that some small businesses would rather have a subcontract with a facility management contractor than a prime contract with DOE. This is because facility management contractors generally have fewer administrative requirements and a less burdensome and faster procurement process. It is not clear to what extent these potential risks will affect DOE’s ability to carry out its missions in a safe, secure, and effective manner.
The impact on DOE’s missions of increasing small business prime contracts will depend both on the total number of new prime contracts awarded and on how well the department manages the contractors and the work. The stakes are high as DOE attempts to contribute to the federal government’s goal of increasing the prime contracting dollars directed to the small business community, while striving to accomplish its missions efficiently and effectively. This concludes my testimony. I would be pleased to respond to any questions that you may have. For further information on this testimony, please contact Ms. Robin Nazzaro at (202) 512-3841. Individuals making key contributions to this testimony included Carole Blackwell, Ellen W. Chu, Matt Coco, Doreen Feldman, Jeff Rueckhaus, Stan Stenersen, and Bill Swick. [An appendix table, not reproduced here, presents DOE’s contracting base and contract dollars by fiscal year, including small business prime contracts, small business subcontracts awarded by facility management contractors and by all other prime contractors, and small and large business subcontracts awarded by prime contractors.] DOE’s contracting base includes dollars that can potentially be directed to U.S. small businesses, excluding, under Small Business Administration (SBA) guidelines, dollars that cannot go to small business prime contracts, such as grants and purchases from mandatory or foreign sources. We calculated the percentage of DOE’s contract dollars going to small business prime contracts by dividing small business prime contract dollars (row 5) by the contracting base (row 1). For fiscal years 1991 through 1999, DOE’s annual small business prime contracting achievements, as reported to SBA, included DOE subcontracts awarded to small businesses by its facility management contractors, as well as prime contracts awarded directly to small businesses. To calculate small business prime contracting achievements for these 9 years, we therefore added rows 5 and 7 and divided the sum by row 1. We did not do this calculation for fiscal years 1990 and 2000 through 2003 because small business subcontracts from facility management contractors did not “count” in those years toward small business achievement percentages. We calculated the overall percentage of DOE’s contract dollars going to small businesses—via both prime contracts and subcontracts—by dividing DOE’s contract dollars to small businesses (row 4) by the contracting base (row 1). We calculated the percentage of total subcontracting dollars going to small business by dividing small business subcontract dollars from prime contractors (row 6) by total subcontract dollars going to small and large businesses (row 9). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Under the Small Business Reauthorization Act of 1997, the federal government has a goal of awarding at least 23 percent of prime, or direct, contracting dollars to small businesses each fiscal year. The Department of Energy (DOE), like other federal agencies, shares in the responsibility for meeting this goal.
In fiscal year 2003, DOE spent $21.6 billion on prime contracts. More than 80 percent of this amount was spent on facility management contracts to manage and operate DOE's sites. Before 1999, DOE included subcontracts awarded by its facility management contractors when calculating its small business prime contracting achievements. In 1999, however, the Office of Federal Procurement Policy determined that DOE could no longer do so. This testimony discusses (1) the effect of the 1999 policy change on the amount of prime contract dollars that DOE will be required to direct to small businesses, (2) the steps that DOE has taken or plans to take to achieve its small business contracting goals, and (3) the likely implications for DOE's programs resulting from these changes. To meet its share of federal goals, DOE would need to direct significantly more prime contracting dollars to small businesses. If it is to reach its near-term goals of 5.06 percent in fiscal year 2004 and 5.50 percent in fiscal year 2005, DOE must direct to small businesses an additional $226 million and $319 million, respectively, over the $847 million it directed to small businesses in fiscal year 2003. Achieving a long-term goal of directing 23 percent of prime contracting dollars to small businesses would require DOE to contract with small businesses at about 6 times its current rate. Such an increase is about equal to the combined annual budgets for Los Alamos and Sandia, the two largest national laboratories. To address its near-term small business prime contracting goals, DOE has improved its outreach efforts and has redirected to small businesses some contract dollars not associated with facility management contracts. DOE has also begun to review facility management contracts up for renewal to identify work that could be redirected to small business prime contracts. Achieving a long-term goal of 23 percent is much more problematic. Notably, DOE's three largest offices, the National Nuclear Security Administration (NNSA), Environmental Management (EM), and Science, have differing views as to the extent to which facility management contract work can be redirected to small businesses without having a negative impact on accomplishing their missions. EM is in favor of doing so if redirecting the work is consistent with its accelerated cleanup strategy. NNSA and Science officials express concern that redirecting work now done by facility management contractors could jeopardize critical research missions at the laboratories. DOE's efforts to increase small business prime contracting involve both potential benefits and risks, which depend on the eventual goal DOE attempts to achieve. The potential benefits to DOE of increased small business prime contracting include increasing the pool of potential contractors, which could result in better competition and better prices for the government; finding new and innovative approaches to the work developed by small businesses; and providing experiences to small businesses to allow them to better compete for other federal contracts. The potential risks include integrating and coordinating the work of a greater number of contractors at a site in a safe, secure, and effective manner, and having adequate federal resources for effective contract management and oversight, areas that already pose significant challenges for DOE.
In addition, DOE's efforts to increase small business prime contracting may cause its facility management contractors to reduce the amount of subcontracting that they direct to local and regional small businesses. DOE largely agreed with the information in this testimony. However, it disagreed with GAO's characterization of DOE's long-term small business prime contracting goal and its strategy to achieve it. GAO believes that both the long-term goal and DOE's strategy have been accurately described. |
The Defense Information Systems Agency (DISA) is responsible for managing the Defense Information Services business area. This business area provides a wide range of services relating to computer center operations and voice, data, and video telecommunications. For fiscal year 1998, DISA estimated that the business area would have revenues of about $2.7 billion. Business area operations are financed as part of the Defense-wide Working Capital Fund, which was formerly called the Defense Business Operations Fund (DBOF). In December 1996, the Under Secretary of Defense (Comptroller) dissolved DBOF and created four working capital funds (WCF) to clearly establish the military services’ and DOD components’ responsibilities for managing the functional and financial aspects of their respective business areas. As currently specified in the Department of Defense’s (DOD) Financial Management Regulation, Volume 11B, Reimbursable Operations, Policy and Procedures-Defense Business Operations Fund, the funds are to charge customers the full costs of providing goods and services. The primary goal of the working capital fund is to focus management’s attention on the full costs of carrying out certain critical DOD business operations and the management of those costs. Unlike a private sector enterprise, which has a profit motive, the working capital funds are to operate on a break-even basis over time by recovering the full costs incurred in conducting business operations. Accomplishing this requires DOD managers to become more conscious of operating costs and to make fundamental improvements in how DOD conducts business. It is critical for the working capital funds to operate efficiently since every dollar spent inefficiently results in fewer resources available for other defense spending priorities. The Defense Information Services business area consists of two components—the Defense megacenters (DMC) and the Communications Information Services Activity (CISA). The DMCs’ primary mission is to provide computer processing services to DOD and other federal government agencies. The primary mission of CISA is to provide telecommunications services to DOD and non-Defense customers. These two entities differ markedly in mission, as highlighted in the following sections. Mainframe processing comprises the core of the DMC services. DISA refers to them collectively as A-Goal services, and they include data processing on IBM and UNISYS mainframe computers, data transfers between computers, and data storage. DMCs provide a variety of other services to their customers, referred to collectively as C-Goal services, which include mainframe processing on computers made by other manufacturers (such as Burroughs), telecommunications, and database management. Table 1.1 summarizes the DMCs’ reported revenues, cost of operations, and net operating results for fiscal years 1995 through 1997. Currently, there are 16 megacenters located throughout the United States. DISA has designated DISA Western Hemisphere as the responsible entity for managing the DMCs. As part of DOD’s ongoing efforts to reduce infrastructure costs, DISA has efforts underway to further reduce the number of megacenters. Over the next 2 years, DISA plans to complete the consolidation of its mainframe processing centers from 16 to 6 and, at the same time, introduce Regional Information Services. This is a continuation of DOD efforts to consolidate its computer center operations.
Between fiscal years 1990 and 1996, DOD consolidated workload and equipment from 194 computer centers to 16 DISA DMCs. While the remaining DMCs will provide mainframe processing services, the Regional Information Services will concentrate on nonmainframe services, such as local area network support and personal computer operations and maintenance. According to DISA’s Defense Megacenter Business Strategy, dated October 1997, DOD estimates that the planned consolidation will result in savings over a 10-year period (fiscal years 1998 through 2007) of approximately $1.5 billion. Of the $1.5 billion, approximately $1 billion will accrue after fiscal year 2002. CISA is responsible for acquiring services that connect base-level and deployed telecommunications networks within and between the continental United States, Europe, Pacific, and the Caribbean. These services are provided within the United States primarily through leased telecommunications lines and overseas by a mixture of government-owned and leased lines. Table 1.2 provides information on CISA’s reported revenues, cost of operations, and net operating results for fiscal years 1995 through 1997. CISA can provide its customers—DOD and other federal entities, such as the Federal Aviation Administration—all forms of secure and nonsecure voice, data, video, and bulk transmission telecommunications. If CISA is unable to provide the requested services directly, it will contract, on behalf of the requesting activity, with the commercial sector or another federal entity to provide the services. For example, some voice services are provided by the General Services Administration under its FTS-2000 contract for services not available through CISA. The objectives of our review were to (1) evaluate DISA’s processes for establishing the prices DMCs and CISA charge for the services provided to customers of the Defense Information Services business area, (2) ascertain if DISA is being reimbursed for all services provided, and (3) ascertain the accuracy of DISA’s financial management information. To evaluate the price-setting process for the DMCs and CISA, we reviewed the policies and procedures DOD established for setting prices. We identified the cost elements included in the prices and determined whether these elements are in conformance with the guidance set forth in DOD’s Financial Management Regulation, Volume 11B, Reimbursable Operations, Policy and Procedures-Defense Business Operations Fund. We also collected and analyzed workload data obtained from DISA-WESTHEM. We discussed the reliability of this data on DMC operations with DISA-WESTHEM and determined the difference between the projected and actual workload for IBM and UNISYS mainframe services for fiscal year 1997. In addition, we obtained and reviewed a study performed by a private contractor related to CISA’s pricing of services. To determine if the information service business area is being reimbursed for all services provided, we collected, reviewed, and analyzed selected financial information related to collections, disbursements, and accounts receivable. We determined whether DISA pursued collection of accounts receivable in accordance with the guidance set forth in DOD’s Financial Management Regulation. We also contacted DISA customers to discuss amounts they owed DISA for services provided. 
In addition, through our discussion with DISA personnel, review of the financial reports, and review of relevant federal accounting standards, we determined whether amounts owed DISA were being properly recorded. To evaluate the accuracy of DISA’s financial management information, we obtained and analyzed (1) the Defense Working Capital Fund Accounting Reports and (2) DISA’s Chief Financial Officer Annual Financial Statement for FY 1996. We also reviewed the DOD IG’s audit report on the business area financial statements for fiscal year 1997 to identify any problems it found with the business area’s financial information. We also reviewed DOD’s fiscal year 1997 Federal Managers’ Financial Integrity Act (FMFIA) report and the Defense Finance and Accounting Service (DFAS) Chief Financial Officer’s Financial Management Status Report and Five Year Plan 1997-2001 to identify any accounting and reporting weaknesses related to DISA. The quantitative financial information used in this report was produced by DOD’s systems, which have long been reported to generate unreliable data. We did not independently verify the data. The DOD IG was unable to render an opinion on DISA’s financial statements for fiscal year 1997. We performed our work at the Office of the Under Secretary of Defense (Comptroller), Washington, D.C.; DISA Headquarters, Arlington, Virginia; DFAS Headquarters, Crystal City, Virginia; DISA-WESTHEM, Denver, Colorado; Defense Logistics Agency, Ft. Belvoir, Virginia; Federal Aviation Administration, Washington, D.C.; and the Department of State, Washington, D.C. Our work was performed from August 1997 through August 1998, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense. The Office of the Secretary of Defense provided written comments on a draft of this report that are discussed in chapters 2, 3, 4, and 5 and are reprinted in appendix I. DOD also provided technical comments on the draft report, which we have incorporated where appropriate but have not reprinted. Chapter 2 of this report discusses pricing issues related to the Defense megacenters. Chapter 3 discusses issues primarily related to pricing telecommunications services offered by CISA. Chapter 4 discusses DISA’s ability to be reimbursed in a timely manner for services provided and the nonreimbursement for services provided to customers. Chapter 5 discusses the accuracy and reliability of DISA’s financial management information. One of the goals of the working capital fund is to break even over time. To achieve this, prices are supposed to include all direct and indirect costs incurred in providing services to the customers. To ensure that customers have sufficient funds to pay for the requested services, prices are to be established before the start of the fiscal year and remain in effect for the entire year. In order to set prices that will enable the business area to operate on a break-even basis, it is extremely important that the business area accurately estimate the work it will perform and the cost of performing that work. This task is made more difficult because the process that business areas use to develop prices begins up to 2 years before the prices go into effect. 
In developing prices for mainframe processing services, each DMC collects cost data on direct labor, depreciation, contracts, software, and the indirect costs incurred by the DMC and headquarters (such as base support costs and centralized contract administration) to arrive at the activity’s estimated cost of doing business. The workload data are derived through discussions with customers and utilization data collected by DISA. Once the cost and workload data are accumulated, the individual DMC price is determined by allocating the estimated total cost over the estimated workload to arrive at a cost per hour. Currently, the DMCs use a uniform price structure, which results in all customers being charged the same price regardless of where the work is performed. Our review disclosed that the cost of doing business varied considerably from DMC to DMC. As DISA proceeds with its consolidation effort, analyzing the cost differences between the DMCs should enable managers to seek ways to become more efficient and effective, thereby reducing the cost of operations and lowering prices charged to the customers. We also found that the DMCs had difficulty developing accurate workload estimates. For example, at the Columbus DMC, the actual reported workload was about 74 percent more than the projected workload, while at the Warner Robins DMC, the actual reported workload was approximately 81 percent of the projected workload. While DISA has put into place mechanisms to better identify the current workload, additional efforts are needed to ensure that DISA receives accurate estimates on new workload requirements. DOD has recognized that its computer centers have been operating inefficiently and that they need to adopt new technologies in order to continue supporting DOD’s large and complex information infrastructure. The planned DMC consolidation is aimed at reducing DOD’s infrastructure costs, thereby lowering the price charged to customers for IBM and UNISYS mainframe services. House Appropriations Committee Report 104-208 directed the Under Secretary of Defense (Comptroller) to determine the feasibility of outsourcing DOD’s megacenters. A cost analysis was completed in February 1996 that detailed the overall cost of operating the DMCs. Although the analysis was used as a factor in evaluating which DMCs would continue to provide mainframe services, it did not identify the specific costs of operating each DMC. Our analysis of the reported cost of doing business disclosed that the cost varied considerably from DMC to DMC for both IBM and UNISYS work. The IBM costs, for example, ranged from a low of $40 per hour at the Ogden DMC to a high of $275 per hour at the San Diego DMC. Table 2.1 shows the reported fiscal year 1998 cost per central processing unit (CPU) hour for IBM and UNISYS platforms at individual DMCs. The primary goal of the WCF financial structure is to focus the attention of all levels of management on the full costs of carrying out certain critical DOD business operations and the management of those costs. Analysis of the reported cost differences would be in accordance with this goal. However, DISA personnel stated that a formal analysis has not been conducted to determine the causes of DMC cost differences. This analysis is especially critical for the six centers that are supposed to remain after the consolidation effort. These centers report wide variances in the cost per CPU hour. 
For instance, at the Ogden DMC, the IBM cost per CPU hour is approximately $40; at the Oklahoma City DMC, it is $126, over three times higher. By taking time now to analyze the cost differences at these and the other four remaining facilities, DISA managers can assess the causes of such variances and thereby identify inefficient operations and make fundamental improvements in how the centers conduct business before consolidation efforts are completed. Projecting workload accurately is a key element in setting prices that will help a business area to break even over time. Too high a workload estimate could result in the business area operating at a loss. Conversely, if the workload estimate is too low, the business area could realize a profit. Although DISA has initiated efforts over the past several years to develop accurate workload estimates, it continues to struggle. For example, at the Columbus DMC, the reported actual workload was about 74 percent more than the projected workload, while at the Warner Robins DMC, the actual reported workload was approximately 19 percent less than the projected workload. The establishment of accurate workload estimates was one of the issues discussed in DOD’s September 1997 plan to improve the operations of the WCFs. The improvement plan notes that synchronizing customer funding and workload estimates is critical to ensure that WCF prices are based on realistic workload estimates and customer purchases are adequately funded. The plan noted that this does not always occur. In fiscal year 1994, DISA began identifying system utilization and developing projections of future customer workload requirements based on information provided by DMCs. However, according to DISA, this information has frequently been misclassified because clear definitions for customer identification codes—which identify the workload below the major command level—are lacking. In order to improve the reliability of customer projections, DISA validated customer identification codes for DMC customers in fiscal year 1997. DISA also took steps to capture the utilization data by installing measurement systems on IBM, UNISYS, and other hardware platforms. Data from these systems are fed to the MVS Information Control System (MICS), which now serves as DISA’s workload reporting and invoicing system. DISA has also gathered information from DMC staff on conditions that could have an impact on future requirements, such as missing data, changes in customer codes, and differences between historical and future volumes caused by workload migrations. However, despite these efforts, the inability to reasonably estimate the volume of services was the primary reason the mainframe services—IBM and UNISYS—had a reported net profit of approximately $90 million for fiscal year 1997. This profit was approximately 13 percent of the DMCs’ reported revenue of about $682 million. Our analysis of DISA’s workload execution reports showed that 7 of the 15 DMCs providing IBM mainframe services overestimated their CPU hour usage during fiscal year 1997, while 8 underestimated usage. For example, at the Columbus DMC, the actual usage was about 74 percent more than the projected usage. Further, our analysis of UNISYS workload reports showed that actual processing at 5 of the 8 DMCs providing UNISYS mainframe services was 57 percent to 70 percent of projected amounts. At one of the DMCs, however, the actual reported workload was almost 66 percent more than projected. 
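The arithmetic behind this relationship can be made concrete with a brief sketch. The figures below are hypothetical and are not drawn from DISA’s records; they are used only to illustrate the mechanics described above—how a stabilized price is derived from estimated costs and workload, and how actual usage that exceeds the projection produces a profit at a price that was fixed before the year began.

```python
# Minimal sketch (hypothetical figures) of how a stabilized hourly price is built
# from cost and workload estimates, and how a workload misestimate becomes a
# profit or loss at year end.

# Estimated annual cost and projected CPU hours for three notional megacenters.
estimates = {
    "DMC A": {"cost": 40_000_000, "hours": 400_000},   # $100 per hour locally
    "DMC B": {"cost": 30_000_000, "hours": 150_000},   # $200 per hour locally
    "DMC C": {"cost": 20_000_000, "hours": 250_000},   # $80 per hour locally
}

# Uniform price: total estimated cost spread over total estimated hours, so every
# customer pays the same rate regardless of where the work is performed.
total_cost = sum(d["cost"] for d in estimates.values())
total_hours = sum(d["hours"] for d in estimates.values())
uniform_price = total_cost / total_hours
print(f"Uniform price: ${uniform_price:,.2f} per CPU hour")

# If actual usage runs well above the projection, revenue at the fixed price
# exceeds cost and the business area posts a profit instead of breaking even.
actual_hours = total_hours * 1.15   # 15 percent more work than projected
actual_cost = total_cost * 1.03     # costs grow far less than the workload
revenue = uniform_price * actual_hours
net_operating_result = revenue - actual_cost
print(f"Net operating result: ${net_operating_result:,.0f}")
```

Because the uniform price is held fixed for the entire fiscal year, any systematic understatement of projected workload flows directly into a year-end profit, which is consistent with the fiscal year 1997 results reported for the DMCs.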
Tables 2.2 and 2.3 show the projected and actual amounts of processing for IBM and UNISYS systems in fiscal year 1997. In discussing the workload fluctuations with DISA-WESTHEM personnel, we were informed that although they are responsible for estimating future requirements for DMC services, the accuracy of these estimates depends heavily on information provided by DISA customers. According to DISA-WESTHEM personnel, systems have been installed which enable DISA to determine the amount of services actually provided to DMC customers. However, DISA-WESTHEM cannot easily identify all of the factors that could cause a change in future customer needs. For example, the current DOD initiatives to standardize systems have led to widespread migration of workloads from numerous older systems to new systems. Central Design Activities are responsible for maintaining existing systems and for working with customers on the development of replacement systems. Decisions concerning the types of data to be maintained by the new systems and specific program operations affect the types and amounts of DMC services required, such as data storage and the number of input/output operations that will occur during program execution. In addition, the actual pace of progress made in developing, testing, installing, and implementing the new systems affects the volume of processing that will continue to be done on the legacy systems. For example, the Chief of the Resource Management Branch in the Denver DMC Business Management Division stated that the CPU hours for the Defense Civilian Pay System were 46 percent higher than projected in fiscal year 1997 because of an increase in accounts migrated from legacy systems requiring mainframe processing services. DISA-WESTHEM officials also confirmed that this issue is continuing to hinder their ability to develop accurate workload estimates. For example, DISA-WESTHEM was recently notified that the planned migration of the Base Level Personnel System workload from DISA to Randolph Air Force Base on September 30, 1998, has been delayed until fiscal year 2000. As a result, DISA will be providing about $7 million in services during fiscal year 1999 that were not included in its customer projections or factored into DISA’s prices for the fiscal year. DISA-WESTHEM officials further noted that the workload for the Defense Transportation Reporting System is now four times the fiscal year 1997 projected level. DISA only received 30 days notice of the increased workload. Because of the long lead time to develop prices, which is tied to preparing the budget, this additional workload will not be reflected in the fiscal year 1999 prices. All these factors impact the DMCs’ ability to accurately estimate their workload. Without sound workload estimates, the credibility of the prices being charged is questionable. In response to the National Defense Authorization Act for Fiscal Year 1997, DOD developed a plan to improve the operation of the WCFs. One of the issues discussed in the plan was the importance of accurate workload estimates and the potential effect of inaccurate estimates on the results of operations. The plan points out that revolving fund activity workload and customer funding should be synchronized. This synchronization is critical to ensure that prices are based on realistic workload estimates and expected purchases are adequately funded. The plan points out that this does not always occur. 
In the case of the DMCs, the higher than anticipated workload in fiscal year 1997 was a primary reason the IBM and UNISYS mainframe services reported a net profit of $90 million in fiscal year 1997. Within DOD, the Office of the Comptroller is the one entity that should have, or be able to obtain, information on the workload estimates contained in customers’ budget requests and the revolving fund activity estimates of workload to be performed for customers. As part of its program budget review process, in which the prices are finalized, the Comptroller’s office could use the information to review and resolve workload differences between DISA and its customers. A more accurate workload estimate should help reduce the problem of customers not being able to pay for all services provided, which is discussed in further detail in chapter 4. DOD has recognized the need to continue reducing the cost of its computer centers’ operations through consolidations. Over the next 2 years, DISA plans to complete the consolidation of its mainframe processing centers from 16 to 6 locations. In planning the consolidation effort, DISA identified the cost of operating the DMCs and used these data in the decision-making process. However, by not analyzing significant differences between the reported cost of operations at the DMCs that will remain after the consolidation is completed, DOD is forgoing an opportunity to further enhance the efficiency of DMC operations and make fundamental improvements in the services provided. Further, until DOD improves its workload projections, it will continue to experience difficulty in setting accurate prices and, in turn, ensuring that the DMCs do not incur excessive profits or losses. We recommend that the Director of DISA (1) analyze the cost differences in the estimated cost per CPU hour at the DMCs as part of the consolidation effort and identify improvements needed in how they conduct business and (2) compare forecasted workload estimates to actual work received and consider these trends in developing the workload estimates and prices to charge customers for the services provided. We also recommend that, as part of the price-setting process, the Under Secretary of Defense (Comptroller) ensure that the workload estimates in DISA and customer budgets agree. DOD did not agree with our recommendations that DISA (1) analyze the cost differences at the DMCs and identify improvements needed in how they conduct business and (2) compare forecasted workload estimates to actual work received and consider these trends in developing the workload estimates and prices to charge customers for the services provided. DOD agreed with our recommendation that the Under Secretary of Defense (Comptroller), as part of the price-setting process, ensure that the workload estimates in DISA and customer budgets agree. In its response, DOD stated that analyses of cost differences have been made and provided to DISA management and that, along with other analyses, they were used to plan the ongoing consolidation of the DMCs. Our report fully recognizes the efforts DISA put forth in planning the consolidation and the importance the February 1996 cost analysis played in the decision-making process. However, our analysis disclosed that there were considerable differences in the reported cost of doing business between the DMCs. Our conclusion that opportunities for savings exist is based in part on the magnitude of the DMC operating cost differences that now exist—the reported cost per CPU hour ranges from $40.35 to $274.55. 
Further, in an August 1998 meeting, the Acting Deputy Comptroller and the Resource Manager, Operations Directorate, stated that DISA has not formally analyzed the reasons for the cost differences between the DMCs. Furthermore, a DISA official acknowledged that DISA had not studied the differences in costs between DMCs providing the same or similar service. This evidence clearly indicates to us that despite earlier analyses, a more rigorous study of costs is warranted to determine and correct the underlying causes of these differences. DOD also disagreed with our characterization of the need for improvements in the estimation of the workload performed by the DMCs. DOD noted that given the changes brought on by the consolidation effort, it is difficult to develop accurate workload estimates. DOD also noted that, as discussed in the report, the process of developing workload estimates starts 2 years before the prices go into effect. DOD further stated that regardless of the quality of the estimated workload, the customers will inevitably change workload requirements to meet their current situation. The report recognizes the efforts that DISA has undertaken to improve the accuracy and reliability of its workload estimates, the obstacles it faces in doing so, and the difficulty involved in precisely estimating the workload to be performed. Indeed, there will always be some variance between the estimated workload and the actual work performed by the DMCs. However, the extent of the reported workload variance for IBM CPU hours (from about 74 percent more than the projected workload at the Columbus DMC to about 19 percent less than the projected workload at the Warner Robins DMC) is much greater than would normally be expected. The DMCs posted a $90 million net profit for mainframe services in fiscal year 1997 primarily because workload volumes were substantially higher than anticipated. Since the volume of workload is one factor in determining the CPU hourly price, the accuracy and reliability of the workload estimate is critical in establishing an hourly CPU price. Conceptually, the larger the volume of work to be performed, the lower the price, because the cost of operations can be spread across more CPU hours. Therefore, for fiscal year 1997, a more accurate workload estimate for the DMCs would have resulted in a lower CPU hourly price for IBM mainframe services and a higher hourly price for UNISYS mainframe services. A lower hourly IBM mainframe price may have afforded some customers, such as DFAS, the opportunity to pay for more of the services DISA provided. Further, as discussed in the report, DISA personnel developing mainframe projections stated that they rely heavily on data gathered on past workload levels and that these data do not necessarily reflect future requirements. They emphasized that outside customers, such as the Central Design Activities, should have the most immediate knowledge of application systems run at the DMCs and the times when workloads will be moved from one platform to another. In addition to the DMC pricing concerns discussed in chapter 2, our analysis of the Defense Information Services business area disclosed that DISA was not recovering the full costs incurred in providing telecommunications services. Recovering the full cost of operations is one of the basic underpinnings of the working capital fund (WCF). Not including the full cost of operations in developing prices understates the prices charged customers for the services provided. 
Our review disclosed that DISA did not include in its prices, as required by DOD’s Financial Management Regulation, approximately $77 million related to transitioning independent networks to the Defense Information Systems Network (DISN). Further, we found approximately $60 million of costs that were not incorporated in DISA’s computation of its fiscal year 1998 telecommunications prices. In addition, while reviewing DISA’s fiscal year 1998 telecommunications prices, we identified at least $231 million in appropriated funds that supported WCF activities. However, based upon our review of DISA’s fiscal year 1998 budgetary request, the rationale for financing WCF-related costs through the use of appropriations was not clear. Normally, costs are recovered through customer billings, and the WCF generally does not receive appropriations to finance day-to-day operations. Since the Defense Information Services business area prices do not take into consideration the use of appropriated funds, this practice further understates the prices charged for services offered by DISA. Furthermore, a recent pricing study performed by a private contractor concluded that while overall workload estimates were accurate for existing services, some of DISA’s telecommunications costs were supported by appropriations and thus were excluded from its telecommunications prices. DOD policy requires business activities to identify the direct and indirect costs of doing business and to incorporate these costs into their prices. To ensure that WCF customers have enough funds to pay for the services they need to sustain their readiness, DOD policy requires that prices be established before the start of the fiscal year and remain in effect for the entire year. The process for establishing telecommunications prices generally is finalized about 10 months before the prices go into effect, with CISA developing workload projections for each service offered for the budget year based on customer input. In establishing CISA prices, cost data are collected related to (1) network operations, (2) network management, (3) provisioning, (4) systems development, (5) network transition, (6) equipment, and (7) prior year profits or losses. The prices charged customers vary based on service offering, data versus voice usage, calling area, precedence capability, bandwidth, and usage. In instances where usage data are not available, DISA may allocate the cost based on the average cost of using the service or attempt to establish prices that are competitive with the commercial sector or other federal entities. In addition, DISA adds a surcharge to each customer’s bill to recover general and administrative overhead expenses. Although DISA went through this process, our analysis disclosed that CISA’s telecommunications prices did not always include the total costs. For example, $77 million for transitioning telecommunications networks was not recognized in the prices charged customers. Further, for fiscal year 1998, we found that $49 million for a prior year loss and $11 million of overhead were not included. 
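The effect of these exclusions on a stabilized price can be roughed out with a short sketch. The excluded amounts below are those identified in our review; the cost base and billing volume are hypothetical and are included only to illustrate the arithmetic, not to reconstruct CISA’s actual price computation.

```python
# Illustrative sketch of how excluding cost elements understates a stabilized price.
# The excluded amounts are those identified in the review; the cost base and
# billing volume below are hypothetical.

base_costs = 2_000_000_000          # hypothetical recoverable telecommunications costs
excluded_costs = {
    "network transition costs": 77_000_000,
    "prior-year operating loss": 49_000_000,
    "unrecovered overhead": 11_000_000,
}
billing_units = 1_000_000           # hypothetical units of service billed to customers

price_as_set = base_costs / billing_units
price_full_cost = (base_costs + sum(excluded_costs.values())) / billing_units
understatement = (price_full_cost - price_as_set) / price_full_cost

print(f"Price as set:    ${price_as_set:,.2f} per unit")
print(f"Full-cost price: ${price_full_cost:,.2f} per unit")
print(f"Understatement:  {understatement:.1%}")
```

Because revenue is collected at the understated price, the shortfall either appears as an operating loss to be recovered in later years’ prices or is absorbed by appropriated funds, both of which work against the break-even objective.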
DOD’s strategy focuses on replacing its older data communications systems using emerging technologies and cost-effective strategies that provide secure and interoperable voice, data, video, and imagery communications services in support of military operations. DISN is a subset of the Defense Information Infrastructure (DII), which is a combination of communication networks, computers, software, databases, and other services. As stated in the WCF FY 1999 Amended Budget Estimates, dated February 1998, DISA is responsible for the pricing of DISN through the CISA business activity. According to DISA’s WCF charter, the responsibilities of the WCF expanded with the formulation of DII in order to create a seamless, transparent, and protected end-to-end information transfer capability. Although DISA is responsible for the pricing of DISN through CISA, it excluded approximately $77 million related to transitioning independent networks to DISN from its telecommunications prices. According to the Chief of the Revolving Funds Division, transition costs were expected to be offset by revenues generated from new customers, contract savings, discounts, and DISA appropriations. For example, in developing the prices for fiscal year 1998, transition costs were reduced by the amount of the collections that DISA anticipated receiving from a contractor because of savings and volume discounts. DISA’s offsetting of costs in this manner is not in accordance with DOD’s Financial Management Regulation, which states that realized gains are generally reflected in offsetting adjustments to prices established in subsequent fiscal years. In addition, this method of accounting for costs and revenue understates the actual cost incurred in providing services to the customer. If the full costs are not identified, the primary goal of the WCF financial structure—focusing attention on the full costs of operations and on managing those costs—cannot be met and management is not in a position to act. Further, WCF prices should recover operating expenses (full costs) to be incurred in the applicable fiscal year unless an exception is granted by the Under Secretary of Defense (Comptroller). In discussing this matter with the Office of the Under Secretary of Defense (Comptroller), we were informed that the office was not aware that the transition costs had increased to $127 million. The office also believed that DISA’s method of accounting for the costs and revenues was inappropriate and that all costs, other than a one-time $50 million cost exclusion, should have been included within the appropriate year’s price computation. Our review of CISA’s fiscal year 1998 telecommunications prices disclosed that losses from telecommunications operations were not recovered in accordance with DOD policy. In keeping with DOD’s policy, the reported accumulated operating loss at the end of fiscal year 1996—$49 million—should have been included in fiscal year 1998 prices, but it was not. In discussions with DISA and the Office of the Under Secretary of Defense (Comptroller), we pointed out that the $49 million was reported in DOD’s financial reports at the end of fiscal year 1996 and therefore should have been considered in developing the prices for fiscal year 1998. While the Comptroller’s office agreed, it was unable to explain why the $49 million loss from fiscal year 1996 had not been incorporated into the fiscal year 1998 prices. 
We verified that the financial results of operations for fiscal year 1997 had been incorporated into prices for fiscal year 1999. Normally, WCF prices should recoup all costs of doing business, including overhead costs. In this regard, DOD WCF requirements allow for surcharges to be used to recover general and administrative costs. To recoup its telecommunications overhead cost, CISA applies a 2 percent surcharge to the total cost of each customer’s bill. According to the revolving fund manager, the percentage factor is applied because it is less complicated than determining the actual amount of overhead cost related to each specific service. For fiscal year 1998, we found that the 2 percent surcharge will generate $44 million in revenue, which is $11 million less than the estimated overhead costs of $55 million. This shortfall is the result of not including all overhead costs when the fiscal year 1998 prices were developed. According to the Chief of the Revolving Fund Division, DISA is establishing additional surcharges to recover all overhead costs for the various services CISA offers. It is anticipated that these additional surcharges will be effective for fiscal year 2000. DOD’s WCF statute requires that the full costs of services or work performed be recovered through prices charged customers and recognizes that the fund may also receive such appropriations for the purpose of providing capital as have been specifically authorized by law. Our review of DISA’s information technology (IT) budget for fiscal year 1998 identified many instances in which the Congress appropriated non-WCF funds that can be used to subsidize the cost of the Defense Information Services business area. DISA’s fiscal year 1998 budgetary request did not clearly delineate the rationale for using appropriated funds to finance WCF-related services. Since DISA does not include the costs paid for by appropriations within the prices, customers are not charged the full cost of services offered. As the central manager for the Defense Information Infrastructure, DISA annually receives appropriations for (1) Operations and Maintenance (O&M), (2) Procurement, and (3) Research, Development, Test and Evaluation, along with authority for its working capital budgets. DISA’s appropriations may be spent for various purposes, including (1) establishing new services, (2) paying for program and technical activities, and (3) maintaining the communications and computer infrastructure. Our analysis of the budget request showed that at least $231 million of fiscal year 1998 appropriations support the WCF, including the following examples. Appendix II provides additional details regarding DISA IT appropriated funding being used to support WCF activities. Approximately $87 million of DISA’s fiscal year 1998 O&M and $19 million of Procurement authority were used to enhance DISA’s information systems security. The information systems security program was established to reduce the vulnerability of DOD’s existing telecommunications networks and data processing centers, including the systems operated as part of the WCF, to intrusion. DISA estimates that approximately 25 percent of its information systems security appropriations are used for improving the security of systems under the working capital fund. Approximately $34 million of O&M and $43 million of Procurement authority were used to cover the cost of replacing the current messaging service, the Automatic Digital Network (AUTODIN), with the Defense Message System (DMS). 
The objectives of DMS are to reduce cost, reduce staffing requirements, improve security, and improve DOD messaging services. For fiscal year 1998, the costs for AUTODIN and the network management and operational costs for DMS were funded through the CISA activity. Approximately $60 million of O&M authority was provided to DISA for implementing DISN, which as discussed previously, is replacing legacy telecommunications systems. Although DISA’s information infrastructure plan states that DISN operates on a fee-for-service or working capital basis, only the long haul component is currently paid for through the WCF. Further, our analysis of DISA’s O&M fiscal year 1998 budget identified the following instances in which appropriated funding was provided to organizational components supporting WCF activities. Approximately $138 million of DISA’s fiscal year 1998 O&M appropriation was designated for the Joint Interoperability and Engineering Organization (JIEO). JIEO’s mission is to ensure the interoperability of the Defense Information Infrastructure which includes those systems that are funded through DISA’s WCF. In addition, JIEO provides engineering support for all information transfer and network control systems managed by DISA. For example, the JIEO’s Center for Application Engineering is responsible for message handling for both DMS and AUTODIN—current components of the WCF. According to the Deputy Director for Strategic Plans and Policy, at least $6 million of the $138 million could be transferred from the O&M appropriation to the WCF. The DISN Service Center (DSC) mission is to manage provisioning, implementation, and operational control of telecommunications services under CISA. However, according to the Resource Manager for DISA operations, DSC is funded for fiscal year 1998 operations through the working capital fund and DISA’s O&M appropriation. For fiscal year 1998, DSC received approximately $7.7 million in O&M funds to cover mission support and customer service activity. Mission support and customer service costs include civilian salaries, rents, utilities, travel, and training. Approximately $7 million of DISA’s fiscal year 1998 O&M authority was used to cover the cost of operating DISA-Western Hemisphere, which is responsible for overseeing the operations of the DMCs. In addition, approximately $17 million and $5 million of DISA’s fiscal year 1998 O&M and Procurement authority, respectively, were used for the DISA Continuity of Operations and Test Facility (DCTF). DCTF provides innovative and integrated services for the DMCs, including disaster contingency planning. The DMCs are part of the WCF. After conducting a review of civilian salaries paid from its O&M appropriation, DISA identified at least $12 million of the $172 million authorized for fiscal year 1998 that could be transferred to the WCF. In addition, DISA stated that a portion of the $21 million authorized for travel would also need to be adjusted for those civilians who could be realigned to the WCF. According to DISA’s Acting Deputy Comptroller and the Chief of the Revolving Funds Division, the development of new DISA services has traditionally been paid for with appropriated funds. New services are not incorporated into the WCF until they are operational and a customer base has been identified. 
However, according to DOD working capital regulations, the primary goal of the WCF Capital Investment Program is reinvestment in the infrastructure of business areas in order to improve product and service quality and timeliness, reduce costs, and foster comparable and competitive business operations; the program applies to all activities or groups of activities within the defense agencies, including DISA. We also noted that DISA was using military departments’ telecommunications components to supplement its telecommunications architecture. Because the military departments pay for these components, they were not included in DISA’s prices. For example, the Defense Satellite Communication System (DSCS) is owned, operated, and paid for by the Air Force and Army but used by CISA—more specifically, DISN—in providing services to its customers. According to DISA, though some telecommunications traffic does pass over the DSCS, DOD has kept both operation and life-cycle replacement of DSCS, as well as other military satellite communication systems, out of the WCF by policy. For fiscal year 1998, DISA estimated that its DSCS usage would cost approximately $46 million annually if procured from the commercial sector. Further, a recent study performed by a private contractor at DISA’s request concluded that while the overall workload estimates for existing services are accurate, some of DISA’s telecommunications costs were supported by appropriated funds and were excluded from the prices charged customers. The study states that excluding such costs understates the true cost of operations. For example, the study stated that DISA’s Asynchronous Transfer Mode (ATM) prices were considered more competitive than commercial prices. However, our review showed that DISA’s ATM price excluded the cost for base support, such as floor space and power. The study also found that the WCF paid for some unique military capabilities. In commenting on the study, DISA’s Deputy Director for Strategic Plans and Policy acknowledged that in some instances appropriated funds are supporting the WCF. He further stated that DISA was reviewing its pricing structure to identify those costs that should be part of the WCF. Working capital funds can break even over time by ensuring that all direct and indirect costs of conducting business are incorporated into their prices. Yet, DISA has been excluding from its telecommunications prices millions of dollars related to transitioning independent networks to the new common-user network, prior-year losses, and overhead expenses. In addition, significant costs associated with providing data processing and telecommunications services through the WCF are not being recovered through the prices charged but, rather, are paid for by appropriations. DISA’s budgetary request does not clearly state why appropriated funds are necessary to finance WCF-related services. Using appropriated funds further understates DISA’s WCF prices and undermines business area managers’ abilities to focus on their operating costs and to make fundamental improvements in their operations. We recommend that the Director, DISA, ensure that transition costs and revenues are considered when computing telecommunications prices, in accordance with the criteria set forth in DOD’s Financial Management Regulation, and, as part of DISA’s fiscal year 2000 budget, identify (1) all appropriations used in support of WCF activities and (2) the specific reason(s) the appropriated funds are being used to support the activities of the WCF. 
DOD did not concur with our recommendations to ensure that transition costs and revenues are considered within the computation of telecommunications prices in accordance with the criteria set forth in DOD’s Financial Management Regulation. DOD also disagreed with our recommendation that DISA, as part of its fiscal year 2000 budget, identify (1) all appropriations used in support of WCF activities and (2) the specific reason(s) the appropriated funds are being used to support the activities of the WCF. DOD further commented that all costs—more specifically the $137 million discussed in the report—related to the telecommunications services have been considered in developing the prices charged customers. DISA may have considered these costs in the development of the prices, but the $137 million was either (1) not included in the fiscal year 1998 prices or (2) not considered within the framework of DOD’s Financial Management Regulation. For example, in a July 30, 1998, meeting with DOD and DISA officials, a DISA representative told us that the $11 million in overhead costs were not included within the prices for fiscal year 1998. This statement supports our analysis of the fiscal year 1998 price computation, which disclosed that the $11 million was not included. Our review showed that the revenue was approximately $44 million, whereas the cost was $55 million. Further, as discussed in the report, the Chief of the Revolving Fund Division acknowledged that there was an overhead shortfall in fiscal year 1998. To recover these costs, additional surcharges will be used starting in fiscal year 2000. In addition, according to the Office of the Under Secretary of Defense (Comptroller), although the $49 million loss discussed in the report should have been included within the prices for fiscal year 1998, it was not. Therefore, there is much evidence to indicate that $60 million of the $137 million was not included in the fiscal year 1998 prices charged customers. Further, as stated in the report, DISA’s methodology for accounting for the $77 million in transition costs is inconsistent with DOD’s Financial Management Regulation. The regulation clearly states that all estimated costs of providing the customer with goods and services should be included in the prices charged customers. It also stipulates that any realized gains should be used to offset the estimated costs in subsequent fiscal years. In addition, the Office of the Under Secretary of Defense (Comptroller) stated that it is DOD policy to treat transition costs as operating expenses and, therefore, these costs should be included in the price charged customers. However, as discussed previously, DISA did not adhere to this prescribed policy, thus understating the full cost of operations in a given fiscal year. We disagree with DOD’s position that it is not necessary to provide the Congress more detailed information on the use of appropriated funds in support of the WCF. Our review of DISA’s fiscal year 1998 budgetary request found that it did not delineate the rationale for using appropriated funds to finance WCF-related services. Significant costs associated with providing data processing and telecommunications services through the WCF are being subsidized by appropriations. Using appropriated funds further understates DISA’s WCF prices. The intent of our recommendation is to provide the Congress with information that will enable it to decide whether to continue funding DISA services, where applicable, through both appropriations and the WCF. 
In this regard, House Report 105-532, dated May 12, 1998, on the National Defense Authorization Act for Fiscal Year 1999, directs the Secretary of Defense, beginning with the fiscal year 2000 budget request, to more appropriately reflect and justify the DISA non-WCF budget request. Satisfying the language in the House Report will meet the intent of our recommendation. WCF activities rely on prompt reimbursement to be financially stable. Customer payments are used to finance subsequent operations, much as sales revenues are used in commercial enterprises. However, we found that DISA customers were not promptly paying for services provided. Additionally, in fiscal years 1996 and 1997, the DMCs did not bill customers $115 million for services provided. Further, these amounts were not recorded in DISA’s accounting records in accordance with federal accounting standards. DOD Financial Management Regulation, Volume 4, provides that “procedures shall be established for the routine aging of all amounts overdue so that appropriate actions can be taken to effect their collection. The aggressive and efficient management of receivables in the Department of Defense is an important element of DOD stewardship over public funds.” Our review of DISA WCF accounts receivable showed that DISA was not being promptly reimbursed millions of dollars for services it provided. As of January 1998, 31 percent of the reported accounts receivable, or about $173 million, was reported outstanding for over 60 days. Of the $173 million in receivables, $19.3 million was related to the DMCs and $154 million to CISA. The DMC accounts receivable were generally due from DOD customers, while CISA’s accounts receivable were generally due from other federal government entities. The following table provides aging information on DISA’s accounts receivable over 60 days old. Examples of information service business area receivables that have not been promptly paid follow. As of January 1998, the Federal Aviation Administration (FAA) had not reimbursed DISA approximately $50 million for telecommunications services. The entire $50 million was over 60 days old, with $16 million over 120 days old. According to FAA personnel, there is an approximately 2-month cycle for billing and paying for DISA telecommunications services. DISA’s Acting Comptroller and FAA personnel stated that they are discussing the use of electronic payments in order to reimburse DISA in a more timely manner. As of January 1998, DISA had not been reimbursed approximately $12 million for telecommunications services provided to the Department of State. Approximately $11 million was over 120 days old. Although DISA had routinely sent out past due notices for amounts owed, it had not inquired why State had not paid. Officials within State’s Office of the Comptroller acknowledged that the amounts were owed to DISA. A July 10, 1996, memo from the Director, Resource Management, DFAS-Denver Center, stated that DFAS-Headquarters had directed it to hold data processing costs constant by not reimbursing the DMCs for fiscal year 1996 data processing services beyond the amount paid in fiscal year 1995. As a result, the DMCs were not reimbursed approximately $3 million in fiscal year 1996 for data processing services. DISA had not reimbursed itself for approximately $11 million in DMC services and $9 million in telecommunications services as of January 1998. Both amounts were over 60 days old. 
According to DISA DMC officials, at a minimum, it takes 2 months to process a payment voucher, ask DFAS to make the transfer, and liquidate the internal receivable. DISA is currently working with DFAS to shorten its internal funds transfer process. In addition, according to DISA telecommunications officials, DISA had not collected amounts owed by its internal customers because these customers had failed to provide correct funding information. Since most of the receivables are from government entities and constitute the primary source of revenue for the WCF, these amounts should be collected. Our review of DISA documentation indicated that the DMCs performed approximately $115 million of billable work during fiscal years 1996 and 1997 for which they were not reimbursed. This represents about 8 percent of the DMC revenues for the 2 fiscal years. DISA performed this work without receiving the required funding document from its customers. DOD Financial Management Regulation, Volume 11B, Chapter 61, states that as a general rule, no work or services should be performed by a business activity unless a reimbursable order is received and accepted. Such orders constitute obligations of federal government ordering activities or advances from nonfederal government entities. Further, DISA’s method of accounting for the $115 million was not in accordance with federal accounting standards. Based upon information provided by DISA, as of November 1997, DFAS had not reimbursed DISA $11.7 million and $32.3 million in fiscal years 1996 and 1997, respectively, for work performed. According to the DOD Comptroller’s office, DFAS did not reimburse DISA for all services provided in fiscal year 1996 because the amount DFAS budgeted was less than the cost incurred. DFAS stated that the primary cause for nonpayment in fiscal year 1997 was that work had been reclassified into a different category, which resulted in a higher price for the service. DFAS noted that this occurred after its fiscal year 1997 budget had been set and its level of funding approved. In discussing this issue with the Office of the Under Secretary of Defense (Comptroller), we were informed that discussions are being held with DFAS and DISA to determine the most appropriate means to resolve the nonpayment issues. Under DOD’s policy, prices are set at the beginning of the fiscal year and are to remain in effect for the entire year. Similarly, customer budgets are to include sufficient funds to pay for the services requested. This process should result in the WCF breaking even. However, during fiscal year 1997, DISA initiated efforts to better define the costs associated with its IBM and UNISYS mainframe processing. The specific services that DISA determined not to be related to mainframe processing were placed in another category. According to DFAS personnel, DFAS was charged a higher price for work as a result of the reclassification, and these higher prices were not anticipated when its fiscal year 1997 budget was developed, about 2 years prior to the start of the fiscal year. DISA’s efforts to reclassify its work are consistent with the WCF concept because they will result in more costs being aligned with the appropriate customers. Further, this effort should provide for a more accurate accumulation of DISA’s cost of operations and thereby enable DISA to develop more realistic prices for its services. However, DISA’s efforts were not coordinated with the overall WCF budget-setting process. 
As a result of reclassifying the work into a higher cost category, the DFAS budget approved by the Office of the Secretary of Defense (Comptroller) was not sufficient to pay for the higher cost incurred. We also found other instances in which DISA was not reimbursed for all services provided. According to information provided by the Defense Logistics Agency (DLA), it did not reimburse DISA approximately $25.6 million during fiscal year 1996 for services provided. For fiscal year 1996, DISA billed DLA approximately $101.6 million, but DLA had only budgeted $76 million to reimburse DISA. According to DLA, the Office of the Secretary of Defense decided that DLA would have to reimburse DISA only the budgeted amount and DISA would have to absorb the $25.6 million shortfall. A similar shortfall occurred during fiscal year 1997. In an August 7, 1997, memo, signed by DISA’s Acting Comptroller and DLA’s Comptroller, it was agreed that DLA’s billing for fiscal year 1997 would be capped at $82 million—the maximum amount DLA would have to pay for the services DISA provided. According to DLA, the billings from DISA would have been about $108 million if the cap had not been in place. In addition, the funding shortages can be attributed to (1) workload projections not being available when the fiscal years 1996 and 1997 budgets were developed and (2) DLA’s approved funding not being commensurate with prices charged by DISA. The Marine Corps did not reimburse DISA $1 million in fiscal year 1996 and $6 million in fiscal year 1997. According to the Marine Corps official responsible for the information technology budget, DISA formally agreed to bill the Marine Corps up to the amount budgeted for each fiscal year. However, for each fiscal year, DISA’s actual cost incurred in providing services to the Marine Corps exceeded the budgeted amount. As a result of the agreement, DISA was not reimbursed for the total costs it incurred. Further, DOD’s Financial Management Regulation prohibits the recognition of revenue and the corresponding recording of accounts receivable in the absence of funding authority on the part of the requesting activity. Therefore, DISA did not report the $115 million of unbilled work for fiscal years 1996 and 1997 as part of its accounts receivable. Instead, DISA reported the $115 million through a work-in-process account. Federal accounting standards require that accounts receivable be established when a federal entity establishes a claim based on goods or services provided. Being reimbursed for work performed is essential to the Defense Information Services business area’s financial stability since this is the principal means through which it receives the funds needed to cover operating expenses. Since virtually all of the receivables are from government activities, it seems reasonable to expect that they should be collected. Nevertheless, about one-third of the business area’s accounts receivable have been outstanding for more than 60 days. While the lack of prompt reimbursement is a concern, the failure to be reimbursed for services provided is a more pressing issue to be addressed. 
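The routine aging of overdue amounts that the Financial Management Regulation calls for can be illustrated with a brief sketch. The customer records below are hypothetical; the sketch simply shows how billed amounts would be grouped by age so that overdue balances can be identified for follow-up.

```python
# Minimal sketch of a receivables aging schedule; the records are hypothetical.
from datetime import date

receivables = [
    {"customer": "Customer A", "billed": date(1997, 9, 15),  "amount": 16_000_000},
    {"customer": "Customer B", "billed": date(1997, 11, 20), "amount": 34_000_000},
    {"customer": "Customer C", "billed": date(1998, 1, 5),   "amount": 12_000_000},
]

as_of = date(1998, 1, 31)
buckets = {"current (0-60 days)": 0, "61-120 days": 0, "over 120 days": 0}

for record in receivables:
    age = (as_of - record["billed"]).days
    if age <= 60:
        buckets["current (0-60 days)"] += record["amount"]
    elif age <= 120:
        buckets["61-120 days"] += record["amount"]
    else:
        buckets["over 120 days"] += record["amount"]

for bucket, amount in buckets.items():
    print(f"{bucket:>22}: ${amount:,.0f}")
```

An aging schedule of this kind also makes clear which balances, such as those more than 120 days old, warrant direct inquiries to the customer rather than another routine past due notice.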
We recommend that the Under Secretary of Defense (Comptroller) direct (1) DOD activities to follow the existing DOD Financial Management Regulation by providing funding documents to DISA for the amount of services being requested before DISA begins work, (2) DOD activities to reimburse DISA for the full amount of services provided, and (3) DISA to record amounts it is owed for services provided in accordance with federal accounting standards. DOD agreed with our recommendations to direct (1) DOD activities to provide DISA funding documents for the amount of services requested prior to work beginning, (2) DOD activities to reimburse DISA for the full amount of services provided, and (3) DISA to record amounts it is owed for services provided in accordance with federal accounting standards. However, DOD stated that we did not recognize DISA’s progress in obtaining quicker reimbursement for the services provided by aggressively following up on outstanding amounts and using electronic payments. In July 1998, we contacted FAA to inquire about the progress being made in establishing an electronic payment process with DISA. An FAA representative stated that although discussions had been held with DISA concerning this matter, nothing had been finalized. Further, DISA-WESTHEM informed us that as of July 1998, it was estimating that DFAS will not reimburse DISA approximately $40 million for services provided in fiscal year 1998. Based upon these representations, it is not clear how much progress has actually been made. Meaningful and reliable financial reports are essential to allow DISA to monitor the financial results of operations and set realistic prices to charge the customer. Reliable financial reports are also necessary to enable the Congress to exercise its oversight responsibility. However, weaknesses within DISA’s internal control and accounting systems have hindered the development of accurate financial reports. The primary cause of these weaknesses is the Industrial Fund Accounting System (IFAS), which is used by the DMCs. As noted in DOD’s Chief Financial Officer’s status report for fiscal year 1997, IFAS cannot provide financial data that are complete, reliable, consistent, timely, and responsive to the needs of agency management. Because of these weaknesses, the DOD IG was unable to express an opinion on DISA’s fiscal year 1997 financial statements. These problems are not unique to DISA. Since the concept of DBOF was put forth in February 1991, we have continually reported that DOD has experienced difficulty with accurately reporting on the results of operations for the WCFs. Because of the financial reporting problems and other inefficiencies in the operations of the WCFs, the National Defense Authorization Act for Fiscal Year 1997 required DOD to develop an improvement plan by September 30, 1997. In its response, DOD acknowledged that “[s]ystem deficiencies are a major reason for unreliable and unsupported accounting information.” Because of system deficiencies that resulted in unverifiable account balances and inadequate audit trails, the DOD IG was unable to render an opinion on DISA’s financial statements for fiscal year 1997. The DOD IG found that (1) undistributed collections and disbursements were posted to accounts receivable and payable, respectively, and could not be verified and (2) beginning and ending balances for property, plant, and equipment could not be reconciled. 
Conceptually, collections and disbursements are considered undistributed when they have been made and reported to the Treasury but not recorded in DOD’s accounting records. Therefore, DOD adjusts the (1) accounts receivable balances based on the difference between the collections recorded in the accounting system’s general ledger and the collections reported to the Treasury and (2) accounts payable balance based on the difference between the disbursements recorded in the accounting system’s general ledger and the disbursements reported to the Treasury. In accordance with DOD guidance—Financial Management Regulation, Volume 11B, Chapter 54—the DMCs’ undistributed collections and disbursements were transferred to accounts receivable and payable and reported in the financial statements at the end of fiscal year 1997. These transfers resulted in accounts receivable and payable being reduced by $98 million and $337 million, respectively. In conducting its audit of DISA’s financial statements for fiscal year 1997, the DOD IG was unable to verify the accuracy and reliability of these adjustments. Furthermore, in the case of accounts payable, the reduction resulted in an abnormal debit balance of $50 million. Although DFAS and DISA are aware of the problem, they have not identified the specific cause. Moreover, federal accounting standards do not provide for offsetting undistributed transactions to accounts receivable and payable. Further, because of system interface problems between IFAS and the Defense Property Accountability System (DPAS), the DOD IG was unable to audit the property, plant, and equipment line item. These problems resulted in incorrect postings to depreciation and fixed asset accounts. In addition, regular periodic reconciliations were not performed to correct the errors. The amount reported for DMC property, plant, and equipment ($198.7 million) in DISA’s Statement of Financial Position represents 23 percent of total assets. Our September 1997 report identified similar problems between IFAS and DPAS. We identified over $100 million in differences between property and accounting records and found that procedures were not adequate to control rejected transactions and ensure that discrepancies were corrected promptly. This situation occurred because the required reconciliations were not performed. The inability to accurately account for property, plant, and equipment could affect the accuracy of DMC prices because depreciation is a major cost element included in the prices. For fiscal year 1998, the amount of depreciation included in DMC prices was 14 percent for IBM and 8 percent for UNISYS. Given the myriad of problems discussed above, there is no assurance that the amount for depreciation is accurate. If the accuracy of a major cost element is questionable, the accuracy of the price being charged is questionable. Our analysis of DISA’s fiscal year 1997 financial data disclosed numerous instances in which the revenue and cost for nonmainframe services were not accurately reported. Overall, we identified (1) 11 DMCs that reported revenues without any corresponding cost for 19 C-Goal categories of service and (2) 12 DMCs that reported cost without any related revenue for 20 categories of service. Revenue without corresponding cost totaled approximately $5 million, while cost for which no corresponding revenue was recorded totaled about $3 million. Examples of each condition are as follows. 
Ogden DMC reported revenues with no costs, including approximately $1.5 million in revenue for Direct Customer Support, $169,000 for Network Control, and $58,000 for services provided to a specific customer.

Montgomery DMC reported no costs for four of the five categories of service with revenues totaling $823,000.

Chambersburg DMC reported costs with no revenues for four of the eight services, including about $627,000 for Defense Information Integrated Engineering, $96,000 for Information Systems Support, $91,000 for Output Distribution, and $69 for Network and Program Management.

DISA-WESTHEM officials acknowledged that the accounting for DMC revenues and costs has been unreliable. They further stated that DISA has focused much of its attention on the management of the mainframe processing workload and has little overall visibility over the other types of services the DMCs were providing to their customers. These services included computer repair and local area network operations that are not associated with mainframe operations. As part of DISA’s DMC consolidations, these services will be offered by the Regional Information Services locations. DISA has initiated efforts to improve its oversight of C-Goal services. For example, the Resource Management Branch has developed new unit identification codes to reduce the risk of misclassifying revenue and cost, as well as new budgeting and accounting procedures manuals, including guidance for pricing nonmainframe services. DISA-WESTHEM also appointed a project manager who has begun efforts to standardize the categories of services being offered. Standardizing services is an important first step in gaining visibility and oversight of the various services being offered. The outcome of these efforts should improve DISA’s ability to develop accurate projections of operating costs and prices and to evaluate operating results. Additionally, the accurate recording of revenues and costs is important to the successful operations of the Regional Information Services locations, which are part of DISA’s overall plan to further consolidate its megacenter operations. According to DISA’s consolidation plan, each regional location must be self-sustaining as required by the WCF. Further, an official within the Office of the Under Secretary of Defense (Comptroller) stated that locations operating at a loss will be closed.

The types of problems identified by the DOD IG and discussed in DOD’s fiscal year 1997 Federal Managers’ Financial Integrity Act (FMFIA) report and the DFAS Status Report are not unique to DISA. Since the concept of DBOF was put forth in February 1991, we have repeatedly identified weaknesses with the accuracy and reliability of the financial reports prepared on the results of operations. DOD itself has also recognized the inadequacies in financial reports—in the Acting Comptroller’s February 2, 1993, letter to the congressional Defense committees; the September 24, 1993, Defense Business Operations Fund improvement plan; and DOD’s February 2, 1994, response to our October 1993 letter on concerns we had with the Defense Business Operations Fund improvement plan. Further, DOD’s fiscal year 1997 FMFIA report noted deficiencies with WCF accounting and reporting. More specifically, DOD’s CFO Status Report notes that IFAS cannot provide financial data that are complete, reliable, consistent, timely, and responsive to the needs of agency management.
To resolve these problems, DOD stated that it has undertaken an alternative analysis to determine the most cost-effective means of implementing a compliant system. However, DOD has not specified a date for completion of the analysis. Because of congressional concern over DOD’s inability to resolve these long-standing problems, the National Defense Authorization Act for Fiscal Year 1997 directed DOD to prepare a plan to improve the management and operations of the WCFs. Among other things, the act specifically required DOD to address the issue involving financial reporting. As discussed in our recent report, DOD’s September 30, 1997, response clearly articulated the problems hindering accurate financial reporting and discussed the decisions made to resolve the problems. However, the plan does not (1) identify the specific tasks that need to be performed, (2) establish accountability for ensuring that the tasks are completed when more than one DOD organizational entity is involved, and (3) establish milestones for ensuring that the tasks are completed promptly. Our report recommended that DOD develop a detailed implementation plan that (1) identifies the specific actions that need to be taken, (2) establishes milestones, and (3) clearly delineates responsibilities for performing the specific tasks. In his May 14, 1998, response to our report, the Under Secretary of Defense (Comptroller) concurred with the overall findings and recommendations. The Comptroller noted that the Office of the Secretary of Defense Revolving Fund Directorate has established three working groups that will develop specific implementation and execution plans and procedures for financial reporting.

Accurate and credible financial data are essential for DISA managers to ascertain if realistic prices are being established. Reliable financial information is also necessary to enable the Congress to exercise its oversight responsibilities. Although DOD has acknowledged accounting and reporting problems and developed various improvement plans, the financial reporting problems confronting the WCFs today are essentially the same as they have been since the funds’ inception. Until the accuracy and reliability of the financial reports improve, DISA will continue to be in the untenable position of attempting to manage and fulfill its fiduciary responsibility based on questionable data.

DOD expressed concern that our findings focused on DISA and that the report did not recognize the systemic nature of the deficiencies within DOD. We disagree. The report states that these problems were not unique to DISA and that since the concept of DBOF was put forth in February 1991, we have continually reported on DOD’s difficulties in reporting accurately on the results of operations for the working capital funds. Further, the report also recognizes that the responsibility for resolving these problems rests with DOD, not DISA. In addition, some of the financial reporting weaknesses discussed above are the result of DISA personnel not following procedures in the recording of revenue and cost at the DMCs. As noted in our report, the DMCs have experienced difficulty in accurately recording revenues and costs. DISA management needs to ensure that such weaknesses do not continue and contribute to overall weaknesses within DOD’s accounting systems.
| Pursuant to a congressional request, GAO provided information on the Defense Information Systems Agency's (DISA) price-setting process, focusing on: (1) whether DISA is being reimbursed for the services provided; and (2) the accuracy of DISA's financial management information. GAO noted that: (1) DISA has difficulty: (a) setting prices for information technology services that result in the recovery of the full cost of doing business; (b) getting reimbursed for the services it provides; and (c) producing reliable financial information on the Defense Information Services business area; (2) these weaknesses impair the business area's ability to focus management attention on the full costs of carrying out operations and managing those costs effectively; (3) DISA is embarking upon a major effort to consolidate its Defense megacenters (DMC) and increase their efficiency by allowing them to specialize in mainframe processing and thereby lower their prices; (4) by consolidating the mainframe processing from the current 16 DMC sites to 6 and optimizing mainframe operations, DISA anticipates that planned savings will be passed on to its customers through reduced prices; (5) however, the reported cost of doing business varies considerably from computer center to computer center; (6) an analysis of the cost differences would provide management the opportunity to understand the causes of the differences and thereby help identify inefficiencies and make improvements in the services provided; (7) the DMCs have difficulty estimating future workload; in fiscal year 1997, the Department of Defense's (DOD) records showed that the estimated versus actual workload varied from 15 percent to 174 percent for individual centers; (8) because the DMCs underestimated the amount of work they would perform in fiscal year 1997 for IBM and UNISYS mainframe services, they reported a net profit of $90 million, which is 13 percent of the reported fiscal year 1997 revenue of approximately $682 million; (9) in setting prices for telecommunications services, the Communications Information Services Activity did not incorporate about $137 million of costs related to transitioning independent networks to DISA's new common-user network, prior-year losses, and overhead expenses; (10) because these costs were not included, the prices charged for services were not based on the full costs incurred; (11) since business area costs were offset by appropriations, its prices were further understated; (12) as of January 1998, DISA reports showed that 31 percent of the business area's receivables, or about $173 million, had been outstanding for more than 60 days; and (13) weaknesses within DISA's internal control and accounting systems have hindered the development of accurate financial reports. |
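As a quick consistency check on two of the ratios quoted in the summary above (the $90 million net profit against roughly $682 million in fiscal year 1997 revenue, and the $173 million in receivables outstanding more than 60 days against the 31 percent share cited), the following is a minimal, illustrative sketch; it is not drawn from the original report, and the variable names are hypothetical.

    # Illustrative cross-check of ratios quoted in the summary; not from the original report.
    net_profit = 90.0             # reported FY 1997 net profit, in millions of dollars
    revenue = 682.0               # reported FY 1997 revenue, in millions of dollars
    receivables_over_60 = 173.0   # receivables outstanding more than 60 days, in millions

    profit_margin = net_profit / revenue                    # ~0.132, i.e., about 13 percent
    implied_total_receivables = receivables_over_60 / 0.31  # ~558 million, if $173M is 31 percent

    print(f"Profit margin: {profit_margin:.1%}")
    print(f"Implied total receivables: about ${implied_total_receivables:.0f} million")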
The F-35 program made progress in 2012 on several fronts. The program met or substantially met most of its key management and development testing objectives for the year. We also found that the program made progress in addressing key technical risks, as well as improving software management, manufacturing, and supply processes.

The F-35 program met or substantially met most of its key management objectives established for calendar year 2012. The program office annually establishes major management objectives that it wants to achieve in the upcoming year. The F-35 program achieved 7 of its 10 primary objectives in 2012. Those included, among other things, the completion of development testing on early increments of software, the beginning of lab testing for both variations of the helmet mounted display, the beginning of pilot training for two aircraft variants, and the completion of negotiations on the restructured development contract. Although the program did not complete its software block 3 critical design review as planned in 2012, it did successfully complete its block 3 preliminary design review in November 2012 and the critical design review in late January 2013. The program did not meet its objectives to (1) deliver 40 production aircraft in 2012 and (2) receive approval from the Defense Contract Management Agency of the contractor’s plan for correcting deficiencies in its system for tracking and reporting cost and schedule progress.

The F-35 development flight test program also substantially met 2012 expectations with some revisions to original plans. The program exceeded its planned number of flights by 18 percent, although it fell short of its plan in terms of test points flown by about 3 percent, suggesting that the flights flown were not as productive as expected. Test officials had to make several adjustments to plans during the year due to operating and performance limitations with aircraft and late releases of software to test. As a result, none of the three variants completed all of their planned 2012 baseline points, but the test team was able to add and complete some test points that had been planned for future years. Testing accomplished on each of the aircraft variants in 2012 included:

Conventional takeoff and landing variant (F-35A)—accomplished high angle of attack testing, initial weapons separation, engine air start, expansion of the airspeed and altitude envelopes, and evaluated flying qualities with internal and external weapons.

Short takeoff and vertical landing variant (F-35B)—accomplished the first weapons release, engine air start tests, fuel dump operations, flight envelope expansion with weapons loaded, radar signature testing, and tested redesigned air inlet doors for vertical lift operations.

Carrier suitable variant (F-35C)—conducted speed and altitude range verification and flights with external weapons, prepared for simulated carrier landings, and conducted shore-based tests of a redesigned arresting hook.

In 2012, the F-35 program also made considerable progress in addressing four areas of technical risk that, if left unaddressed, could substantially degrade the F-35’s capabilities and mission effectiveness. However, additional work remains to fully address those risks. These risk areas and the actions taken in 2012 are discussed below:

1. Helmet mounted display (HMD)—DOD continued to address technical issues with the HMD system.
The original helmet mounted display, integral to mission systems, encountered significant technical deficiencies and did not meet warfighter requirements. The program is pursuing a dual path by developing a second, less capable helmet while working to fix the first helmet design. In 2012, DOD began dedicated ground and flight testing to address these issues. Both variations of the helmet mounted display are being evaluated, and program and contractor officials told us that they have increased confidence that the helmet deficiencies will be fixed. DOD may make a decision in 2013 as to which helmet to procure.

2. Autonomic Logistics Information System (ALIS)—ALIS is an important tool to predict and diagnose aircraft maintenance and supply issues. ALIS systems with limited capability are in use at training and testing locations. More capable versions of ALIS are being developed, and program and contractor officials believe that the program is on track to fix previously identified shortcomings and field the fully capable system in 2015. Limited progress was made in 2012 on developing a smaller, transportable version needed to support unit-level deployments to operating locations.

3. Arresting hook system—The carrier variant arresting hook system was redesigned after the original hook was found to be deficient, which prevented active carrier trials. The program accomplished risk reduction testing of a redesigned hook point to inform this new design. The preliminary design review was conducted in August 2012 and the critical design review in February 2013. Flight testing of the redesigned system is slated for late 2013.

4. Structural durability—Over time, testing has discovered bulkhead and rib cracks on the aircraft. Structural and durability testing to verify that all three variants can achieve their expected life and identify life-limited parts was completed in 2012. The program is testing some redesigned structures and planning other modifications. Officials plan to retrofit and test a production aircraft already built and make changes to the production line for subsequent aircraft. Current projections show the aircraft and modifications remain within weight targets.

In 2012, the F-35 aircraft contractor and program office took steps to improve the program’s software management and output. The program began the process of establishing a second system integration laboratory, adding substantial testing and development capacity. The program also began prioritizing and focusing its resources on incremental software development as opposed to the much riskier concurrent development approach. In addition, the program began implementing improvement initiatives recommended by an independent software review, and evaluated the possible deferral of some of the aircraft’s capabilities to later blocks or moving them outside of the current F-35 program altogether. At the same time, program data regarding software output showed improvement. For example, program officials reported that the time it took to fix software defects decreased from 180 days to 55 days, and the time it took to build and release software for testing decreased from 187 hours to 30 hours.

Key manufacturing metrics and discussions with defense and contracting officials indicate that F-35 manufacturing and supply processes improved during 2012. While initial F-35 production overran target costs and delivered aircraft late, the latest data through the end of 2012 shows labor hours decreasing and deliveries accelerating.
The aircraft contractor’s work force has gained important experience and processes have matured as more aircraft are built. We found that the labor hours needed to complete aircraft at the prime contractor’s plant decreased, labor efficiency since the first production aircraft improved, time to manufacture aircraft in the final assembly area declined, factory throughput increased, and the amount of traveled work declined. In addition, program data showed that the reliability and predictability of the manufacturing processes increased while at the same time aircraft delivery rates improved considerably. Figure 1 illustrates the improvement in production aircraft delivery time frames by comparing actual delivery dates against the dates specified in the contracts. Ensuring that the F-35 is affordable and can be bought in the quantities and time frames required by the warfighter will be of paramount concern to the Congress, U.S. military and international partners. As we recently reported, the acquisition funding requirements for the United States alone are currently expected to average $12.6 billion per year through 2037, and the projected costs of operating and sustaining the F-35 fleet, once fielded, have been deemed unaffordable by DOD officials. In addition, the program faces challenges with software development and continues to incur substantial costs for rework to fix deficiencies discovered during testing. As testing continues additional changes to design and manufacturing processes will likely be required, while production rates continue to increase. We recently concluded that while the March 2012 acquisition program baseline places the F-35 program on firmer footing, the aircraft are expected to cost more and deliveries to warfighters will take longer than previously projected. The new baseline projects the need for a total of $316 billion in development and procurement funding from 2013 through 2037, or an average of $12.6 billion annually over that period (see figure 2). Maintaining this level of sustained funding will be difficult in a period of declining or flat defense budgets and competition with other “big ticket items” such as the KC-46 tanker and a new bomber program. In addition, the funding projections assume the financial benefits of the international partners purchasing at least 697 aircraft. If fewer aircraft are procured in total or in smaller annual quantities—by the international partners or the United States—unit costs will likely rise according to analysis done by the Office of the Secretary of Defense (OSD) Cost Assessment and Program Evaluation (CAPE) office. In addition to the costs for acquiring aircraft, we found that significant concerns and questions persist regarding the cost to operate and sustain the F-35 fleet over the coming decades. The current sustainment cost projection by CAPE for all U.S. aircraft, based on an estimated 30-year service life, exceeds $1 trillion. Using current program assumptions of aircraft inventory and flight hours, CAPE recently estimated annual operating and support costs of $18.2 billion for all F-35 variants compared to $11.1 billion spent on legacy aircraft in 2010. DOD officials have declared that operating and support costs of this magnitude are unaffordable and the department is actively engaged in evaluating opportunities to reduce those costs, such as basing and infrastructure reductions, competitive sourcing, and reliability improvements. 
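The funding averages cited above follow directly from the totals in the new baseline and the cost estimates quoted. The following is a rough, illustrative restatement of that arithmetic; it is not taken from the report, and the variable names are hypothetical.

    # Rough restatement of the acquisition and sustainment figures cited above; illustrative only.
    total_funding = 316.0                  # projected development and procurement funding, $ billions
    first_year, last_year = 2013, 2037
    years = last_year - first_year + 1     # 25 budget years, inclusive

    average_annual = total_funding / years # ~12.6, consistent with the $12.6 billion annual average
    print(f"Average annual acquisition funding: ${average_annual:.1f} billion over {years} years")

    # CAPE's estimated annual operating and support costs vs. 2010 spending on legacy aircraft.
    f35_os_annual, legacy_spent_2010 = 18.2, 11.1
    print(f"Estimated annual O&S increase over legacy: ${f35_os_annual - legacy_spent_2010:.1f} billion")

The sketch simply underscores the sustained-funding point: even before operating and support costs are considered, the baseline implies a commitment on the order of $12 billion to $13 billion every year for roughly a quarter century.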
Because of F-35 delays and uncertainties, the military services have made investments to extend the service lives of legacy F-16 and F-18 aircraft at a cost of $5 billion (in 2013 dollars). The Navy is also buying new F/A-18E/F Super Hornets at a cost of $3.1 billion (in then-year dollars) to bridge the gap in F-35 deliveries and mitigate projected shortfalls in fighter aircraft force requirements. As a result, the services will incur additional future sustainment costs to support these new and extended-life aircraft, and will have a difficult time establishing and implementing retirement schedules for existing fleets.

Our report found that over time, F-35 software requirements have grown in size and complexity and the contractor has taken more time and effort than expected to write computer code, integrate it on aircraft and subsystems, conduct lab and flight tests to verify it works, and correct defects found in testing. Although recent management actions to refocus software development activities and implement improvement initiatives appeared to be yielding benefits, software continued to be a very challenging and high-risk undertaking, especially for mission systems. While most of the aircraft’s software code has been developed, a substantial amount of integration and test work remains before the program can demonstrate full warfighting capability. About 12 percent of mission systems capabilities have now been validated, up from 4 percent about a year ago. However, progress on mission systems was limited in 2012 by contractor delays in software delivery, limited capability in the software when delivered, and the need to fix problems and retest multiple software versions. Further development and integration of the most complex elements—sensor fusion and helmet mounted display—lie ahead. F-35 software capabilities are being developed, tested, and delivered in three major blocks and two increments—initial and final—within each block. The testing and delivery status of the three blocks is described below:

Block 1.0, providing initial training capability, was largely completed in 2012, although some final development and testing will continue. Also, the capability delivered did not fully meet expected requirements relating to the helmet, ALIS, and instrument landing capabilities.

Block 2.0, providing initial warfighting capabilities and limited weapons, fell behind due to integration challenges and the reallocation of resources to fix block 1.0 defects. The initial increment, block 2A, was delivered late and was incomplete. Full release of the final increment, block 2B, has been delayed until November 2013 and will not be complete until late 2015.

Block 3.0, providing full warfighting capability, to include sensor fusion and additional weapons, is the capability required by the Navy and Air Force for declaring their respective initial operational capability dates. Thus far, the program has made little progress on block 3.0 software. The program intends initial block 3.0 to enter flight test in 2013. This is rated as one of the program’s highest risks because of its complexity.

Although our recent review found that F-35 manufacturing, cost, and schedule metrics have shown improvement, the aircraft contractor continues to make major design and tooling changes and alter manufacturing processes while development testing continues.
Engineering design changes from discoveries in manufacturing and testing are declining in number, but are still substantial and higher than expected from a program this far along in production. Further, the critical work to test and verify aircraft design and operational performance is far from complete. Cumulatively, since the start of developmental flight testing, the program has accomplished 34 percent of its planned flights and test points. For development testing as a whole, the program verified 11.3 percent of the development contract specifications through November 2012. As indicated in table 1, DOD continues to incur financial risk from its plan to procure 289 aircraft for $57.8 billion before completing development flight testing. This highly concurrent approach to procurement and testing increases the risk that the government will incur substantial costs to retrofit (rework) already produced aircraft to fix deficiencies discovered in testing. In fact, the F-35 program office projects rework costs of about $900 million to fix the aircraft procured on the first four annual procurement contracts. Substantial rework costs are also forecasted to continue through the 10th annual contract (fiscal year 2016 procurement), but at decreasing amounts annually and on each aircraft. The program office projects about $827 million more to rework aircraft procured under the next 6 annual contracts.

We have reported on F-35 issues for over a decade and have found that the magnitude and persistence of the program’s cost and schedule problems can be largely traced to (1) decisions at key junctures made without adequate product knowledge; and (2) a highly concurrent acquisition strategy that significantly overlapped development, testing, and manufacturing activities. Over that time, our reports included numerous recommendations aimed at reducing risk in these areas and improving the chances for successful outcomes. DOD has implemented our recommendations to varying degrees. For example, in 2001 we recommended that DOD delay the start of system development until the F-35’s critical technologies were fully mature. DOD disagreed with that recommendation and chose to begin the program with limited knowledge about critical technologies. Several years later, we recommended that DOD delay the production decision until flight testing had shown that the F-35 would perform as expected, and although DOD partially concurred with our recommendation, it chose to initiate production before sufficient flight testing had been done. Citing concerns about the overlap—or concurrency—among development, testing, and production, we have recommended that DOD limit annual production quantities until F-35 flying qualities could be demonstrated. Although DOD disagreed with our recommendation at the time, it has since restructured the F-35 program and, among other things, deferred the production of hundreds of aircraft into the future, thus addressing the intent of our recommendation and reducing program risk. Appendix II lists these and other key recommendations we have made over time, and identifies the actions DOD has taken in response.

In conclusion, while the recent restructuring of the F-35 program placed it on a firmer footing, tremendous challenges still remain.
The program must fully validate the F-35’s design and operational performance against warfighter requirements, while at the same time make the system affordable so that the United States and partners can acquire new capabilities in the quantity needed and can then sustain the force over its life cycle. Ensuring overall affordability will be a challenge as more austere budgets are looming. Chairman Durbin, Ranking Member Cochran, and members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have. For further information on this statement, please contact Michael Sullivan at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement are Bruce Fairbairn, Travis Masters, Marvin Bonner, W. Kendal Roberts, Megan Porter, Erin Stockdale, and Abby Volk.

Key program events, the primary GAO message at each point, and DOD's response and actions (appendix table). Program baselines referenced: October 2001 (system development start baseline), March 2007 (approved baseline), June 2010 (Nunn-McCurdy), and March 2012 (approved baseline).

Start of system development and demonstration approved.
Primary GAO message: Critical technologies needed for key aircraft performance elements not mature. Program should delay start of system development until critical technologies mature to acceptable levels.
DOD response and actions: DOD did not delay start of system development and demonstration, stating technologies were at acceptable maturity levels and that it would manage risks in development.

The program undergoes a re-plan to address higher than expected design weight, which added $7 billion and 18 months to the development schedule.
Primary GAO message: We recommended that the program reduce risks and establish an executable, knowledge-based business case with an evolutionary acquisition strategy.
DOD response and actions: DOD partially concurred but did not adjust its strategy, believing that its approach was balanced between cost, schedule, and technical risk.

Program sets in motion plan to enter production in 2007 shortly after first flight of the non-production representative aircraft.
Primary GAO message: The program planned to enter production with less than 1 percent of testing complete. We recommended the program delay investing in production until flight testing showed that the JSF performed as expected.
DOD response and actions: DOD partially concurred but did not delay the start of production because it believed the risk level was appropriate. Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp-up of production.

Primary GAO message: Progress was being made, but concerns remained about undue overlap in testing and production. We recommended limiting annual production quantities to 24 a year until flying qualities were demonstrated.
DOD response and actions: DOD non-concurred and felt that the program had an acceptable level of concurrency and an appropriate acquisition strategy.

DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources.
Primary GAO message: We believed the new plan increased risks and that DOD should revise it to address testing, management reserves, and manufacturing concerns. We determined that the cost estimate was not reliable and that a new cost estimate and schedule risk assessment were needed.
DOD response and actions: DOD did not revise the risk plan or restore testing resources, stating that it would monitor the new plan and adjust it if necessary. Consistent with a report recommendation, a new cost estimate was eventually prepared, but DOD refused to do a risk and uncertainty analysis.

The program increased the cost estimate and added a year to development but accelerated the production ramp-up. An independent DOD cost estimate (JET I) projected even higher costs and further delays.
Primary GAO message: Moving forward with an accelerated procurement plan and use of cost reimbursement contracts is very risky. We recommended the program report on the risks and mitigation strategy for this approach.
DOD response and actions: DOD agreed to report its contracting strategy and plans to Congress and to conduct a schedule risk analysis. The program completed the first schedule risk assessment, with plans to update it semi-annually. The Department announced a major restructuring, reducing procurement and moving to fixed-price contracts.

The program was restructured to reflect findings of a recent independent cost team (JET II) and an independent manufacturing review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased.
Primary GAO message: Costs and schedule delays inhibit the program’s ability to meet needs on time. We recommended the program complete a full comprehensive cost estimate and assess warfighter and IOC requirements. We suggested that Congress require DOD to tie annual procurement requests to demonstrated progress.
DOD response and actions: DOD continued restructuring, increasing test resources and lowering the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. Cost increases later resulted in a Nunn-McCurdy breach. Military services are currently reviewing capability requirements as we recommended.

Restructuring continued with additional development cost increases, schedule growth, further reduction in near-term procurement quantities, and a decreased rate of increase for future production. The Secretary of Defense placed the STOVL variant on a 2-year probation, decoupled STOVL from the other variants, and reduced STOVL production plans for fiscal years 2011 to 2013.
Primary GAO message: The restructuring actions are positive and, if implemented properly, should lead to more achievable and predictable outcomes. Concurrency of development, test, and production is substantial and poses risk to the program. We recommended the program maintain funding levels as budgeted, establish criteria for STOVL probation, and conduct an independent review of software development, integration, and test processes.
DOD response and actions: DOD concurred with all three of the recommendations. DOD lifted STOVL probation, citing improved performance. Subsequently, DOD further reduced procurement quantities, decreasing funding requirements through 2016. The initial independent software assessment has begun, and ongoing reviews are planned through 2012.

The program established a new acquisition program baseline and approved the continuation of system development, increasing costs for development and procurements and extending the period of planned procurements by 2 years.
Primary GAO message: Extensive restructuring places the program on a more achievable course. Most of the program’s instability continues to be concurrency of development, test, and production. We recommended that the Cost Assessment and Program Evaluation office conduct an analysis of the impact of lower annual funding levels and that the JSF program office conduct an assessment of the supply chain and transportation network.
DOD response and actions: DOD partially concurred with conducting an analysis of the impact of lower annual funding levels and concurred with assessing the supply chain and transportation network.
F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. Washington, D.C.: March 28, 2013.
Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012.
Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012.
Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD’s Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011.
Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010.
Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government’s Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009.
Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008.
Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007.
Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006.
Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The F-35 Lightning II, the Joint Strike Fighter, is DOD's most costly and ambitious aircraft acquisition. The program is developing and fielding three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The F-35 is critical to long-term recapitalization plans as it is intended to replace hundreds of existing aircraft. This will require a long-term sustained funding commitment. Total U.S. investment is nearing $400 billion to develop and procure 2,457 aircraft through 2037. Fifty-two aircraft have been delivered through 2012. The F-35 program has been extensively restructured over the last 3 years to address prior cost, schedule, and performance problems. DOD approved a new acquisition program baseline in March 2012. This testimony is largely based on GAO's recently released report, GAO-13-309. This testimony discusses (1) progress the F-35 program made in 2012, and (2) major risks that the program faces going forward. GAO's work included analyses of a wide range of program documents and interviews with defense and contractor officials. The new F-35 acquisition baseline reflects positive restructuring actions taken by the Department of Defense (DOD) since 2010, including more time and funding for development and deferred procurement of more than 400 aircraft to future years.
Overall, the program progressed on several fronts during 2012 to further improve the current outlook. The program achieved 7 of 10 key management objectives and made substantial progress on one other. Objectives on aircraft deliveries and a corrective management plan were not met. The F-35 development test program substantially met expectations with some revisions to flight test plans and made considerable progress addressing key technical risks. Software management practices and some output measures improved, although deliveries to test continued to lag behind plans. Manufacturing and supply processes also improved; indicators such as factory throughput, labor efficiency, and quality measures were positive. While initial F-35 production overran target costs and delivered aircraft late, the latest data shows labor hours decreasing and deliveries accelerating. Going forward, the F-35 program still faces considerable challenges and risks. Ensuring that the F-35 is affordable and can be bought in the quantities and time required by the warfighter will be a paramount concern to the Congress, DOD, and international partners. With more austere budgets looming, F-35 acquisition funding requirements average $12.6 billion annually through 2037 (see below). Once fielded, the projected costs of sustaining the F-35 fleet have been deemed unaffordable by DOD officials; efforts to reduce these costs are underway. Software integration and test will be challenging as many complex tasks remain to enable full warfighting capability. The program is also incurring substantial costs for rework, currently projected at $1.7 billion over 10 years of production, to fix problems discovered during testing. With about two-thirds of development testing still to go, additional changes to design and manufacturing are likely. As a result, the program continues to incur financial risk from its plan to procure 289 aircraft for $57.8 billion before completing development flight testing. GAO's prior reviews of the F-35 made numerous recommendations to help reduce risk and improve outcomes. DOD has implemented those recommendations to varying degrees. |
In our past work on the GNDA, we made recommendations about the need for a strategic plan to guide the development of the GNDA. Among other things, in July 2008, we recommended that DHS develop an overall strategic plan for the GNDA that (1) clearly defines the objectives to be accomplished, (2) identifies the roles and responsibilities for meeting each objective, (3) identifies the funding necessary to achieve those objectives, and (4) employs monitoring mechanisms to determine programmatic progress and identify needed improvements. In January 2009, we also recommended that DHS develop strategies to guide the domestic aspects of the GNDA, including establishing time frames and costs for addressing previously identified gaps in the GNDA—land border areas between ports of entry, international general aviation, and small maritime vessels. DHS concurred with our 2008 recommendation to develop an overall strategic plan and did not comment on our 2009 recommendation to develop a plan for the domestic portion of the GNDA, but noted that it aligned with DNDO’s past, present, and future actions.

In December 2010, DNDO issued a strategic plan for the GNDA. The strategic plan establishes a broad vision for the GNDA, identifies cross-cutting issues, defines several objectives, and assigns mission roles and responsibilities to the various federal entities that contribute to the GNDA. For example, the Department of Energy has the lead for several aspects of enhancing international capabilities for detecting nuclear materials abroad, DHS has the lead for detecting nuclear materials as they cross the border into the United States, and the Nuclear Regulatory Commission has the lead on reporting and sharing information on lost or stolen domestic radiological material. In addition, earlier this year, DNDO released the Global Nuclear Detection Architecture Joint Annual Interagency Review 2011. This review describes the current status of the GNDA and includes information about the multiple federal programs that collectively seek to prevent nuclear terrorism in the United States.

However, neither the strategic plan nor the 2011 interagency review identifies funding needed to achieve the strategic plan’s objectives or establishes monitoring mechanisms to determine programmatic progress and identify needed improvements—key elements of a strategic plan that we previously identified in our recommendations. Furthermore, while the plan and the 2011 interagency review discuss previously identified gaps in the domestic portion of the architecture, neither discusses strategies, priorities, time frames, or costs for addressing these gaps. In our view, one of the key benefits of a strategic plan is that it is a comprehensive means of establishing priorities, and using these priorities to allocate resources so that the greatest needs are being addressed. In times of tight budgets, allocating resources to address the highest priorities becomes even more important. Accordingly, while DNDO’s new strategic plan represents an important step forward in guiding the development of the GNDA, DNDO could do more to articulate strategies, priorities, time frames, and costs in addressing gaps and further deploying the GNDA in order to protect the homeland from the consequences of nuclear terrorism. When we discussed these issues with DHS officials, they indicated that they will be producing a GNDA implementation plan later this year that will address several of these issues.
As we reported in June 2010, DHS has made significant progress in both deploying radiation detection equipment and developing procedures to scan cargo and conveyances entering the United States through fixed land and sea ports of entry for nuclear and radiological materials, deploying nearly two-thirds of the radiation portal monitors identified in its deployment plan. According to DHS officials, the department scans nearly 100 percent of the cargo and conveyances entering the United States through land borders and major seaports. However, as we reported, DHS has made less progress scanning for radiation in (1) railcars entering the United States from Canada and Mexico; (2) international air cargo; and (3) international commercial aviation aircraft, passengers, or baggage.

According to DHS officials, since November 2009, almost all non-rail land ports of entry have been equipped with one or more radiation detection portal monitors, and 100 percent of all cargo, conveyances, drivers, and passengers driving into the United States through commercial lanes at land borders are scanned for radiation, as are more than 99 percent of all personally operated vehicles (noncommercial passenger cars and light trucks), drivers, and passengers. Similarly, at major seaports, according to DHS officials, the department scans nearly all containerized cargo entering U.S. seaports for nuclear and radiological materials. DHS has deployed radiation portal monitors to major American seaports that account for the majority of cargo entering the United States. However, some smaller seaports that receive cargo may not be equipped with these portal monitors. DHS officials stated that deployment plans were in place to address all the remaining gaps in the deployment of portal monitors to seaports, but that current and future budget realities require replanning the deployment schedule.

DHS has made much less progress scanning international rail. As we reported in June 2010, there is limited systematic radiation scanning of the roughly 4,800 loaded railcars entering the United States each day from Canada and Mexico. Much of the scanning for radioactive materials that takes place at these ports of entry is conducted with portable, handheld radioactive isotope identification devices. According to DHS officials, international rail traffic represents one of the most difficult challenges for radiation detection systems due to the nature of trains and the need to develop close cooperation with officials in Mexico and Canada. In addition, DHS officials told us that rail companies resist doing things that might slow down rail traffic and typically own the land where DHS would need to establish stations for primary and secondary screening. DHS is in the early stages of developing procedures and technology to feasibly scan international rail traffic.

As we reported in 2010, DHS is in the early stages of addressing the challenges of scanning for radioactive materials presented by air cargo and commercial aviation. DHS officials are also developing plans to increase their capacity to scan for radioactive materials in international air cargo conveyed on commercial airlines. DHS officials stated that their experience in scanning air cargo at a few major international airports in the United States has helped them develop scanning procedures and inform current and future deployment strategies for both fixed and mobile radiation detection equipment.
These officials said that they believe that further operational experience and research are necessary before they can develop practical mobile scanning strategies and procedures. DHS is also developing plans to effectively scan commercial aviation aircraft, passengers, and baggage for radioactive materials.

Since 2006, we have reported that DHS faces difficulties in developing new technologies to detect nuclear and radiological materials. Specifically, we have reported on longstanding problems with DNDO’s efforts to deploy advanced spectroscopic portal (ASP) radiation detection monitors. The ASP is a more advanced and significantly more expensive type of radiation detection portal monitor intended to replace, in many locations, the polyvinyl toluene (PVT) portal monitors that Customs and Border Protection (CBP), an agency within DHS, currently uses to screen cargo at ports of entry. We have issued numerous reports regarding problems with the cost and performance of the ASPs and the lack of rigor in testing this equipment. For example, we found that tests DNDO conducted in early 2007 used biased test methods that enhanced the apparent performance of ASPs and did not use critical CBP operating procedures that are fundamental to the performance of current radiation detectors. In addition, in 2008 we estimated the lifecycle cost of each standard cargo version of the ASP (including deployment costs) to be about $822,000, compared with about $308,000 for the PVT portal monitor, and the total program cost for DNDO’s latest plan for deploying radiation portal monitors to be about $2 billion. Based in part on our work, DHS informed this Committee in February 2010, after spending over $280 million, that the department had scaled back its plans for the development and use of ASP technology.

In September 2010, we also reported that DNDO was simultaneously engaged in the research and development phase while planning for the acquisition phase of its cargo advanced automated radiography system (CAARS) to detect certain nuclear materials in vehicles and containers at CBP ports of entry. DNDO pursued the deployment of CAARS without fully understanding that it would not fit within existing inspection lanes at ports of entry and would slow down the flow of commerce through these lanes, causing significant delays. DHS had spent $113 million on the program since 2005 and cancelled the acquisition phase of the program in 2007. As we reported in September 2010, no CAARS machines had been deployed, and CAARS machines from various vendors were either disassembled or sitting idle without being tested in a port environment.

DNDO’s problems developing the ASP and CAARS technologies are examples of broader challenges DHS faces in developing and acquiring new technologies to meet homeland security needs. Earlier this month, we testified that DHS has experienced challenges managing its multibillion-dollar acquisition efforts, including implementing technologies that did not meet intended requirements and were not appropriately tested and evaluated, and has not consistently completed analyses of costs and benefits before technologies were implemented. In June 2011, DHS reported to us that it is taking steps to strengthen its investment and acquisition management processes across the department.
For example, DHS plans to establish a new model for managing departmentwide investments, establish new councils and boards to help ensure that test and evaluation methods are appropriately considered, and improve the quality and accuracy of program cost estimates. As we testified, we believe these are positive steps and, if implemented effectively, could help the department address many of its acquisition challenges. However, it is still too early to assess the impact of DHS’s efforts to address these challenges. Going forward, we believe DHS will need to demonstrate measurable, sustained progress in effectively implementing these actions.

Chairman Lungren, Ranking Member Clarke, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov or Gene Aloise at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Ned Woodward and Kevin Tarmann.

Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies, GAO-11-829T (Washington, D.C.: July 15, 2011).
DHS Science and Technology: Additional Steps Needed to Ensure Test and Evaluation Requirements Are Met, GAO-11-596 (Washington, D.C.: June 15, 2011).
Supply Chain Security: DHS Should Test and Evaluate Container Security Technologies Consistent with All Identified Operational Scenarios To Ensure the Technologies Will Function as Intended, GAO-10-887 (Washington, D.C.: Sept. 29, 2010).
Combating Nuclear Smuggling: Inadequate Communication and Oversight Hampered DHS Efforts to Develop an Advanced Radiography System to Detect Nuclear Materials, GAO-10-1041T (Washington, D.C.: Sept. 15, 2010).
Department of Homeland Security: Assessments of Selected Complex Acquisitions, GAO-10-588SP (Washington, D.C.: June 30, 2010).
Combating Nuclear Smuggling: Lessons Learned from DHS Testing of Advanced Radiation Detection Portal Monitors, GAO-09-804T (Washington, D.C.: June 25, 2009).
Combating Nuclear Smuggling: DHS Improved Testing of Advanced Radiation Detection Portal Monitors, but Preliminary Results Show Limits of the New Technology, GAO-09-655 (Washington, D.C.: May 21, 2009).
Nuclear Detection: Domestic Nuclear Detection Office Should Improve Planning to Better Address Gaps and Vulnerabilities, GAO-09-257 (Washington, D.C.: Jan. 29, 2009).
Combating Nuclear Smuggling: DHS’s Program to Procure and Deploy Advanced Radiation Detection Portal Monitors Is Likely to Exceed the Department’s Previous Cost Estimates, GAO-08-1108R (Washington, D.C.: Sept. 22, 2008).
Nuclear Detection: Preliminary Observations on the Domestic Nuclear Detection Office’s Efforts to Develop a Global Nuclear Detection Architecture, GAO-08-999T (Washington, D.C.: July 16, 2008).
Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment, GAO-07-1247T (Washington, D.C.: Sept. 18, 2007).
Customs Service: Acquisition and Deployment of Radiation Detection Equipment, GAO-03-235T (Washington, D.C.: Oct. 17, 2002).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses our past work examining the Department of Homeland Security's (DHS) progress and efforts in planning, developing, and deploying its global nuclear detection architecture (GNDA). The overall mission of the GNDA is to use an integrated system of radiation detection equipment and interdiction activities to combat nuclear smuggling in foreign countries, at the U.S. border, and inside the United States. Terrorists smuggling nuclear or radiological material into the United States could use these materials to make an improvised nuclear device or a radiological dispersal device (also called a "dirty bomb"). The detonation of a nuclear device in an urban setting could cause hundreds of thousands of deaths and devastate buildings and physical infrastructure for miles. While not as damaging, a radiological dispersal device could nonetheless cause hundreds of millions of dollars in socioeconomic costs as a large part of a city would have to be evacuated--and possibly remain inaccessible--until an extensive radiological decontamination effort was completed. Accordingly, the GNDA remains our country's principal strategy in protecting the homeland from the consequences of nuclear terrorism. The GNDA is a multi-departmental effort coordinated by DHS's Domestic Nuclear Detection Office (DNDO). DNDO is also responsible for developing, acquiring, and deploying radiation detection equipment to support the efforts of DHS and other federal agencies. Federal efforts to combat nuclear smuggling have largely focused on established ports of entry, such as seaports and land border crossings. However, DNDO has also been examining nuclear detection strategies along other potential pathways and has identified several gaps in the GNDA, including (1) land border areas between ports of entry into the United States; (2) international general aviation; and (3) small maritime craft, such as recreational boats and commercial fishing vessels. Developing strategies, technologies, and resources to address these gaps remains one of the key challenges in deploying the GNDA. Some progress has been made, but DHS and other federal agencies have yet to fully address gaps in the global nuclear detection architecture. Specifically, this testimony discusses DHS's efforts to (1) address our prior recommendations to develop a strategic plan for the GNDA, including developing strategies to prevent smuggling of nuclear or radiological materials via the critical gaps DNDO identified, (2) complete the deployment of radiation detection equipment to scan all cargo and conveyances entering the United States at ports of entry, and (3) develop new technologies to detect nuclear or radioactive materials. This testimony is based on our prior work on U.S. government efforts to detect and prevent the smuggling of nuclear and radiological materials issued from October 2002 through September 2010. We updated this information in July 2011 to reflect DHS's efforts to address our prior recommendations by meeting with DNDO officials and reviewing recent DNDO documents, such as the 2010 GNDA Strategic Plan and the 2011 GNDA Joint Annual Interagency Review. In summary, since December 2010, DNDO has issued both a strategic plan to guide the development of the GNDA and an annual report on the current status of the GNDA. 
The new strategic plan addressed some key components of what we previously recommended be included in a strategic plan, such as identifying the roles and responsibilities for meeting strategic objectives. However, neither the plan nor the annual report identifies funding needed to achieve the strategic plan's objectives or employs monitoring mechanisms to determine programmatic progress and identify needed improvements. DHS officials informed us that they will address these missing elements in an implementation plan, which they plan to issue before the end of this year. As we reported in September 2010, DHS has made progress both in deploying radiation detection equipment and in developing procedures to scan cargo entering the United States through land and sea ports of entry for nuclear and radiological materials. For example, according to DHS officials, the department scans nearly 100 percent of the cargo and conveyances entering the United States through land borders and major seaports. However, as we reported in July 2011, DHS has experienced challenges in developing new technologies to detect nuclear and radiological materials, such as developing and meeting key performance requirements. DHS has plans to enhance its development and acquisition of new technologies, although it is still too early to assess their impact on addressing the challenges we identified in our past work. |
Among its responsibilities for aviation safety, FAA issues certificates that approve the design and production of new aircraft and equipment before they are introduced into service; these certificates demonstrate that the aircraft and equipment meet FAA’s airworthiness requirements. FAA also grants approvals for such things as changes to air operations and equipment. Certificates indicate that the aircraft, equipment, and new air operators are safe for use or flight in the NAS. While industry stakeholders have expressed concerns about variation in FAA’s interpretation of standards for certification and approval decisions, stakeholders and experts that we interviewed for our 2010 report indicated that serious problems occur infrequently. In addition, in September 2011 we reported that FAA did a good job following its certification processes in assessing the composite fuselage and wings of Boeing’s 787 against its airworthiness standards. The certification process also provides an example of how FAA is attempting to use a more proactive approach in finding solutions to a potential problem. In the case of flammability regulations that govern transport type aircraft, FAA has primarily developed its regulations on a reactive basis. That is, as accidents and incidents have occurred, their causes have been investigated, and the findings used to develop regulations designed to prevent the future occurrence of similar incidents or accidents. To supplement this oversight method, FAA has proposed a new, threat-based approach for flammability regulations that will base the flammability performance for different parts of the aircraft upon realistic threats that could occur in-flight or in a post-crash environment. FAA recognizes the value of certification as a safety tool; however, the agency faces some significant challenges, including resources and maintaining up-to-date knowledge of industry changes. According to a report from the Aircraft Certification Process Review and Reform Aviation Rulemaking Committee, these certification challenges will become increasingly difficult to overcome, as industry activity is expected to continue growing and government spending for certification resources remains relatively flat. As one means of responding to its certification workload, FAA relies on designees; however, our prior work has shown that there are concerns that designee oversight is lacking, particularly with the new organizational designation authorities in which companies rather than individuals are granted designee status. There are also concerns that, when faced with certification of new aircraft or equipment, FAA staff have not been able to keep pace with industry changes and, thus, may struggle to understand the aircraft or equipment they are tasked with certificating. SMS implementation within FAA should reduce certification delays and increase available resources to facilitate the introduction of advanced technologies. In response to a provision in the 2012 FAA Reauthorization, FAA is assessing the certification process and identifying opportunities to streamline the process. As we stated above, FAA plans to continue using data reactively to understand the causes of accidents and incidents, and is implementing a proactive approach—called an SMS approach—in which it analyzes data to identify and mitigate risks before they result in accidents. FAA is also overseeing SMS implementation throughout the aviation industry. 
Safety management systems are intended to continually monitor all aspects of aviation operations and collect appropriate data to identify emerging safety problems before they result in death, injury, or significant property damage. Under SMS, which FAA began implementing in 2005, the agency will analyze the aviation safety data it collects to identify conditions that could lead to aviation accidents or incidents and to address such conditions through changes to FAA’s organization, processes, management, and culture. As we reported in September 2012, according to FAA, the overarching goal of SMS is to improve safety by helping ensure that the outcomes of any management or system activity incorporate informed, risk-based decision making. FAA’s business lines, such as the Air Traffic Organization and the Aviation Safety Organization, are currently at different stages of SMS implementation, and it is likely that full SMS implementation will take many more years. SMS relies heavily on data analysis and, while FAA has put in place various data quality controls, it continues to experience data challenges, including limitations with some of its analyses and limited or absent data in some areas. These data limitations and gaps may inhibit FAA’s ability to manage safety risks. For example, we found that some FAA data used in risk assessments may not be complete, meaningful, or available to decision makers. We have also reported that the agency currently does not have comprehensive risk-based data, sophisticated databases to perform queries and model data, methods of reporting that capture all incidents, or a level of coordination that facilitates the comparison of incidents across data systems. Furthermore, technologies aimed at improving reporting have not been fully implemented. As a result, aviation officials managing risk using SMS have limited access to robust FAA incident data. Implementing systems and processes that capture accurate and complete data is critical for FAA to determine the magnitude of safety issues, assess their potential impacts, identify their root causes, and effectively address and mitigate them. Our recent work on aviation safety and FAA oversight issues has identified a number of specific areas where FAA’s risk-based oversight could be strengthened through improved data collection and analysis, including: runway and ramp safety, airborne operational errors, general aviation, pilot training, unmanned aircraft systems, and commercial space. FAA has taken steps to address safety oversight issues in many of these areas, including making changes to or committing to make changes to its data collection practices in response to our recommendations in most of these areas. Nonetheless, sustained FAA attention will be necessary to ensure that the agency’s ability to comprehensively and accurately assess and manage risk is not impaired. Runway and ramp safety. Takeoffs, landings, and movement around the surface areas of airports (the terminal area) are critical to the safe and efficient movement of air traffic. In a June 2011 incident at John F. Kennedy International Airport in New York, for example, a jumbo jet carrying 286 passengers and crew almost collided with another jumbo jet, which reportedly missed a turn and failed to stop where it should have to avoid the occupied runway. 
Safety in the terminal area could be improved by additional information about surface incidents, which is currently limited to certain types of incidents, notably runway incursions and certain airborne incidents, but does not include runway overruns or incidents in ramp areas. Without a process to track and assess these overruns or ramp area incidents, FAA cannot assess trends in those areas and the risks posed to aircraft or passengers in the terminal area. FAA is planning to develop a program to collect and analyze data on runway overruns, something we recommended in 2011, but it will be several years before FAA has obtained sufficient information about these incidents to be able to assess risks. FAA still collects no comprehensive data on ramp area incidents, and NTSB does not routinely collect data on ramp accidents unless they result in serious injury or substantial aircraft damage. In 2011, we recommended that FAA extend its oversight to ramp safety, and FAA concurred. Airborne operational errors. Operational errors—also referred to as losses of separation—occur when two aircraft fly closer together than safety standards permit due to an air traffic controller error. We reported that FAA’s risk-based process for assessing airborne losses of separation is too narrow to account for all potential risk and that changes in how errors are reported affect FAA’s ability to identify trends. For example, FAA’s current process for analyzing losses of separation assesses only those incidents that occur between two or more radar-tracked aircraft. By excluding incidents such as those that occur between aircraft and terrain or aircraft and protected airspace, FAA is not considering the systemic risks that may be associated with many other airborne incidents. FAA has stated that it is planning to include these incidents in its risk assessment process before the end of 2013, something we recommended in 2011. In addition, FAA’s changes to reporting policies affect its ability to accurately determine safety trends. For instance, we reported in October 2011 that the rate and number of reported airborne operational errors in the terminal area increased considerably since 2007. Changes to reporting policies and processes in 2009 and 2010 make it difficult to know the extent to which the recent increases in reported operational errors are due to more accurate data, an actual increase in the occurrence of incidents, or both. General aviation. General aviation is characterized by a diverse fleet of aircraft flown for a variety of purposes. In 2010, FAA estimated that there were more than 220,000 aircraft in the active general aviation fleet, comprising more than 90 percent of the U.S. civil aircraft fleet. The number of nonfatal and fatal general aviation accidents decreased from 1999 through 2011; however, more than 200 fatal accidents occurred in each of those years. In October 2012, we reported that general aviation flight activity data limitations impede FAA’s ability to assess general aviation safety and thereby target risk mitigation efforts. For example, FAA estimates of annual general aviation flight hours may not be reliable because of methodological and conceptual limitations with the survey upon which flight activity estimates are based. These limitations include survey response rates below 50 percent. 
Without more comprehensive reporting of general aviation flight activity, such as requiring the reporting of flight hours at certain intervals, FAA lacks assurance that it is basing its policy decisions on an accurate measure of general aviation trends, and NTSB lacks assurance that its calculations of accident and fatality rates accurately represent the state of general aviation safety. Lack of comprehensive flight hour data is an issue we have also identified in other segments of the aviation industry, including helicopter emergency medical services (HEMS) and air cargo transportation. We recommended in 2007 and 2009, respectively, that FAA take action to collect comprehensive and accurate data for HEMS and air cargo operations. In 2011, we confirmed that FAA now annually surveys all helicopter operators and requests, among other things, information on the total flying hours and the percentage of hours that were flown in air ambulance operations. Our recommendations to FAA for air cargo and general aviation data remain unaddressed. Pilot training. Our analysis of FAA data found that, while FAA requires its inspectors to conduct on-site inspections of each pilot school and its pilot examiners at least once per year, the agency does not have a comprehensive system in place to adequately measure its performance in meeting its annual inspection requirements (see GAO, Initial Pilot Training: Better Management Controls Are Needed to Improve FAA Oversight, GAO-12-117 (Washington, D.C.: November 4, 2011)). Without complete data on active pilot schools and pilot examiners, it is difficult to ensure that regulatory compliance and safety standards are being met. In addition, it is unclear whether required inspections for pilot examiners were completed because FAA’s data system lacks historical information. One potential implication is the quality of training that recreational pilot candidates receive, which could contribute to the many general aviation accidents in which pilot error is cited as a contributing factor. In 2011, we recommended that FAA develop a comprehensive system to measure performance of pilot school inspections and noted that this recommendation may require modifying or improving existing data systems. In responding to our recommendation, FAA officials said they agreed that improvements in oversight data were needed and indicated that they believe efforts already in existence or under way address our recommendations. Unmanned aircraft systems (UAS). FAA and the National Aeronautics and Space Administration (NASA) are taking steps to ensure the reliability of both small and large UAS by working on certification standards specific to UAS and undertaking research and development efforts to mitigate obstacles to the safe and routine integration of UAS into the national airspace. Some of these obstacles include vulnerabilities in UAS operation that will require technical solutions. However, we found that research and development efforts to overcome these obstacles cannot be completed and validated without safety, reliability, and performance standards for UAS operations, which FAA has not developed due to data limitations. Standards for UAS operations are a key step in the process of safely integrating regular UAS operations into the national airspace. Once standards are developed, FAA has indicated that it will begin to use them in UAS regulations; until then, UAS will continue to operate as exceptions to the regulatory framework rather than being governed by it. Commercial space. 
FAA also oversees the safety of commercial space launches that can carry cargo and eventually humans into space. FAA is responsible for licensing and monitoring the safety of such launches and of spaceports (sites for launching spacecraft). However, FAA is prohibited by statute from regulating commercial space crew and passenger safety before 2015 except in response to a serious injury or fatality or an event that poses a high risk of causing a serious injury or fatality. FAA has interpreted this limited authority as allowing it to regulate crew safety in certain circumstances and has been proactive in issuing a regulation concerning emergency training for crews and passengers. However, FAA has not identified data that would allow it to monitor the safety of the developing space tourism sector and determine when to regulate human space flight. To allow the agency to be proactive about safety, rather than responding only after a fatality or serious incident occurs, we recommended in 2006 that FAA identify and continually monitor indicators of space tourism industry safety that might trigger the need to regulate crew and passenger safety before 2015 and use them to determine if the regulations should be revised. According to agency officials, FAA is working with its industry advisory group, the Commercial Space Transportation Advisory Committee, to develop guidelines for human spaceflight. Chairman Rockefeller, Ranking Member Thune, and Members of the Committee, this concludes my written statement. I would be pleased to answer any questions that you may have at this time. For further information on this statement, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or by email at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Heather MacLeod, Assistant Director; Brooke Leary; Paul Aussendorf; Russell Burnett; Vashun Cole; Laura Erion; Brandon Haller; Dave Hooper; Dan Hoy; Delwen Jones; Maureen Luna-Long; Teresa Spisak; Pam Vines; and Jessica Wintfeld. Unmanned Aircraft Systems: Continued Coordination, Operational Data, and Performance Standards Needed to Guide Research and Development. GAO-13-346T. Washington, D.C.: February 15, 2013. General Aviation Safety: Additional FAA Efforts Could Help Identify and Mitigate Safety Risks. GAO-13-36. Washington, D.C.: October 4, 2012. Unmanned Aircraft Systems: Measuring Progress and Addressing Potential Privacy Concerns Would Facilitate Integration into the National Airspace System. GAO-12-981. Washington, D.C.: September 14, 2012. Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington, D.C.: September 12, 2012. Aviation Safety: FAA Is Taking Steps to Improve Data, but Challenges for Managing Safety Risks Remain. GAO-12-660T. Washington, D.C.: April 25, 2012. Aviation Safety: FAA Has An Opportunity to Enhance Safety and Improve Oversight of Initial Pilot Training. GAO-12-537T. Washington, D.C.: March 20, 2012. Initial Pilot Training: Better Management Controls Are Needed to Improve FAA Oversight. GAO-12-117. Washington, D.C.: November 4, 2011. Aviation Safety: Enhanced Oversight and Improved Availability of Risk-Based Data Could Further Improve Safety. GAO-12-24. Washington, D.C.: October 5, 2011. Aviation Safety: Status of FAA’s Actions to Oversee the Safety of Composite Airplanes. GAO-11-849. Washington, D.C.: September 21, 2011. 
Aviation Safety: Certification and Approval Processes Are Generally Viewed as Working Well, but Better Evaluative Information Needed to Improve Efficiency. GAO-11-14. Washington, D.C.: October 7, 2010. Air Ambulance: Effects of Industry Changes on Services Are Unclear. GAO-10-907. Washington, D.C.: September 30, 2010. Aviation Safety: Improved Data Quality and Analysis Capabilities Are Needed as FAA Plans a Risk-Based Approach to Safety Oversight. GAO-10-414. Washington, D.C.: May 6, 2010. Aviation Safety: Better Data and Targeted FAA Efforts Needed to Identify and Address Safety Issues of Small Air Cargo Carriers. GAO-09-614. Washington, D.C.: June 24, 2009. Aviation Safety: Potential Strategies to Address Air Ambulance Safety Concerns. GAO-09-627T. Washington, D.C.: April 22, 2009. Aviation Safety: Improved Data Collection Needed for Effective Oversight of Air Ambulance Industry. GAO-07-353. Washington, D.C.: February 21, 2007. Commercial Space Launches: FAA Needs Continued Planning and Monitoring to Oversee the Safety of the Emerging Space Tourism Industry. GAO-07-16. Washington, D.C.: October 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Even with nearly 80,000 flights each day within the national airspace system, there has not been a fatal commercial aviation accident in more than 4 years. The U.S. airspace system is arguably one of the safest in the world, with key aviation stakeholders—the FAA, airlines, airports, aircraft manufacturers, and the National Transportation Safety Board (NTSB)—working together to ensure these results. As the federal agency that regulates the safety of civil aviation in the United States, FAA is responsible for, among other things, setting aircraft certification standards, collecting fleet and flight activity data, conducting safety oversight of pilot training and general aviation operations, and safely integrating aircraft into the national airspace. As the aviation industry evolves, FAA must remain diligent in its efforts to ensure the continued safety of aviation. In 2010, Congress passed the Airline Safety and Federal Aviation Administration Extension Act, which, in part, called for FAA to better manage safety risks. This testimony focuses on (1) FAA's aircraft certification process and (2) FAA's use of data to enhance safety and improve aviation oversight. The testimony is based on GAO's previous work and updated with industry reports and information provided by FAA officials. GAO has previously recommended that FAA address several data quality weaknesses. FAA concurred with most of these recommendations and has taken steps toward addressing some. The Federal Aviation Administration (FAA) is responsible for approving the design and airworthiness of new aircraft and equipment before they are introduced into service. FAA approves changes to aircraft and equipment based on evaluation of industry submissions against standards set forth in federal aviation regulations and related guidance documents. In September 2011, we reported that, overall, FAA did a good job following its certification processes in assessing the composite fuselage and wings of Boeing's 787 against its airworthiness standards. 
However, the approval process--referred to as certification--presents challenges for FAA in terms of resources and maintaining up-to-date knowledge of industry practices, two issues that may hinder FAA's efforts to conduct certifications in an efficient and timely manner. FAA is currently assessing its certification process and identifying opportunities to streamline it. FAA plans to continue analyzing data reactively to understand the causes of accidents and incidents, and to augment this approach through implementation of a safety management system (SMS). SMS is a proactive approach that includes continually monitoring all aspects of aviation operations and collecting and analyzing appropriate data to identify emerging safety problems before they result in death, injury, or significant property damage. FAA has put in place various quality controls for its data; however, GAO has identified a number of areas where FAA does not have comprehensive risk-based data or methods of reporting that capture all incidents. The following are among the key areas GAO identified as needing improved data collection and analysis. Runway and ramp safety. Additional information about surface incidents could help improve safety in the airport terminal area, as data collection is currently limited to certain types of incidents, notably runway incursions (which involve the incorrect presence of an aircraft, vehicle, or person on a runway) and certain airborne incidents, and does not include runway overruns (which occur when an aircraft veers off a runway) or incidents in ramp areas (which can involve aircraft and airport vehicles). Airborne operational errors. FAA's metric for airborne losses of separation--a type of operational error--is too narrow to account for all potential risk. General aviation. FAA estimates of annual flight hours for the general aviation sector, which includes all forms of aviation except commercial and military, may not be reliable. Pilot training. FAA does not have a comprehensive system in place to measure its performance in meeting its annual pilot school inspection requirements. FAA has taken steps to address safety oversight issues and data challenges in many of these areas. For example, FAA is planning to develop a program to collect and analyze data on runway overruns, but it will be several years before FAA has obtained enough information about these incidents to assess risks. Sustained attention to these data collection and analysis issues will be necessary to ensure that FAA can more comprehensively and accurately assess and manage risk. |
DHS has several components with employees who earned AUO as overtime compensation. Table 1 describes the various components with employees earning AUO at the start of fiscal year 2014. Employees who work hours in addition to their regularly scheduled workweek may be entitled to compensation for the work performed, in accordance with applicable statutory and regulatory requirements. AUO is intended to compensate eligible federal employees in positions that require substantial amounts of irregular, unscheduled overtime that cannot be controlled through administrative means—such as by hiring additional personnel, rescheduling the hours of duty, or granting compensatory time-off duty to offset overtime hours required. To earn AUO, the employee must be in a position that requires the employee to be generally responsible for recognizing, without supervision, circumstances that require him or her to remain on duty. The regulations include additional language addressing the circumstances under which an employee may be deemed eligible to earn AUO. Among other factors, circumstances must require that the employee remain on duty not merely because it is desirable, but for compelling reasons inherently related to continuance of his or her duties such that failure to carry on would constitute negligence. Such compelling reasons include, for example, circumstances in which an employee must continue working because shift relief fails to report as scheduled. In addition, circumstances under which an employee may earn AUO must be in continuation of the full scheduled workday or the resumption of a full workday in accordance with a prearranged plan or an awaited event, and require that the employee have no choice as to when or where he or she may perform the work. For example, when an employee has an option to take work home, may complete it at the office, or has latitude in working hours to begin work later in the morning in order to work later in the evening, such additional hours should not be compensated using AUO. See 5 C.F.R. § 550.153(c) (providing also that responsibility for an employee remaining on duty when required by circumstances must be a definite, official, and special requirement of his or her position). There must also be an expectation that irregular or occasional overtime work will continue with a duration and frequency sufficient to meet the minimum requirements for eligibility. Agencies are to determine the appropriate AUO pay rate as a percentage of an employee’s annual rate of locality-adjusted base pay, and it is paid as part of the employee’s annual salary rather than on an hourly basis as with other types of overtime compensation. For example, if an employee in an AUO-eligible position worked 8 hours of irregular or occasional overtime per week on average, that employee is to receive AUO premium pay at a rate of 20 percent of his or her base pay, in addition to his or her regular pay. If an employee in this scenario has a base rate of pay of $50,000 per year, his or her AUO pay would equal an additional $10,000 per year or about an additional $385 per pay period. Table 2 shows the appropriate rates of pay established in regulation and the corresponding requisite number of eligible overtime hours worked. See generally 5 U.S.C. § 5542 (FEPA) and 29 U.S.C. § 207 (FLSA). Overtime under FEPA may also be referred to as Title 5 or 1945 Act overtime. Certain employees, such as persons employed as bona fide executive, administrative, professional, and outside sales employees, are considered FLSA exempt and are not subject to the overtime compensation and other provisions of the FLSA. See 29 U.S.C. § 213. 
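The calculation just described can be illustrated with a short, hypothetical example. The following minimal Python sketch is not drawn from any DHS or OPM system; the rate thresholds are assumptions based on the schedule summarized in table 2 (10, 15, 20, or 25 percent of base pay, per 5 C.F.R. § 550.154), and the function names are illustrative only.

```python
# Minimal sketch of the AUO premium pay calculation described above.
# Assumption: rate thresholds follow the schedule in 5 C.F.R. § 550.154
# (10, 15, 20, or 25 percent of base pay, depending on the average weekly
# hours of irregular or occasional overtime).

def auo_rate(avg_weekly_overtime_hours: float) -> float:
    """Return the AUO rate as a fraction of annual base pay."""
    if avg_weekly_overtime_hours > 9:
        return 0.25
    if avg_weekly_overtime_hours > 7:
        return 0.20
    if avg_weekly_overtime_hours > 5:
        return 0.15
    if avg_weekly_overtime_hours > 3:
        return 0.10
    return 0.0  # too few irregular overtime hours to qualify for AUO

def annual_auo_pay(base_salary: float, avg_weekly_overtime_hours: float) -> float:
    """AUO is paid as a percentage of (locality-adjusted) base pay, not hourly."""
    return base_salary * auo_rate(avg_weekly_overtime_hours)

# Example from the text: 8 hours of irregular overtime per week on average
# and a $50,000 base salary yield $10,000 per year, or roughly $385 per
# biweekly pay period (26 pay periods per year).
annual = annual_auo_pay(50_000, 8)   # 10000.0
print(f"${annual:,.0f} per year, about ${annual / 26:,.0f} per pay period")
```

Because the premium is a flat percentage of salary rather than an hourly payment, the same average level of irregular overtime yields a larger dollar amount as an employee's base pay rises, which is relevant to the expenditure trends discussed below.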
Law enforcement availability pay (LEAP) compensates criminal investigators who are expected to work an annual average of 2 hours or more of unscheduled duty per regular workday. As a result, LEAP superseded AUO for criminal investigators or other approved law enforcement officers as the appropriate means for earning premium pay—that is, employees earning LEAP are not eligible to earn AUO. Various entities have reviewed the use of AUO by DHS components, including reviews of current DHS components as they existed prior to the formation of DHS in 2003. For example, pursuant to a provision of the Omnibus Consolidated Appropriations Act, 1997, the inspectors general of each federal department or agency that used AUO were required to conduct audits and report their findings on the use of AUO. The reviews conducted in satisfaction of this requirement encompassed legacy entities, such as the U.S. Border Patrol, that are now part of DHS. The Department of Justice (DOJ) OIG reported in 1997 that 6,900 mostly Border Patrol employees within the former Immigration and Naturalization Service (INS) received AUO in fiscal year 1996. The OIG also reported that its review of the records for a sample of 202 employees did not substantiate that the overtime worked was uncontrollable for 95 percent of the sample. More recently, the DHS OIG and DHS components’ offices of internal affairs, in response to allegations referred by the Office of Special Counsel (OSC), have conducted reviews and investigations of AUO at DHS. See figure 2 for a timeline of selected AUO reviews and investigations related to AUO use at DHS. DHS components paid an average of $434 million in AUO per year from fiscal years 2008 through 2013—the last full year of available data. Across DHS, annual AUO expenditures increased from $307 million in fiscal year 2008 to $512 million in fiscal year 2013, or an increase of approximately $205 million. According to our analysis, this increase in expenditures occurred because of an approximately 22 percent increase in AUO earners from fiscal years 2008 through 2010 and an increase in AUO payments per employee because of average increases in salary. Since fiscal year 2011, the number of employees earning AUO increased by less than 2 percent, while AUO expenditures increased by about 8 percent. The majority of these AUO-earning employees are nonsupervisory Border Patrol agents within CBP, and are employed along the southwest border of the United States. DHS components spent increasing amounts on AUO each year from fiscal year 2008 through fiscal year 2013. Specifically, AUO payments increased on average 11 percent each year, from approximately $307 million in fiscal year 2008 to approximately $512 million in fiscal year 2013—the last full fiscal year of AUO payments data—as shown in figure 3. In the first half of fiscal year 2014, AUO payments totaled $255 million and were on pace to equal fiscal year 2013 payments. According to our analysis, in fiscal years 2008 through 2010, an increase in the number of AUO earners drove the increase of AUO payments more than the spending per employee did. The number of AUO-earning employees within DHS rose from approximately 23,000 in fiscal year 2008 to approximately 28,000 in 2010, an increase of about 22 percent. From fiscal years 2011 through 2013, the number of AUO-earning employees within DHS rose at a slower pace, increasing less than 2 percent during this time period. As of March 2014, the number of AUO-earning employees within DHS was near the fiscal year 2013 total of about 29,000. 
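To show how the two drivers discussed here (the number of AUO earners and the average payment per earner) combine to produce the overall expenditure trend, the sketch below multiplies rounded figures from this report; the per-earner averages of roughly $13,000 in fiscal year 2008 and $17,000 in fiscal year 2013 are discussed in the next paragraph. This is an illustrative calculation only, not GAO's methodology, and the rounded inputs only approximate the reported totals of $307 million and $512 million.

```python
# Illustrative decomposition: total AUO spending is approximately the number
# of earners times the average payment per earner. Figures are rounded from
# the report, so the products only approximate the reported totals ($307M in
# fiscal year 2008 and $512M in fiscal year 2013).

fy2008 = {"earners": 23_000, "avg_payment": 13_000}
fy2013 = {"earners": 29_000, "avg_payment": 17_000}

def total_spending(year: dict) -> float:
    return year["earners"] * year["avg_payment"]

for label, year in (("FY2008", fy2008), ("FY2013", fy2013)):
    print(f"{label}: about ${total_spending(year) / 1e6:,.0f} million")

earner_growth = fy2013["earners"] / fy2008["earners"] - 1            # ~26% with rounded inputs
payment_growth = fy2013["avg_payment"] / fy2008["avg_payment"] - 1   # ~31%, matching the report
print(f"Growth in earners: {earner_growth:.0%}; growth in average payment: {payment_growth:.0%}")
```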
Our analysis indicates that since 2011, the rate of increase in the amount of AUO paid per individual outpaced the rate of growth of the total number of AUO earners and contributed to an overall increase in AUO expenditures of 8 percent. Overall, from fiscal year 2008 through fiscal year 2013, the average annual AUO payment per employee increased by about 31 percent, or from about $13,000 in fiscal year 2008 to about $17,000 in fiscal year 2013, as shown in figure 4. CBP officials stated that this increase in the amount of AUO paid per individual was a result of an increase in average annual salary, since AUO is paid as a percentage of an employee’s base salary, and not necessarily because of increases in the AUO rates of pay authorized. CBP accounts for the majority of AUO expenditures within DHS, representing nearly 77 percent ($394 million of $512 million) of all AUO paid by DHS in fiscal year 2013. This is because CBP also has approximately 77 percent of the AUO-earning employees within DHS—primarily Border Patrol agents, who numbered approximately 21,500 in fiscal year 2013. CBP spent on average $327 million annually on AUO from fiscal years 2008 through 2013. CBP accounted for most of the department’s increase in AUO expenditures as well, with its AUO expenditures growing 75 percent from fiscal years 2008 through 2013. According to CBP officials, a CBP hiring initiative from 2005 through 2010, and the new hires’ subsequent career ladder progression to higher-compensated grades, drove the increased AUO expenditures, along with an increase of one grade for all Border Patrol agents compensated at General Schedule (GS)-11 and GS-12 levels in 2010. ICE accounted for the next largest percentage of AUO expenditures and AUO earners, constituting 21 percent ($105 million of $512 million) of all AUO expenditures in DHS in fiscal year 2013 and approximately 21 percent of AUO-earning employees within DHS that year. ICE increased its expenditures by a total of about 40 percent from fiscal years 2008 through 2013, spending approximately $96 million on AUO on average each year during this time period. USSS accounted for about 1 percent of AUO expenditures within DHS in fiscal year 2013 and spent an average of $6.8 million in AUO each year from fiscal years 2008 through 2013. USSS AUO expenditures increased by about 6 percent from fiscal years 2008 through 2013. NPPD accounted for about 1 percent of AUO expenditures in fiscal year 2013, and its AUO expenditures increased by about 27 percent from fiscal years 2008 through 2013. NPPD spent an annual average of about $4.8 million during this time frame. USCIS and OCSO each accounted for less than 1 percent of total AUO expenditures within DHS in fiscal year 2013. Each spent on average less than $1 million on AUO each year from fiscal years 2008 through 2013. See table 3 for additional information on DHS expenditures on AUO since fiscal year 2008. Most AUO earners within DHS are CBP and ICE employees with responsibilities that involve the interdiction or removal of illegal entrants or deportable aliens. Given that these activities are more prevalent along the southwest border of the United States than elsewhere, most of the employees earning AUO are located at duty stations in this region. In fiscal year 2013, most AUO earners were located in Texas, followed by Arizona, California, and New Mexico. 
In total, at least 20,627, or 70 percent, of the approximately 29,500 AUO earners in DHS worked in these states in fiscal year 2013. Four position titles accounted for about 93 percent of all AUO earners across DHS in fiscal year 2013: Border Patrol agent: Duties for this position include preventing the entry of terrorists, and the smuggling and unlawful entry of undocumented aliens, among other things. In fiscal year 2013, about 94 percent of AUO earners within CBP were Border Patrol agents. Immigration enforcement agent: Duties for this position include investigation, identification, and arrest of aliens in the interior of the United States, among other things. In fiscal year 2013, about 48 percent of AUO earners within ICE were immigration enforcement agents. Deportation officer: Duties for this position primarily include conducting legal research to support decisions on deportation cases, and assisting attorneys in representing the government in court. In fiscal year 2013, about 33 percent of AUO earners within ICE were deportation officers. Detention and deportation officer: Duties for this position primarily include directing, coordinating, and executing detention and removal activities. In fiscal year 2013, about 17 percent of AUO earners within ICE were detention and deportation officers. See figure 5 for DHS AUO earners by position title in fiscal year 2013. Outside of these four primary AUO-earning positions, we identified 169 other unique position titles under which employees within DHS have received AUO pay from fiscal years 2008 through 2013. At CBP, other AUO-earning positions included information technology specialists, paralegal specialists, management and program analysts, account managers, congressional liaison officers, human resources assistants, and attorney-advisors. At ICE, other AUO-earning positions included technical enforcement officers, criminal investigators, intelligence officers, and cooks. At USSS, the position most commonly compensated using AUO was physical security specialist. Other AUO-earning positions included information technology specialists, protective support technicians, and photographers. At NPPD, the positions most commonly compensated using AUO were chemical security inspectors and protective security advisors. At USCIS, the only position compensated using AUO was investigative specialist. At OCSO, the only position compensated using AUO was physical security specialist. Most AUO earners within CBP, ICE, USSS, USCIS, and OCSO are in nonsupervisory positions—compensated at the GS-13 level or below—while the majority of NPPD AUO earners are above the GS-13 level. According to ICE officials, payments of AUO to cooks and criminal investigators occurred as a result of position changes. In the case of cooks, individuals moved to AUO-earning positions from the cook position, but their position information was not immediately updated in the pay system. In the case of criminal investigators, individuals moved from AUO-earning positions to criminal investigator positions, and ICE continued to compensate them with AUO instead of the proper overtime mechanism, LEAP. According to ICE officials, ICE has taken corrective measures in these instances. NPPD's AUO-earning positions, such as chemical security inspectors and protective security advisors, are generally of an operational and nonsupervisory nature. 
In fiscal year 2013, employees within DHS components at GS-14 or -15 levels earned a total of about $41 million in AUO, or 8 percent of total DHS AUO expenditures, while accounting for 6 percent of all DHS AUO earners. The fact that a position is supervisory, or that an individual in a position functions in a supervisory capacity, does not itself render the individual ineligible to earn AUO. Rather, it is the specific functions and responsibilities of a position that determine eligibility. However, employees working in supervisory capacities generally are less likely to be performing the type of work that is eligible for AUO, such as the independent investigative functions and responsibilities described in federal regulations and guidance as appropriate for AUO compensation. DHS components have not implemented AUO appropriately. Specifically, some DHS component-level policies are not consistent with certain provisions of federal regulations or guidance. In addition, components have not regularly followed their AUO policies and procedures, contributing to significant and widespread AUO administration and oversight deficiencies across the department. DHS and component officials have acknowledged significant management problems with the administration and oversight of AUO and have recently started taking actions to address these problems, including suspending and deauthorizing AUO. Further, in May 2014, the DHS Deputy Secretary issued a memorandum on AUO administration directing all components using AUO to submit corrective action plans detailing efforts to strengthen AUO administration and oversight. Although these component plans could facilitate better management of AUO, DHS does not currently have sufficient oversight mechanisms in place to help ensure components implement their AUO policies appropriately and in accordance with law and regulation. In addition, DHS does not have a plan to report to Congress on its progress to remediate AUO implementation deficiencies. At the time of our review, DHS did not have a department-wide AUO policy or directive. Instead, DHS components have administered AUO under more than 20 policies and procedures. CBP has about 10 directives, memorandums, and standard operating procedures, 6 of which predate the creation of DHS in 2003; ICE has at least 5 instructions and memorandums, 3 of which predate the creation of DHS; NPPD consolidated its AUO policies and procedures, which included adapted ICE policies, into 1 instruction in September 2012; USCIS has 1 directive and 1 standard operating procedure; USSS has 3 directives and 1 memorandum; and OCSO has 1 policy, which is described in a training document. According to NPPD officials, NPPD consolidated its AUO policies and procedures in response to an ongoing OIG investigation, among other things. OIG reviewed NPPD’s Infrastructure Security Compliance Division (ISCD) from 2007 through 2012 and found, among other things, that numerous NPPD AUO guidance documents contributed to inconsistent AUO use within NPPD. According to CBP officials, CBP's multiple AUO policies prescribe different forms for documenting AUO hours and activities. These officials also stated that variations in AUO policies within CBP have at times resulted in supervisor and employee confusion regarding which policy to follow, which could contribute to inconsistent oversight of AUO. Recognizing the challenges with having multiple AUO policies and procedures across the component, as well as in response to OSC-referred investigations on CBP’s use of AUO, CBP officials stated that in 2008 they began developing a component-wide AUO policy. 
CBP halted this policy development in 2009 while considering possible legislative changes that would replace AUO with a new type of overtime compensation structure for Border Patrol agents, the vast majority of CBP’s AUO-earning workforce. According to CBP officials, this initial effort to reform Border Patrol pay was never officially proposed in Congress, and in response to other AUO misuse allegations in 2011, CBP officials stated that CBP recommenced efforts to develop and implement a component-wide AUO policy. As of October 2014, CBP had not issued its component-wide AUO policy. More recently, legislation passed by both houses of Congress would replace AUO as a means for compensating Border Patrol agents with a system by which agents elect a specified overtime compensation level, among other things. While testifying on AUO use in January 2014, the Deputy Chief of the U.S. Border Patrol stated that AUO no longer meets the current needs of the Border Patrol, in part because AUO does not provide flexibility or ensure continuous agent coverage of ports of entry and borders. For example, an agent’s patrol area that is over an hour away from the Border Patrol station extends the agent’s workday by the amount of time it takes for the relief agent to reach the patrol area and the outgoing agent to reach the station. DHS components’ AUO policies and procedures reflect inconsistent approaches to implementing and applying federal AUO regulations and guidance. For example, federal regulations require, and component policies and procedures include, provisions regarding employee AUO eligibility and agency reviews of AUO rates at appropriate intervals. Component AUO policies or procedures also include most provisions described in OPM’s 1997 guidance on AUO, which OPM encourages agencies to consider implementing but which are not specifically required by federal regulation. For example, as shown in figure 6, OPM guidance urges agencies to conduct an independent review of AUO administration—to include assessing rates reviews, among other things—at least once every 5 years. CBP, ICE, NPPD, and USCIS policies or procedures include these provisions. However, OCSO and USSS policies do not include provisions for independent AUO administration reviews. According to USSS officials, they instead rely on alternative oversight mechanisms that assess general management controls. Similarly, OCSO officials stated that OCSO’s AUO policy does not include such administration reviews because the office based the policy on other components’ AUO policies, such as legacy ICE policies. Although the alternative oversight mechanisms assess several component management practices, they are not designed to independently assess the component’s administration of AUO. See 5 C.F.R. §§ 550.153, 550.161. Although DHS component AUO policies and procedures include a variety of oversight mechanisms, components have not regularly followed these policies and, as a result, have not provided adequate oversight of AUO. In assessing overall implementation of AUO across the department, DHS’s Deputy Secretary stated in a May 2014 memorandum that the administration of AUO is in a poor state in many parts of the department. Specifically, not all DHS components have regularly conducted AUO authorization and rates reviews, consistently documented AUO hours and activities or provided supervisory review, or routinely performed AUO administration reviews, in accordance with their AUO policies and procedures. 
Not all components have regularly conducted AUO authorization and rates reviews: DHS’s May 2014 AUO administration memorandum stated that most components have not regularly conducted AUO authorization reviews in compliance with OPM guidance. In addition, OIG and CBP reviews found that certain components made AUO payments to employees for work that did not qualify for AUO, in part because they did not conduct authorization reviews. For example: A March 2013 OIG inspection found that because NPPD did not conduct AUO authorization reviews, NPPD paid chemical security inspectors about $2 million in AUO in fiscal year 2012 for work that did not qualify for AUO. At the time of the review, NPPD disagreed with the OIG’s recommendation to deauthorize AUO for the chemical security inspectors, but in June 2014, NPPD determined that it would suspend AUO for its employees, including chemical security inspectors, as of September 2014. A September 2013 investigation conducted by CBP’s Office of Internal Affairs on referral from OSC found that one CBP office improperly paid AUO to seven employees and three supervisors because office managers did not conduct AUO authorization reviews for these employees. The employees and supervisors—although authorized to receive AUO pay in their regular positions—were not eligible to receive AUO pay during the time frame under investigation because the office they were temporarily detailed to involved work that was controllable and administrative in nature. DHS component officials stated that they did not regularly conduct authorization reviews because they thought that reviewing AUO rates to determine the amount of pay provided sufficient oversight. For example, the results of a rates review could find that an employee’s AUO rate should be decreased to 0 percent, based on the hours of AUO the employee worked in the prior review period. However, rates reviews do not provide components with sufficient information to determine whether those employees who worked enough hours of AUO to qualify for AUO should continue to receive AUO pay. Authorization reviews are to examine whether each employee who has received AUO pay in the past should continue to receive AUO. For example, authorization reviews should consider past and future expected work duties to ensure that they are administratively uncontrollable, among other things. Because DHS components have not regularly conducted AUO authorization reviews, components are limited in their ability to ensure that employees are in positions—and performing work—eligible for AUO. In addition, conducting regular authorization reviews could help agencies identify AUO-earning employees temporarily detailed to positions not authorized for AUO pay and who should have their AUO pay discontinued. DHS has begun to take action to ensure that components regularly conduct authorization reviews, which we discuss later in the report. According to OPR, its review did not explore effects of incomplete AUO pay rates reviews, including improper AUO payments. In addition, components have not always adjusted AUO rates in response to the rates reviews they conducted. For example, a June 2014 ICE rates review found that 629 employees should have their AUO rates reduced on the basis of the AUO hours they reported in the preceding weeks. The rates review also found that an additional 64 employees should be deauthorized from AUO, because they did not report the required minimum number of AUO hours. According to the rates review report, ICE field offices were to adjust these AUO rates by July 13, 2014, or within 4 pay periods of the review. However, we found that ICE did not do so for 22 percent (140 of 629) of the employees identified for a rate decrease across more than 20 ICE field offices. 
In addition, ICE did not adjust the AUO rate to 0 percent for 55 percent (35 of 64) of the employees identified for deauthorization. We determined that as a result of not adjusting AUO rates, ICE paid about $76,000 to these employees using AUO as opposed to compensating them with more appropriate forms of overtime, as applicable. According to ICE officials, field office management—which is responsible for implementing AUO rates adjustments—has not regularly lowered AUO rates in response to these rates reviews, and ICE human capital officials did not have a mechanism in place to track the disposition of rates reviews. As a result, ICE did not have reasonable assurance that it was paying the appropriate AUO rate to applicable employees. In response to the findings of its June 2014 review, in July 2014, the ICE Office of Human Capital began disseminating biweekly reports to ICE headquarters and field office management detailing required AUO rates changes and deauthorizations to provide greater oversight and to track these adjustments. In addition, ICE has plans to develop standard operating procedures specific to the AUO rates change process to assist in clarifying and codifying expectations and processes, which could provide ICE more assurance that it is paying appropriate AUO rates to employees. Components have not consistently documented and reviewed AUO hours and activities: Insufficient documentation of AUO hours and activities has been a long-standing problem at DHS, according to findings from numerous reviews and investigations over the past 7 years. For example, a May 2008 investigation conducted by CBP’s Office of Internal Affairs on referral from OSC found that employees at two CBP locations did not document AUO activities in accordance with CBP policies, in part because employees did not understand or were not aware of the documentation requirement. Investigations and reviews conducted by ICE, CBP, and USCIS also found that employees have not consistently provided sufficient information in AUO documentation forms. For example, employees recorded justifications such as “writing memos,” “file review,” or “supervisory duties” without further explanation. Similarly, we reviewed examples of NPPD activity documentation from December 2013 and found AUO activity descriptions, such as “team meeting” or “report writing,” that did not provide additional information to determine if the activities were administratively uncontrollable. Furthermore, according to DHS’s May 2014 AUO administration memorandum, many supervisors and managers do not consistently review AUO hours claimed to ensure that (1) the work performed was necessary overtime, (2) the amount of time claimed was appropriate, and (3) the duties were administratively uncontrollable. For example, a January 2014 ICE investigation found that some supervisors conducted inconsistent reviews of AUO documentation because of minimal AUO guidance from ERO headquarters and the field office investigated. The investigation report noted that one supervisor may ask for additional clarification for AUO documentation that stated “cleaned up paperwork,” while another supervisor may consider it a sufficient justification of AUO duties. Because many supervisors do not consistently review AUO hours, AUO rates generally remained steady, often at the maximum level, for much of the department’s workforce authorized for AUO, according to the DHS May 2014 AUO administration memorandum. 
Our review of components’ AUO payments data found that, in fiscal year 2013, 88 percent (5,314 of 6,052) of ICE employees earning AUO and 90 percent (19,776 of 21,961) of CBP employees earning AUO received the maximum AUO rate of 25 percent. The DHS memorandum further stated that a particular challenge is that supervisors may routinely approve 2 hours of AUO for the same employees performing the same tasks at the same time on the same days of the week, and that work that is steady and consistent is not uncontrollable. According to testimony from OSC’s Special Counsel at a January 2014 congressional hearing, using AUO routinely, and every day, is an entrenched part of the culture at DHS components. In addition, officials from the department’s Office of the Chief Human Capital Officer (OCHCO) stated that certain employees were accustomed to using AUO without much oversight, and that this use of AUO has become part of the department’s culture. For example, a February 2013 investigation conducted by CBP’s Office of Internal Affairs, on referral by OSC, found that Border Patrol agents assigned to the Commissioner’s Situation Room regularly remained at the duty station 2 hours beyond the end of regularly scheduled shifts to transition between scheduled shifts. According to DHS’s May 2014 memorandum on AUO administration, without sufficient documentation, managers cannot adequately evaluate hours claimed or differentiate between uncontrollable and scheduled overtime. In addition, clear and complete AUO activity documentation and review of that documentation permit components to actively manage overtime expenditures, such as by varying schedules or staggering shifts to minimize overtime costs. The memorandum further states that when supervisors approve claimed AUO hours without appropriately reviewing activity documentation, employees receive inflated rates of AUO pay based on the approval of work that should have been scheduled in advance or completed the next working day. DHS has begun to take action to ensure that components consistently document and review AUO activities and hours, which we discuss later in the report. Components have not routinely conducted AUO administration reviews: Most DHS components have not routinely conducted independent reviews of AUO administration. Federal guidance suggests that these independent reviews are to include assessments of authorization and rates reviews and of AUO activity documentation, among other things. As previously discussed, OCSO’s and USSS’s AUO policies do not include a mechanism to independently review AUO administration, and accordingly, these reviews have not been conducted. ICE’s AUO policies did not include such a mechanism until ICE updated its policy in July 2014 to include independent AUO administration reviews. USCIS policy calls for annual AUO administration reviews; however, USCIS conducted them only twice from 2008 to 2013. According to USCIS officials, USCIS did not conduct annual reviews because its response to the 2008 review recommended enhanced AUO administration controls, and the agency was implementing these controls before assessing AUO again in 2013. CBP and NPPD have conducted AUO administration reviews at least once every 5 years in accordance with component policies and consistent with OPM guidance. In particular, CBP’s Office of Internal Affairs inspects 30 to 50 offices’ management practices annually, which includes a review of AUO administration focused on time and attendance. 
Since fiscal year 2011, the Office of Internal Affairs has identified 31 instances of noncompliance with component AUO policies, and CBP offices have implemented more than 80 percent of related recommendations. According to NPPD officials, NPPD conducted a compliance review of 2013 AUO administration, in accordance with its 2012 AUO policy. NPPD found that, among other things, most of the duties NPPD employees claimed for AUO did not meet the criteria for AUO. DHS components that have routinely conducted AUO administration reviews are better positioned to identify AUO deficiencies, such as supervisors inconsistently reviewing AUO activity documentation, and to provide recommendations to improve these deficiencies. DHS has begun to take action to ensure that components routinely conduct AUO administration reviews, which we discuss later in the report. DHS has taken actions to address long-standing administration and oversight deficiencies related to the use of AUO within the department, but has limited plans to monitor component progress going forward. DHS issued two memorandums, one calling for components to suspend AUO for certain employees and one requiring components using AUO to develop and submit corrective action plans to improve AUO administration. First, on the basis, in part, of interim findings from a DHS Office of the General Counsel review of AUO practices initiated in October 2013, the DHS Secretary issued a January 2014 memorandum. This memorandum required DHS components to suspend AUO for those employees (1) engaged as full-time training instructors, (2) working in component headquarters’ offices, or (3) whom prior internal investigations had found to have been inappropriately provided AUO pay. In addition, the memorandum clarified that employees working in active operational capacities, and whose duties meet the requirements for AUO, may continue to earn AUO pay. In response, CBP, ICE, and NPPD subsequently deauthorized AUO for over 700 employees. According to USSS officials, USSS did not have any employees earning AUO who were training instructors or in positions ineligible for AUO, and its employees based in headquarters were used in an operational capacity. Officials from DHS’s OCHCO stated that components were to manage the January 2014 memorandum’s implementation, because officials outside of the component could not judge who was in a headquarters position. Second, as discussed above, the DHS Deputy Secretary’s May 2014 memorandum on AUO administration required components using AUO to develop and submit corrective action plans detailing efforts to strengthen AUO administration and oversight. The component plans were due to DHS’s Office of the Deputy Secretary and Office of General Counsel and, according to DHS officials, were subsequently shared with OCHCO. According to the May 2014 memorandum, components that had suspended AUO were not required to submit plans, but must do so prior to any decision to reinstate the use of AUO in the future. The May 2014 memorandum indicated that DHS and its components were not in full compliance with the rules governing AUO. The memorandum also noted that components could decide, after weighing the costs and benefits of appropriate AUO management, to eliminate the use of AUO in favor of other forms of overtime compensation. 
According to the memorandum, to improve components' administration and oversight of AUO, the corrective action plans are to, among other things, update any existing policies and practices that exclude certain days from AUO calculation review periods; ensure that mechanisms are in place to continually evaluate position and employee AUO authorizations, including those for employees assigned to temporary or long-term details; improve existing documentation of AUO duties and hours with regard to completeness and clarity; and strengthen supervisor reviews of AUO hours claimed, particularly for those supervisors who routinely approve AUO for the same employees performing the same tasks at the same time on the same days of the week. According to a senior OCHCO official, CBP, ICE, and USSS submitted draft AUO corrective action plans to the Deputy Secretary on or about June 23, 2014. In lieu of a corrective action plan, NPPD submitted a memorandum detailing its intention to suspend AUO for its employees, which it implemented in September 2014. As of October 2014, the plans were not final because, according to the senior OCHCO official, OCHCO and the Office of the General Counsel were still providing feedback to components on the draft AUO corrective action plans. The official added that components are to finalize these plans after DHS implements its department-wide AUO directive so that the plans may incorporate requirements from the directive, as appropriate. DHS components have already started taking some actions to strengthen AUO administration and oversight. For example: In April 2013, CBP began a review of all 187 positions authorized for AUO, and completed this review in June 2014. CBP deauthorized AUO for more than 1,900 employees in 139 positions on September 7, 2014. The authorization review found that 48 positions were still eligible for AUO based on the nature of work duties, and proposed next steps such as revising current position descriptions, issuing new position descriptions detailing AUO-qualifying duties performed by employees in those positions, and pursuing the recovery of instances of improper AUO payments. In addition, CBP officials responsible for human resources reported that they were in the process of updating AUO employee documentation forms from requiring one supervisory signature to requiring supervisors to sign each line describing AUO activities. Officials stated that this change could help supervisors better ensure that activities are appropriate for AUO, as opposed to a different type of overtime. According to ICE human capital officials, ICE also has actions under way to strengthen its AUO administration and oversight in addition to actions taken in response to the June 2013 OPR inspection of AUO misuse. These actions include, among other things, conducting a comprehensive review of position descriptions to determine eligibility for AUO authorization, which began in April 2014. As of December 2014, ICE reported that the review had not yet been finalized. Further, in July 2014, ICE issued a memorandum to all ICE supervisors regarding their responsibility to review the accuracy of subordinates' time and attendance records—specifically with regard to AUO—and issued a Premium Pay Guide, which summarizes relevant statutory and regulatory requirements, requires independent AUO reviews and audits every 5 years, and provides examples of AUO-eligible duties, among other things.
DHS officials reported that USSS has issued a series of messages to employees and supervisors regarding AUO eligibility, among other things. In addition to calling for corrective action plans, DHS's May 2014 memorandum directed OCHCO to develop, no later than 60 days from the date of the memorandum, a department-wide AUO directive applicable to all DHS components using AUO. A senior OCHCO official reported that OCHCO submitted the directive to components for comment within this time frame, and that, as of October 2014, officials were in the process of reviewing component comments. According to the May 2014 memorandum, the department-wide AUO directive is to include specific requirements to strengthen AUO administration and oversight, including charging OCHCO with ensuring consistent application of AUO policies and procedures across components. Specifically, the directive is to, among other things, prohibit the practice of excluding certain days, and provide guidance on accounting for extended employee absences from the performance of normal duties; require component-wide reviews of all AUO-certified positions every 3 years; establish roles and responsibilities for identifying, documenting, and, if necessary, temporarily discontinuing AUO pay for employees temporarily reassigned to other offices or duties; mandate that all time sheets used to record AUO hours require detailed descriptions of the work performed sufficient to permit supervisors to determine whether and how the work is administratively uncontrollable; and require all components using AUO to arrange for an annual, independent, third-party audit and to report the results of each audit to OCHCO. Further, the directive is to provide central, department-wide oversight of AUO use, by charging OCHCO and the Office of the General Counsel with monitoring components' progress in remediating AUO deficiencies, among other things. Once DHS issues the directive, components are to be able to finalize their corrective action plans and move forward with implementing any additional actions to improve AUO administration and oversight. The May 2014 memorandum also states that after DHS issues the AUO directive, components may—with concurrence of OCHCO to ensure consistency with the department-wide directive and the Office of the General Counsel for legality—issue component- or office-specific AUO guidance. Standards for Internal Control in the Federal Government call for monitoring that assesses the quality of performance over time (GAO/AIMD-00-21.3.1; Washington, D.C.: Nov. 1, 1999). DHS also plans to monitor components' implementation of corrective actions through its ongoing human resources office assessments, which are to confirm that components are implementing AUO oversight mechanisms, such as rates reviews. However, these human resources assessments are broad and cover multiple aspects of a component's human resource functions, and occur relatively infrequently. Given DHS components' long-standing and pervasive problems with AUO administration and oversight, including AUO reviews as part of general human resources assessments provides the department with limited assurance that it can effectively monitor and evaluate components' progress in addressing AUO deficiencies on a sustained basis. Developing and executing a department-wide oversight mechanism to ensure components implement AUO appropriately on a sustained basis, and in accordance with law and regulation, could better position DHS to monitor components' progress remediating AUO deficiencies.
Although the May 2014 memorandum states that the department-wide directive will require components to arrange for annual third-party audits of AUO and report the results to OCHCO, DHS does not plan to report these results or progress with remediating AUO deficiencies to Congress. A senior OCHCO official stated that the department does not have plans to report oversight results to Congress because DHS management has demonstrated its commitment to improving AUO administration and oversight through its plans to issue a department-wide AUO directive. However, given the long-standing and extensive nature of AUO administration and oversight deficiencies across DHS components, and AUO payments in excess of $500 million in fiscal year 2013 to compensate employees for overtime work performed, additional oversight by Congress would help hold DHS accountable for ensuring that its planned actions resolve the challenges the department has faced with managing AUO. If Congress is not kept apprised, in a timely manner, of DHS's department-wide efforts to administer and oversee AUO, it will not have all the pertinent information necessary to oversee the department's progress, if any, in this area. Submitting regular reports to Congress on the department's progress in remediating AUO deficiencies could provide assurance that components have made progress remediating AUO implementation deficiencies and have sustained effective and appropriate use of AUO in accordance with law and regulation. In November 2013, USCIS and OCSO suspended AUO for all their AUO-eligible employees, or approximately 30 and 15 employees, respectively, in part because of various challenges with the administration of AUO across the department or within their respective components. Following these suspensions of AUO, the number of employees earning any overtime payments, the overall overtime hours worked per pay period, and the amount of overtime compensation decreased in USCIS and OCSO. According to officials from USCIS and OCSO, the decreases in the number of employees receiving AUO and in total overtime hours worked have contributed to investigation case backlogs for USCIS and decreased employee morale for both components. In addition, other DHS components responded to the Secretary's January 2014 memorandum by deauthorizing AUO for over 700 employees in specific positions—mostly within CBP and ICE. These components were compensating these employees with AUO at an annual rate of approximately $16 million at the time. According to CBP officials, these deauthorizations may lead to issues with recruiting for headquarters-based and instructor positions from elsewhere in the component, and CBP and ICE officials stated they may also contribute to a decrease in employee morale. Following USCIS and OCSO suspensions of AUO, the total number of overtime hours worked and the total amounts of overtime expenditures decreased for both components. USCIS and OCSO officials stated that these reductions have affected the backlog of investigation cases and employee morale, respectively. However, officials stated that they should be able to mitigate these impacts on both components. Prior to USCIS suspending AUO effective November 19, 2013, approximately 30 USCIS employees were eligible to earn AUO—specifically, investigators within the Office of Security and Integrity (OSI). These investigators' responsibilities, as part of OSI, include internal investigations.
Prior to the suspension of AUO, only 1 employee who received AUO payments also received compensation for scheduled overtime. Therefore, USCIS primarily compensated OSI investigators for overtime work using AUO pay. Since the suspension, USCIS has compensated employees on an hourly basis for any overtime worked. Table 4 shows the differences between USCIS use of AUO prior to the suspension and the use of scheduled overtime following suspension. With a drop in the number of employees earning overtime payments, and an approximate 78 percent drop in the average number of overtime hours worked per pay period, USCIS saw a decrease in total overtime payments of about $274,000 when comparing the 13 pay periods before and the 13 pay periods after the suspension of AUO. According to USCIS officials, the drop in the average number of employees earning overtime payments following USCIS's November 2013 suspension of AUO may be partly a result of supervisors' reluctance to approve overtime requests and is not reflective of the amount of work that needs to be accomplished. For example, in the 3 pay periods immediately following the suspension of AUO, only 2 OSI employees claimed overtime hours, while the rest claimed none. During the fourth through seventh pay periods after the AUO suspension, an average of 21 employees earned overtime, closer to the average number of employees earning AUO payments prior to the suspension. However, for the remaining 6 pay periods that we analyzed, the number of employees who earned overtime fell to an average of 4. Since the suspension of AUO, and USCIS's subsequent deauthorization of AUO in July 2014, employees have been using scheduled overtime in accordance with USCIS requirements on when overtime can be approved, according to USCIS officials. USCIS officials stated that they have seen an increase in the case backlog—the number of open investigation cases compared with the number of new cases—although the backlog increase could also be due to other factors in addition to the suspension of AUO, such as new quality assurance processes and an increased number of cases being investigated per employee. According to USCIS, the number of open cases has increased from 268 in the fourth quarter of fiscal year 2013, just before the suspension of AUO, to 346 in the third quarter of fiscal year 2014. The number of new cases has decreased from 102 in the fourth quarter of fiscal year 2013 to 59 in the third quarter of fiscal year 2014. USCIS officials stated that USCIS management has addressed the case backlog by authorizing more personnel resources in fiscal year 2014 and may increase resources for fiscal year 2015. This may provide some relief for the overall amount of work that is required to address the case backlog. Prior to OCSO suspending AUO effective November 22, 2013, approximately 15 OCSO employees working in the force protection and technical services branches earned AUO. According to OCSO officials, these employees' responsibilities include force protection, investigation, and technical security countermeasures, and these employees must be available after hours to respond to emergent events. Following the AUO suspension, OCSO used scheduled overtime to compensate these employees. Table 5 shows the differences between OCSO use of AUO prior to the suspension and the use of directed or scheduled overtime following the suspension.
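The before-and-after figures reported for USCIS above, and for OCSO below, come from comparing aggregates over the 13 pay periods on each side of a component's AUO suspension; annual amounts elsewhere in the report are estimated by multiplying a per-pay-period amount by 26. The following sketch illustrates that arithmetic only; the records, column names, and dollar values are hypothetical, not actual USCIS or OCSO payroll data.

# Illustrative sketch only: hypothetical records and column names, not actual
# USCIS or OCSO payroll data.
import pandas as pd

# One row per employee per pay period; negative periods precede the suspension.
ot = pd.DataFrame({
    "pay_period":     [-2, -1, -1, 1, 2],
    "employee_id":    ["A", "A", "B", "A", "B"],
    "overtime_hours": [20, 18, 16, 4, 6],
    "overtime_paid":  [900.0, 820.0, 700.0, 180.0, 260.0],
})

before = ot[ot["pay_period"].between(-13, -1)]  # 13 pay periods before suspension
after = ot[ot["pay_period"].between(1, 13)]     # 13 pay periods after suspension

def summarize(window):
    """Aggregate one window of pay periods for the before/after comparison."""
    return {
        "total_overtime_paid": window["overtime_paid"].sum(),
        "avg_hours_per_period": window.groupby("pay_period")["overtime_hours"].sum().mean(),
        "avg_earners_per_period": window.groupby("pay_period")["employee_id"].nunique().mean(),
    }

pre, post = summarize(before), summarize(after)
decrease_in_payments = pre["total_overtime_paid"] - post["total_overtime_paid"]
percent_drop_in_avg_hours = 1 - post["avg_hours_per_period"] / pre["avg_hours_per_period"]

# Annualizing a per-pay-period amount, as described in the methodology:
annual_estimate = 820.0 * 26

print(pre, post, decrease_in_payments, percent_drop_in_avg_hours, annual_estimate)

Applied to actual payroll extracts with these fields, the same aggregation would yield pay-period averages and changes in total payments of the kind reported here.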
According to OCSO officials, fewer employees worked overtime after the suspension of AUO because OCSO has implemented budget efficiencies and some employees have opted out of working any overtime. However, the average number of overtime hours worked per employee remained the same—approximately 12 hours per pay period before and after the suspension of AUO. With a drop in the average number of employees earning overtime payments and about a 41 percent decrease in the total overtime hours worked per pay period, OCSO saw a decrease in total overtime payments of about $77,500 when comparing the 13 pay periods before suspension and the 13 pay periods after suspension. According to OCSO officials, employee morale has dropped for those affected by the AUO suspension, because of a decrease in pay and a sense that the suspension came with no warning and with no suggestion of wrongdoing on the part of OCSO in administering AUO or of OCSO employees in using AUO. As shown in table 5, the average employee's additional overtime pay decreased from about $797 per pay period to about $666 per pay period following OCSO's AUO suspension. To OCSO officials' knowledge, no employees have resigned because of the AUO suspension. OCSO officials said there have not been any significant operational impacts resulting from the suspension of AUO, and overtime work is still done, as needed and appropriate, with approval by the supervisor in charge. In response to the DHS Secretary's January 27, 2014, memorandum, CBP, ICE, and NPPD collectively deauthorized AUO for over 700 AUO-earning employees by March 2014. However, these employees are still able to earn other types of overtime, and deauthorizing AUO for these employees does not indicate that the hours worked prior to the memorandum should not have been compensable. In general, DHS's January 2014 memorandum called for the suspension of AUO payments to individuals (1) working in component headquarters offices (unless in an operational, AUO-eligible capacity), (2) working as full-time training instructors, and (3) determined as having been inappropriately provided AUO pay. Employees located in headquarters and full-time instructors are more likely to be employed in supervised office environments, not performing independent investigative tasks that could be uncontrollable as described in federal regulation and guidance. For example, these employees may still be able to earn regular and scheduled overtime under FEPA and FLSA in a manner similar to those employees in USCIS and OCSO discussed previously. Table 6 shows CBP, ICE, and NPPD's total number of employees deauthorized from AUO in response to DHS's January 2014 memorandum. As shown in table 6, CBP deauthorized AUO for 565 employees in CBP headquarters and at other locations, who received a total of approximately $11.9 million per year in AUO pay, assuming their salaries and AUO rates remained constant. This translates to about $21,000 per employee per year. ICE deauthorized AUO for 181 employees, including 111 employees whose positions were improperly classified as eligible for AUO. According to ICE officials, these improperly classified employees were authorized for AUO based on legacy designations from INS, and were not previously identified as being improperly authorized because ICE did not conduct annual reviews of position descriptions. In total, these ICE employees received approximately $4.5 million (or about $25,000 per employee) per year in AUO pay, assuming their salaries and AUO rates remained constant.
NPPD deauthorized AUO for 3 employees who received approximately $54,500 (or about $18,000 per employee) per year in AUO pay, assuming their salaries and AUO rates remained constant. USSS officials stated that they did not suspend or deauthorize AUO for any employees because of the January 2014 memorandum. According to USSS officials, all USSS employees earning AUO perform in an operational, not administrative, capacity and remain eligible for AUO. CBP officials stated that these deauthorizations of AUO may affect recruitment of high-caliber employees into temporary details to headquarters-based or instructor positions because these employees may see a decrease in earnings as compared with earnings in field positions that remain eligible for AUO. ICE officials stated that they have not seen any discernible changes in attrition or recruitment related to the deauthorizations of AUO for headquarters-based and instructor positions. DHS employees perform critical missions on behalf of the American public. DHS components owe their employees and the public assurance that public funds, including those spent on AUO work, are used in a manner consistent with applicable law, regulation, and policy. Because certain positions and duties in DHS may require occasional or irregular overtime that cannot be scheduled in advance or controlled through administrative means, such as staggering shifts or hiring additional personnel, AUO provides DHS components with flexibility to compensate such AUO-eligible employees. The flexibility of AUO, however, requires the department and components to provide rigorous oversight of the use of AUO—including careful and consistent review of hours claimed and employee eligibility, among other things, so that AUO is appropriately used. Reviews as far back as 1997 show that current DHS entities—such as U.S. Border Patrol—have not properly managed AUO, partly because of a culture throughout DHS in which employees in certain positions use AUO without much oversight and regardless of whether AUO is the appropriate compensation for the work being performed. Furthermore, extensive AUO oversight deficiencies have continued since the formation of DHS in 2003 because DHS components have not implemented AUO in a manner consistent with federal law, regulation, and guidance, and component policies. DHS is taking actions to improve the management of AUO as detailed in the Deputy Secretary's May 23, 2014, memorandum that requires components to develop and implement corrective action plans and requires DHS to develop a department-wide directive on AUO. Until the directive is finalized, it is unclear what provisions will be included and whether oversight mechanisms will be sufficiently robust to address the myriad deficiencies identified with AUO administration and oversight. Given the department's total spending on AUO payments of over $500 million in fiscal year 2013 and DHS's long-standing and widespread AUO administration and oversight deficiencies, ensuring accountability and safeguarding the expenditure of public funds is paramount. By developing and executing a department-wide oversight mechanism to ensure components implement AUO appropriately on a sustained basis, and in accordance with law and regulation, DHS could be better positioned to monitor components' progress remediating AUO deficiencies.
In addition, by DHS reporting annually to Congress on the extent to which DHS components have made progress in remediating AUO implementation deficiencies, including information from annual third-party AUO audits or other department AUO oversight efforts, Congress could have reasonable assurance that DHS components have sustained effective and appropriate use of AUO in accordance with law and regulation. To ensure that DHS components have sustained effective and appropriate use of AUO in accordance with law and regulation, Congress should consider requiring DHS to report annually to Congress on the use of AUO within the department, including the extent to which DHS components have made progress remediating AUO implementation deficiencies and information from annual third-party AUO audits or other department AUO oversight efforts. To better position DHS to monitor components' progress remediating AUO deficiencies, we recommend that the Secretary of DHS develop and execute a department-wide oversight mechanism to ensure components implement AUO appropriately on a sustained basis, and in accordance with law and regulation. We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reprinted in appendix IV. In its comments, DHS agreed with the recommendation and outlined actions to address it. Specifically, DHS stated its forthcoming department-wide AUO policy will require (1) annual third-party audits of components' use of AUO that will measure compliance with DHS policy, component requirements, and related regulations; (2) components to provide a copy of each completed annual third-party audit to component leadership and the DHS Chief Human Capital Officer; and (3) component heads to certify annually that their AUO-eligible positions meet statutory, regulatory, and DHS policy requirements. DHS further stated that the DHS Chief Human Capital Officer is to inform the DHS Under Secretary for Management of the results of these annual audits. If the audits identify any concerns, the Under Secretary for Management is to facilitate their resolution or elevate them to more senior leadership to ensure they are addressed. DHS reported that its department-wide AUO policy is going through final internal coordination and will be distributed to national labor unions for review and comment before publication. DHS estimates the AUO policy will be finalized and implemented by June 30, 2015. If fully implemented, these actions will address the intent of our recommendation. DHS also provided technical comments on a draft of this report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix V. This report addresses the following three questions: 1. How much has the Department of Homeland Security (DHS) spent on administratively uncontrollable overtime (AUO) pay from fiscal year 2008 through March 2014? 2. To what extent have DHS components implemented AUO appropriately? 3.
How have recent AUO suspensions and deauthorizations at selected DHS components affected their use of overtime? For this report, we assessed DHS's use of AUO for those components that compensated employees with AUO from fiscal year 2008 (October 2007) to March 22, 2014—the most currently available data for fiscal year 2014 at the time of our data request. However, when calculating annual averages, we used the last full year of available data—fiscal year 2013. Specifically, these components included U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), National Protection and Programs Directorate (NPPD), U.S. Citizenship and Immigration Services (USCIS), U.S. Secret Service (USSS), and the Office of the Chief Security Officer (OCSO). We include OCSO in our scope when we refer to DHS components throughout this report. We selected this time frame because the Office of Special Counsel (OSC) first raised allegations of AUO abuse within DHS in 2007. We omitted the Federal Law Enforcement Training Center (FLETC) and DHS's Office of the Chief Information Officer (OCIO) because they previously used AUO on a limited basis, compensating fewer than 10 employees for about 1 year. Specifically, FLETC compensated 1 employee with AUO from July 2010 to October 2011, and OCIO compensated 4 employees with AUO from April 2012 to May 2013. To determine how much each DHS component spent on AUO from fiscal year 2008 through March 2014, we collected AUO payments data for all DHS components that paid AUO since October 2007. These data included AUO payments to individual employees within each component per pay period as well as occupational characteristics such as grade, series, position title, salary, location, and program office. These data allowed us to calculate, for each component and in aggregate for DHS, the total amount of AUO paid per fiscal year; the average amount of AUO pay per employee; the number of recipients of AUO; and the counts of AUO recipients by location, grade, and position title. We assessed the reliability of the AUO payments data by (1) interviewing agency officials about the data sources, the systems' controls, and any quality assurance steps performed by officials before data were provided from the systems and (2) testing the data for missing data, duplicates, negative dollar amounts or payments, entries with values beyond expected ranges, or other entries that appeared to be unusual (a simplified illustration of such checks and related calculations appears later in this discussion of our scope and methodology). We identified limitations to the data, including incomplete duty location information for CBP, incomplete date ranges for NPPD, and missing AUO payments and other variables for several components. The missing locations data for CBP affected about 9 percent of all records that we received, and we included caveats in the report where relevant. In addition, NPPD was unable to provide AUO payments from fiscal years 2008 through 2009 because its system does not allow access to older years' information at the level of detail requested. NPPD accounted for about 1 percent of all AUO payments during the years of data available to us, so we believe the absence of NPPD's fiscal year 2008 and 2009 AUO payments data likely had a minimal impact on our reported overall expenditures for those years. We included caveats in the report where relevant related to NPPD's AUO payments data.
Additionally, USSS was able to provide calendar year payments data only for 2008 through 2010, resulting in a 3-month overlap between the last 3 months of calendar year 2010 and the first 3 months of fiscal year 2011. Since USSS constitutes a relatively small share of total AUO payments, any impacts on the aggregate data are likely minimal. We found the components' data sufficiently reliable and complete for providing descriptive information on the amount of AUO that DHS components paid and on the occupational characteristics of employees earning AUO payments. To determine the extent to which DHS components implemented AUO appropriately, we reviewed AUO policies and procedures for those DHS components that compensated employees with AUO from fiscal year 2008 through March 2014—CBP, ICE, NPPD, USSS, OCSO, and USCIS. We analyzed these policies and procedures to determine the extent to which they comply with relevant federal regulations and incorporate Office of Personnel Management (OPM) guidance. We also interviewed OPM officials about these regulations and guidance to better inform our understanding of components' AUO policies and procedures. To describe the extent to which AUO policies and procedures may have contributed to inappropriate implementation of AUO, we reviewed relevant Office of Inspector General (OIG) and OSC-referred component investigation and review reports that were based on referrals or otherwise begun between May 2007 and January 2014. We reviewed the Department of Justice's OIG's sampling methodology for its 1997 review of Border Patrol employees' use of AUO, and found it sufficiently sound to report the results. In addition, to assess the extent to which ICE consistently adjusted AUO rates, we compared the AUO rate changes for employees identified by ICE against the personnel actions intended to decrease those employees' AUO rates, and thereby their AUO payments. We selected ICE because officials indicated that AUO rate adjustments were not carried out in a timely fashion by field office officials. Further, we reviewed AUO activity documentation submitted by NPPD employees from December 1 through 14, 2013, because NPPD stores this documentation electronically and it was the most recently available documentation at the start of our review. We also met with agency officials from component program offices that compensated employees with AUO to obtain their perspectives on AUO policies and procedures, as well as implementation of these policies and procedures, including (1) DHS's Office of the Chief Human Capital Officer (OCHCO); (2) CBP's U.S. Border Patrol, Office of Internal Affairs, Office of Field Operations, Office of Training and Development, and Office of Air and Marine; (3) ICE's Office of Human Capital, Enforcement and Removal Operations, Homeland Security Investigations, and Office of Professional Responsibility; (4) NPPD's Infrastructure Security Compliance Division, Protective Security Coordination Division, and Federal Protective Service; (5) USCIS's Office of Security and Integrity; (6) USSS's Management and Organization Division, Office of Investigations, Office of Protective Operations, and Office of Technical Development and Mission Support; and (7) OCSO. We also met with representatives from two unions with bargaining unit members receiving AUO to obtain their perspectives on components' use of AUO: the National Border Patrol Council and the National ICE Council.
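To make the data work described above concrete, the following sketch shows one way the reliability checks (missing values, duplicates, negative payments, and entries beyond expected ranges) and the descriptive aggregates (total AUO paid per fiscal year, number of recipients, and average AUO pay per employee) could be computed. It is an illustration only: the file name, column names, and the out-of-range threshold are assumptions, not the actual data layout or the code used for this review.

# Illustrative sketch only: the file and column names (component, fiscal_year,
# employee_id, pay_period, auo_paid) are assumptions, not the actual DHS data.
import pandas as pd

payments = pd.read_csv("auo_payments.csv")  # one row per employee per pay period

# Reliability checks of the kind described in the methodology.
checks = {
    "missing_values": payments[["employee_id", "pay_period", "auo_paid"]].isna().sum().to_dict(),
    "duplicate_records": int(payments.duplicated(subset=["employee_id", "pay_period"]).sum()),
    "negative_payments": int((payments["auo_paid"] < 0).sum()),
    "beyond_expected_range": int((payments["auo_paid"] > 10_000).sum()),  # threshold is illustrative
}
print(checks)

# Descriptive aggregates: totals per fiscal year, recipients, and average AUO
# pay per employee, by component and for DHS overall.
by_component = (
    payments.groupby(["fiscal_year", "component"])
    .agg(total_auo=("auo_paid", "sum"), recipients=("employee_id", "nunique"))
)
by_component["avg_auo_per_employee"] = by_component["total_auo"] / by_component["recipients"]
dhs_totals = payments.groupby("fiscal_year")["auo_paid"].sum()
print(by_component)
print(dhs_totals)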
To determine actions DHS and its components have taken to strengthen AUO policies and procedures and to improve implementation of these policies and procedures, we reviewed DHS's January 27, 2014, and May 23, 2014, memorandums on AUO administration. To determine how recent DHS suspensions of AUO have affected use of overtime, we analyzed employee overtime information for those employees who received AUO compensation from USCIS and OCSO, both of which suspended AUO in November 2013. Components use different terms to describe the cessation of AUO within their respective components. For the purposes of this report, we use "suspension" of AUO when addressing components' actions to completely stop use of AUO for all employees without a formal, written determination as to a particular employee's or position's AUO eligibility. Specifically, we compared overtime hours worked and overtime paid for these employees prior to and after the AUO suspension. Our analysis included 13 pay periods of AUO payments and hours worked prior to the suspension of AUO and 13 pay periods of Federal Employees Pay Act (FEPA), or scheduled overtime, payments and hours worked following the suspension of AUO. To determine how recent deauthorizations have affected components, we analyzed AUO paid to employees whom DHS components subsequently deemed to be incorrectly receiving AUO following DHS Secretary Johnson's January 27, 2014, memorandum, which instructed components to suspend AUO for certain categories of employees. For the purposes of this report, we use "deauthorization" of AUO when referring to components' actions to identify specific employees or positions as no longer eligible for AUO. We analyzed a list of all individuals from CBP, ICE, and NPPD who had been deauthorized from receiving AUO, and requested that each record also include the reason for the person's deauthorization. USSS responded that it did not have any employees meeting the criteria of the memorandum and therefore did not suspend or deauthorize any employees and did not provide any such data to us. We combined the CBP, ICE, and NPPD lists with the AUO payments data previously discussed, and calculated the amount of AUO that DHS components paid individuals just prior to the time of deauthorization in January 2014. Subsequently, we estimated the annual amount paid to these individuals by multiplying the total they were paid at the time of deauthorization by 26, the total number of pay periods in a year. Since these pre- and postsuspension data and deauthorization data are from the same sources as those we assessed above, we interviewed agency officials about any specific limitations to these different data sets. We did not identify any additional limitations, and we noted any relevant caveats in our analysis. We found the data sufficiently reliable for providing descriptive information on how DHS components' November 2013 suspensions of AUO have affected use of overtime and on the amount of AUO paid to employees deauthorized from receiving AUO following the January 2014 memorandum. In addition, we interviewed officials from USCIS and OCSO, as well as CBP, ICE, and NPPD, to collect their perspectives on the effect that the AUO suspensions and deauthorizations had on their ability to fulfill mission-critical tasks, such as any related attrition or morale issues. We conducted this performance audit from January 2014 to December 2014, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Reviews of the Department of Homeland Security (DHS) Components’ Use of Administratively Uncontrollable Overtime (AUO) Various entities have investigated and reviewed the use of AUO by DHS components, including components that existed prior to the formation of DHS in 2003. More recently, the DHS Office of Inspector General (OIG) and DHS components’ offices of internal affairs, in response to allegations referred by the U.S. Office of Special Counsel (OSC), have conducted investigations and reviews of AUO at DHS. Table 7 lists these selected past investigations and reviews of DHS components’ use of AUO. DHS components have numerous and varying AUO oversight and administration policies and procedures. In particular, policies and procedures differ regarding the frequency of AUO authorization and rates reviews, activity documentation, and guidance on temporary details. Table 8 shows the extent to which selected AUO oversight and administration mechanisms are included in DHS component policies. In addition, some component AUO policies and procedures do not address federal regulation and guidance regarding temporary details. Specifically, CBP’s and USSS’s policies do not include guidance to identify whether employees should continue to receive AUO while on temporary details away from their normal duties. Table 9 shows the extent to which DHS component policies and procedures address federal regulations regarding employees earning AUO while assigned to a temporary detail. In addition to the contact named above, Adam Hoffman (Assistant Director), David Alexander, Cynthia Grant, Eric Hauswirth, Susan Hsu, Thomas Lombardi, Alicia Loucks, Elizabeth Luke, Ruben Montes de Oca, and Sean Standley made key contributions to this report. | DHS had approximately 29,000 employees earning AUO, a type of premium pay intended to compensate eligible employees for substantial amounts of irregular, unscheduled overtime. DHS components' use of AUO has been a long-standing issue since at least 2007, when reviews identified the inappropriate use of AUO in DHS. GAO was asked to review DHS components' use and implementation of AUO. This report addresses, among other things, how much DHS spent on AUO from fiscal year 2008 through March 2014 (the most current data available) and the extent to which DHS components implemented AUO appropriately. GAO analyzed AUO payments data from components that have regularly used AUO, which included U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, U.S. Secret Service, National Protection and Programs Directorate, U.S. Citizenship and Immigration Services, and Office of the Chief Security Officer. When calculating annual averages, GAO used the last full fiscal year of available data (2013). GAO also analyzed component AUO policies and procedures to assess compliance with federal regulations and guidance. Department of Homeland Security (DHS) components spent $512 million on administratively uncontrollable overtime (AUO) payments in fiscal year 2013 and $255 million through March 2014, mostly on Border Patrol agents. 
DHS's AUO expenditures increased from fiscal years 2008 through 2013, in part because of higher payments per earner. The average annual AUO payment per employee increased by about 31 percent, or from about $13,000 to about $17,000 from fiscal years 2008 through 2013, as shown in the figure below. Some DHS component policies are not consistent with certain provisions of federal regulations or guidance, and components have not regularly followed their respective AUO policies and procedures, contributing to widespread AUO administration and oversight deficiencies. For example, components have not consistently reviewed hours claimed and employee eligibility for AUO. In response, in 2014, DHS issued two memorandums. One required the suspension of AUO for certain employees. The other required components to submit plans to address deficiencies, which most DHS components have done. DHS also plans to issue a department-wide AUO directive and to monitor component implementation of corrective actions through its ongoing human resource office assessments every 3 to 4 years, among other things. However, this monitoring is too general and infrequent to effectively monitor or evaluate DHS components' progress. Given the department's long-standing and widespread AUO administration and oversight deficiencies, developing and executing a department-wide oversight mechanism to ensure components implement AUO appropriately on a sustained basis, and in accordance with law and regulation, could better position DHS to monitor components' progress remediating AUO deficiencies. Further, DHS's reporting annually to Congress on the extent to which DHS components have made progress in remediating AUO implementation deficiencies could provide Congress with reasonable assurance that DHS components have sustained effective and appropriate use of AUO in accordance with law and regulation. GAO recommends that DHS develop and execute a department-wide mechanism to ensure components implement AUO appropriately. Congress should consider requiring DHS to report annually on components' progress remediating AUO implementation deficiencies. DHS concurred with the recommendation. |
The IMF’s first purpose is promoting international monetary cooperation. Its Articles of Agreement, as amended, provide that it may make its resources available to members experiencing balance-of-payments problems; this is to be done under “adequate safeguards.” The IMF’s approach to alleviating a country’s balance-of-payments problems has two main components—financing and conditionality—that are intended to address both the immediate crisis as well as the underlying factors that contributed to the difficulties. Although financing is designed to help alleviate the short-term balance-of-payments crisis by providing a country with needed reserves, it may also support the longer term reform efforts by providing needed funding. The access to and disbursement of IMF financial assistance are conditioned upon the adoption and pursuit of economic and structural policy measures the IMF and recipient countries negotiate. This IMF “conditionality” aims to alleviate the underlying economic difficulty that led to the country’s balance-of-payments problem and ensure repayment to the IMF. As the reasons for and magnitude of countries’ balance-of- payments problems have expanded (due, in part, to the growing importance of external financing and changes in the international monetary system since the 1970s), conditionality has also expanded. According to the IMF, conditionality has moved beyond the traditional focus of reducing aggregate demand, which was appropriate for relieving temporary balance-of-payments difficulties, typically in industrial economies. Structural policies—such as reducing the role of government in the economy and opening the economy to outside competition—that take longer to implement and are aimed at increasing the capacity for economic growth—became an important part of conditionality. More recently, the financial crises in Mexico (1994-95) and in Asia and Russia (1997-99) have resulted in an increased focus on strengthening countries’ financial sectors and the gradual opening of their economies to international capital flows. Over time, the IMF has developed a broad framework for establishing and monitoring financial assistance arrangements that is applied on a case-by- case basis considering each country’s circumstances. This process, based on the IMF’s analysis of country data and projections of future economic performance, gives the IMF wide latitude in establishing an actual or potential balance-of-payments need, the amount and timing of resource disbursements, and the conditions for disbursements; and in monitoring and, in some cases, modifying the arrangements. Under its Articles of Agreement, as amended, the IMF provides financial assistance only to those countries with a balance-of-payments need. Under these Articles, the IMF primarily considers actual or potential difficulties in either the country’s balance of payments or its reserve position to be a basis for providing financial assistance. This framework has provided the IMF with wide latitude to consider countries’ individual circumstances and changes in the international monetary system in its financial assistance decisions. The specific conditions that the IMF and the country authorities negotiate are intended to address the immediate and underlying problems that contributed to the country’s balance-of-payments difficulty, while ensuring repayment to the IMF. 
These conditions are intended to be clear indicators of a country’s progress toward the overall program goals, such as strengthening the country’s balance of payments or reducing inflation. These conditions can include a variety of changes in a country’s fiscal, monetary, or structural policies. Fiscal policy conditions may call for countries to reduce budget deficits; Brazil’s program, for instance, called for limits on public sector debt. Monetary policy conditions seek to, among other things, rebuild international reserves to promote financial stability; Uganda’s program set a minimum level for its net international reserves. Changes in structural policies may include revisions to financial market regulation or tax policies; Korea’s program called for restructuring its financial supervisory system. Political constraints and economic uncertainty can make these negotiations sensitive and difficult. After a country fulfills any early IMF requirements, known as “prior actions,” and the IMF Executive Board approves the financial arrangement, the program is to take effect and the country is eligible to receive its first disbursement of funds. Korea and Argentina exemplify the differences that can exist between countries’ financial arrangements with the IMF. Korea’s program provided substantial funding at the earliest stage of the program to counter an ongoing balance-of-payments crisis in late 1997 resulting from substantial losses in Korea’s foreign currency reserves and the depreciation of the won, Korea’s currency. The country faced balance-of-payments problems primarily due to significant capital outflows. Korean banks had a large amount of short-term external debt that needed frequent refinancing. As market confidence fell, the willingness of external creditors to “roll over” or refinance these loans declined rapidly. The government’s attempt to support the exchange rate rapidly depleted official reserves of foreign currencies. The main goals for the program’s monetary policy were to limit the depreciation of the won and contain inflation. In contrast, Argentina’s 1998 program was designed as a precaution against a potential balance-of-payments problem that could result from external economic shocks. Although Argentina enjoyed good access to capital markets and had employed a strategy to lengthen the maturity of its debt and borrow when interest rates were low, it faced an uncertain future due to deteriorating conditions in the international financial environment and the effect this likely would have on its future access to capital markets. Argentina agreed to access IMF resources only if external conditions made access necessary. The program was principally concerned with maintaining fiscal discipline and enacting labor market and tax reforms that were intended to maintain investor confidence and strengthen the economy’s competitiveness. The process of monitoring a country’s progress toward overall program goals and compliance with program conditions involves both the borrower country and the IMF. The approach is designed to incorporate data on a country’s economic performance as well as the judgment of the IMF Executive Board and staff. IMF staff reviews a member’s economic performance and implementation of policy changes that were negotiated as conditions of the financial assistance. The staff then reports to the Executive Board at regularly scheduled intervals for each assistance program. 
In situations where conditions have not been met, the staff formally or informally advises the Executive Board. The staff may recommend that the Board grant a waiver for the nonobservance of the unmet conditions. Typically waivers can be recommended if the nonobservance is minor and program implementation is otherwise “on track.” If there is no waiver, additional financial assistance is not to be made available to the country and the program is effectively suspended until there is an agreement between the IMF and the country that is approved by the IMF Executive Board. This agreement may mandate policy changes before any further assistance is granted and change the conditions for future assistance. In monitoring compliance, IMF missions to each country documented a country’s progress in satisfying conditions. In some cases, the IMF determined the countries had made sufficient overall progress in meeting program conditions so that additional funds could be made available, even when the countries had not satisfied some key conditions. For example, in response to the Argentine government’s request, the IMF staff recommended, and the Executive Board approved, a waiver on the basis of the IMF’s judgment that there was sufficient overall progress in implementing the program and that the deviation from meeting the required condition was minor. In March 1999, the IMF Board approved a waiver when Argentina’s fiscal deficit (1.1 percent of gross domestic product) slightly exceeded its target of 1 percent. Access to funding was not delayed. Similarly, in April 1998, the IMF Board approved a waiver when the Ugandan government experienced a temporary shortfall in its checking account balances, causing it to miss a required condition. According to the IMF staff, this shortfall happened because the government made payments sooner than expected. The staff viewed this as a minor, technical issue and recommended the waiver. The IMF and borrower countries may also negotiate changes in conditions to respond to unanticipated developments. For example: The IMF and Korea revised Korea’s program several times during its first 2 months. The IMF acknowledged that the initial program was “overly optimistic” as economic conditions worsened; Korea continued to have access to financial assistance during these renegotiations. Brazil’s program was modified due to adverse events. The maintenance of the exchange rate regime was an objective of Brazil’s IMF program. Brazil turned to the IMF for assistance in September 1998, when its currency came under pressure as a result of the Russian crisis, and it experienced a significant loss of reserves. This reserve loss decelerated after the negotiations began; but, according to Brazilian officials, Brazil’s currency came under additional pressure after its IMF program had started. The reasons for this included the defeat in Brazil’s congress of two tax measures deemed crucial to the fiscal adjustment program and the reluctance of a number of Brazilian state governors to fulfill their financial obligations to the government. To try to stem the additional loss of reserves, the Brazilian government found it necessary to devalue and then float the currency. The IMF program was then revised to reflect the new economic situation and currency regime. In some cases, the IMF determined that the countries had not made sufficient overall progress in meeting program conditions. 
In these cases, no additional funds were made available until, in the IMF’s judgment, satisfactory progress had been achieved. The IMF delayed disbursements to Indonesia at various points during its current program until the IMF determined that the country had made sufficient overall progress in meeting the program requirements. For example, the IMF delayed Indonesia’s disbursements from mid-March 1998 to early in May 1998 due to the IMF staff’s determination that Indonesia had made insufficient progress in carrying out its program. The first review was completed in May 1998. Indonesia met none of the required conditions addressing macroeconomic components of the program and one of the key conditions for structural economic changes. IMF staff recommended that the Board grant Indonesia’s request for waivers of these conditions on the basis of actions taken by the government. (For example, the government had established a new comprehensive bank-restructuring program in January 1998 to be implemented by a new agency, the Indonesian Bank Restructuring Agency.) Following the Board’s approval, Indonesia received its next disbursement. At this time, the IMF moved from quarterly to monthly reviews of Indonesia’s program. Disbursements were also delayed in the process of completing several subsequent reviews. The IMF faced continued problems in Russia’s implementation of its IMF program. Over time, the IMF delayed disbursements and program approval, reduced the amount of the disbursement, and ultimately suspended the program. According to the IMF, it delayed disbursements because of Russia’s poor tax collections, reflecting a lack of government resolve to collect taxes. However, throughout Russia’s program the IMF staff expressed the view that Russia’s key senior authorities were committed to the program and should be supported; therefore, the IMF Board continued to approve disbursements. Events in 1998 particularly illustrate this. The delayed approval of the 1998 program, due to cabinet changes and difficulty in meeting the revenue package, meant that Russia received no funds between January and June 1998. The program was finally approved in June 1998, on the basis of implementation of prior actions. In July 1998, the IMF approved additional funds to Russia but reduced the amount of the disbursement from $5.6 billion to $4.8 billion due to delays in getting two measures passed in the Duma. The IMF was scheduled to release the next disbursement in September 1998, but Russia had deviated so far from the program that the IMF made no further disbursements. In March 1999, Russia requested that the program be terminated. In April 1999, the IMF and Russia announced they had reached agreement on a new arrangement. To date, the IMF Board has not approved the new arrangement. Although all borrowers restrict trade to some extent, only a few of the 98 current IMF borrowers are traders large enough to affect the U.S. economy. Trade policies were not the major focus of IMF conditions for structural reform in the four borrowers we studied that are important U.S. trade partners. The IMF did seek to promote trade liberalization in these countries, however, and Brazil, Indonesia, and Korea undertook some actions to liberalize their trade regimes. Also, although U.S. imports from some of these countries have grown in some sectors, the effect of trade policy changes on U.S. imports has probably been of lesser magnitude than the effect of the substantial macroeconomic changes that these countries experienced. 
In its programs with four important U.S. trade partners, the IMF focused primarily on macroeconomic and structural reforms other than trade reforms. As we noted earlier, the IMF seeks to address the immediate and underlying problems that contributed to a country’s balance-of-payments problem; restrictive trade policies were not major factors contributing to the countries’ needs for IMF assistance. Nevertheless, the IMF sought to promote trade liberalization in the countries, as it deemed appropriate. Part of the IMF’s mission, as embodied in its Articles of Agreement, is to facilitate the expansion and balanced growth of world trade. As such, countries that have borrowed from the IMF sometimes have liberalized their trade systems within the context of their financial arrangements. Borrowers have eliminated or reduced tariffs or nontariff barriers to imports and have ended or altered export policies, such as subsidies and export restrictions. Brazil, Indonesia, and Korea have undertaken some trade liberalization within the context of their recent IMF financial arrangements. Nevertheless, their overall conditionality has focused primarily on macroeconomic and structural reforms other than trade reform because restrictive trade policies per se were not major causes of their balance-of- payments difficulties, according to the Treasury Department and the IMF. Reflecting this, only one of the trade liberalization measures taken was a required condition—the requirement that Indonesia reduce export taxes on logs and sawn timber. Further, although some of the import and export policies to be eliminated or modified under their IMF arrangements have been of concern to the United States and other countries, the stated purpose of these measures is not to benefit the three countries’ trading partners. Rather, the purpose is to help resolve the countries’ balance-of- payments problems and address the underlying causes of these problems by promoting greater efficiency in their economic systems. Korea has eliminated four export subsidies, reduced some import barriers, and made improvements to the transparency of its subsidy programs. Indonesia has made many changes to its trade policies in the context of its IMF financial arrangements, including reducing or eliminating some import tariffs and export restrictions. Indonesia has committed to phase out most remaining nontariff import barriers and export restrictions by the time its IMF program ends in the year 2000. Brazil has committed to limit the scope of its interest equalization export subsidy program to capital goods and has suspended for 1999 a tax rebate given to exporters. Further, according to the IMF, Brazil has kept its pledge not to impose any new trade restrictions that hinder regional integration, are inconsistent with the World Trade Organization, or that are for balance-of-payments purposes. The large macroeconomic changes in these four countries caused by their recent financial crises greatly complicate predicting and measuring the trade policies’ impact on the United States. Our analysis of 1997-98 trade data reveals that overall U.S. imports from Brazil, Indonesia, Korea, and Thailand rose moderately in 1998. However, there have been substantial increases in U.S. imports from these countries in certain sectors. For example, imports of one category of flat-rolled steel from Korea rose by 36 percent to $355.8 million, and paper and paperboard imports from Indonesia were up by 284 percent to $40.8 million. Under U.S. 
law, there are procedures to investigate and remedy situations, such as steel import surges, where U.S. industry believes rising imports are attributable to foreign government policy and harm its economic interests. In some sectors, rising imports may be due to other factors besides government policies. For example, market factors, such as increasing U.S. coffee consumption and the need for more natural rubber for the larger tires being used in U.S. motor vehicles, may be the reason for some of the import surges. Also, chemical imports are causing price pressures on U.S. producers in the United States, but the import increases are partly due to depressed demand within Asia that has led to increased shipments to the United States. Mr. Chairman, this concludes our statement this morning. My colleagues and I would be pleased to answer any questions you or members of the subcommittee may have. | Pursuant to a congressional request, GAO discussed the International Monetary Fund (IMF), focusing on the: (1) conditions IMF negotiates with its borrower countries; and (2) trade policies of borrower countries. 
GAO noted that: (1) IMF has a process for establishing and monitoring financial arrangements with member countries and it generally followed the process for the six countries in GAO's study; (2) the process encompasses data collection and analysis as well as judgment by the IMF Executive Board and staff, and gives IMF wide latitude in assessing a country's initial request for assistance, negotiating terms and conditions for that assistance, and determining the country's continued access to IMF resources; (3) under its charter, IMF limits financial assistance to members with a balance-of-payments need; IMF has broadly interpreted this to encompass a wide range of financial difficulties; (4) IMF has continued to make disbursements to a country that had not met all conditions when it decided that the country was making satisfactory progress; this decision was based on IMF's analysis of data on the country's progress and IMF's judgment; (5) when IMF determined that the country's progress in meeting key conditions was insufficient, disbursements have been delayed, and have not resumed unless or until satisfactory progress was achieved, in IMF's judgment; (6) IMF financial arrangements in four borrower countries that are important trading partners of the United States focus primarily on macroeconomic and structural reforms rather than trade reform because restrictive trade policies were not major causes of the countries' financial problems leading to the request for IMF assistance, according to the Department of the Treasury and IMF; (7) nevertheless Brazil, Indonesia, and Korea have undertaken some trade liberalization within the context of their most recent IMF arrangements; (8) according to Treasury, Thailand's recent IMF financial arrangements have had no trade liberalization commitments because trade policies were not the root causes of its financial crisis, and also because Thailand's trade system was more open than the other three countries' systems; (9) the large macroeconomic changes in these four borrower countries caused by their recent financial crises have probably been a more important source of changes in their trade policies; (10) this greatly complicates the task of measuring the impact of the trade policies on the United States; and (11) the countries' trade policies can distort trade in specific sectors, however, which could contribute to import surges. |
DOD and VA offer health care benefits to active duty servicemembers and veterans, among others. Under DOD’s health care system, eligible beneficiaries may receive care from military treatment facilities or from civilian providers. Military treatment facilities are individually managed by each of the military services—the Army, the Navy, and the Air Force. Under VA, eligible beneficiaries may obtain care through VA’s integrated health care system of hospitals, ambulatory clinics, nursing homes, residential rehabilitation treatment programs, and readjustment counseling centers. VA has organized its health care facilities into a polytrauma system of care that helps address the medical needs of returning servicemembers and veterans, in particular those who have an injury to more than one part of the body or organ system that results in functional disability and physical, cognitive, psychosocial, or psychological impairment. Persons with polytraumatic injuries may have injuries or conditions such as TBI, amputations, fractures, and burns. Over the past 6 years, DOD has designated over 29,000 servicemembers involved in Operation Iraqi Freedom and Operation Enduring Freedom as wounded in action, and almost 70 percent of these servicemembers are from the Army active, reserve, and national guard components. Servicemembers injured in these conflicts are surviving injuries that would have been fatal in past conflicts, due, in part, to advanced protective equipment and medical treatment. The severity of their injuries can result in a lengthy transition from patient back to duty, or to veterans’ status. Initially, most seriously injured servicemembers from these conflicts, including activated National Guard and Reserve members, are evacuated to Landstuhl Regional Medical Center in Germany for treatment. From there, they are usually transported to military treatment facilities in the United States, with most of the seriously injured admitted to Walter Reed Army Medical Center or the National Naval Medical Center. According to DOD officials, once they are stabilized and discharged from the hospital, servicemembers may relocate closer to their homes or military bases and are treated as outpatients by the closest military or VA facility. Returning injured servicemembers must potentially navigate two different disability evaluation systems that generally rely on the same criteria but for different purposes. DOD’s system serves a personnel management purpose by identifying servicemembers who are no longer medically fit for duty. The military’s process starts with identification of a medical condition that could render the servicemember unfit for duty, a process that could take months to complete. The servicemember goes through a medical evaluation board proceeding, where medical evidence is evaluated, and potentially unfit conditions are identified. The member then goes through a physical evaluation board process, where a determination of fitness or unfitness for duty is made and, if found unfit for duty, a combined percentage rating is assigned for all unfit conditions and the servicemember is discharged from duty. The injured servicemember then receives monthly disability retirement payments if he or she meets the minimum rating and years of duty thresholds or, if not, a lump-sum severance payment. VA provides veterans compensation for lost earning capacity due to service-connected disabilities. 
Although a servicemember may file a VA claim while still in the military, he or she can only obtain disability compensation from VA as a veteran. VA will evaluate all claimed conditions, whether they were evaluated by the military service or not. If the veteran is found to have one or more service-connected disabilities with a combined rating of at least 10 percent, VA will pay monthly compensation. The veteran can claim additional benefits, for example, if a service-connected disability worsens. While the Army took near-term actions to respond to reported deficiencies in care for its returning servicemembers, and the Senior Oversight Committee is undertaking efforts to address more systemic problems, challenges remain to overcome long-standing problems and ensure sustainable progress. In particular, efforts were made to respond to problems in four key areas: (1) case management, (2) disability evaluation systems, (3) TBI and PTSD, and (4) data sharing between DOD and VA. The three review groups identified several problems in these four areas including: a need to develop more comprehensive and coordinated care and services; a need to make the disability systems more efficient; more collaboration of research and establishment of practice guidelines for TBI and PTSD; and more data sharing between DOD and VA. While efforts have been made in all four areas, challenges have emerged including staffing for the case management initiatives and transforming the disability evaluation system. The three review groups reporting earlier this year identified numerous problems with DOD’s and VA’s case management of servicemembers, including a lack of comprehensive and well-coordinated care, treatment, and services. Case management—a process intended to assist returning servicemembers with management of their clinical and nonclinical care throughout recovery, rehabilitation, and community reintegration—is important because servicemembers often receive services from numerous therapists, providers, and specialists, resulting in differing treatment plans as well as receiving prescriptions for multiple medications. One of the review groups reported that the complexity of injuries in some patients requires a coordinated method of case management to keep the care of the returning servicemember focused and goal directed, and that this type of care was not evident at Walter Reed. The Dole-Shalala Commission recommended that recovery coordinators be appointed to craft and manage individualized recovery plans that would be used to guide the servicemembers’ care. The Dole-Shalala Commission further recommended that these recovery coordinators come from outside DOD or VA, possibly from the Public Health Service, and be highly skilled and have considerable authority to be able to access resources necessary to implement the recovery plans. The Army and the Senior Oversight Committee’s workgroup on case management have initiated efforts to develop case management approaches that are intended to improve the management of servicemembers’ recovery process. See table 1 for selected efforts by the Army and Senior Oversight Committee to improve case management services. The Army’s approach includes developing a new organizational structure for providing care to returning active duty and reserve servicemembers who are unable to perform their duties and are in need of health care—this structure is referred to as a Warrior Transition Unit. 
Within each unit, the servicemember is assigned to a team of three key staff, and this team is responsible for overseeing the continuum of care for the servicemember. The Army refers to this team as a “triad,” and it consists of a (1) primary care manager—usually a physician who provides primary oversight and continuity of health care and ensures the quality of the servicemember’s care; (2) nurse case manager—usually a registered nurse who plans, implements, coordinates, monitors, and evaluates options and services to meet the servicemember’s needs; and (3) squad leader—a noncommissioned officer who links the servicemember to the chain of command, builds a relationship with the servicemember, and works alongside the other parts of the triad to ensure the needs of the servicemember and his or her family are met. As part of the Army’s Medical Action Plan, the Army established 32 Warrior Transition Units to provide a unit in every medical treatment facility that has 35 or more eligible servicemembers. The Army’s goal is to fill the triad positions according to the following ratios: 1:200 for primary care managers; 1:18 for nurse case managers; and 1:12 for squad leaders. This approach is a marked departure for the Army. Prior to the creation of the Warrior Transition Units, the Army separated active and reserve component soldiers into different units. One review group reported that this approach contributed to discontent about which group received better treatment. Moreover, the Army did not have formalized staffing structures, nor did it routinely track patient-care ratios, which the Independent Review Group reported contributed to the Army’s inability to adequately oversee its program or identify gaps. As the Army has sought to fill its Warrior Transition Units, challenges to staffing key positions are emerging. For example, many locations have significant shortfalls in registered nurse case managers and noncommissioned officer squad leaders. As shown in figure 1, about half of the total required staffing needs of the Warrior Transition Units had been met across the Army by mid-September 2007. However, the Army had filled many of these slots thus far by temporarily borrowing staff from other positions. (Figure 1 reports 832 positions permanently assigned, 451 filled by temporarily borrowed staff, and 1,127 unfilled.) The Warrior Transition Unit staffing shortages are significant at many locations. As of mid-September, 17 of the 32 units had less than 50 percent of staff in place in one or more critical positions. (See table 2.) Consequently, 46 percent of the Army’s returning servicemembers who were eligible to be assigned to a unit had not been assigned, due in part to these staffing shortages. As a result, these servicemembers’ care was not being coordinated through the triad. Army officials reported that their goal is to have all Warrior Transition Units in place and fully staffed by January 2008. The Senior Oversight Committee’s approach for providing a continuum of care includes establishment of recovery coordinators and recovery plans, as recommended by the Dole-Shalala Commission. This approach is intended to complement the military services’ existing case management approaches and place the recovery coordinators at a level above case managers, with emphasis on ensuring a seamless transition between DOD and VA. 
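The Warrior Transition Unit staffing figures reported above can be checked with a short calculation. The sketch below is illustrative only and assumes that the three reported categories (permanently assigned, temporarily borrowed, and unfilled) together account for all required positions.

```python
# Illustrative check of the Warrior Transition Unit staffing figures
# reported for mid-September 2007 (assumes the three categories sum
# to the total required positions).
permanently_assigned = 832
temporarily_borrowed = 451
unfilled = 1127

filled = permanently_assigned + temporarily_borrowed
required = filled + unfilled

print(f"Total required positions: {required}")                                   # 2410
print(f"Positions filled: {filled} ({filled / required:.0%})")                   # ~53%, i.e., "about half"
print(f"Filled without borrowed staff: {permanently_assigned / required:.0%}")   # ~35%
```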
The recovery coordinator is expected to be the patient’s and family’s single point of contact for making sure each servicemember receives the care outlined in the servicemember’s recovery plan—a plan to guide and support the servicemember through the phases of medical care, rehabilitation, and disability evaluation to community reintegration. The Senior Oversight Committee has indicated that DOD and VA will establish a joint Recovery Coordinator Program no later than October 15, 2007. At the time of our review, the committee was determining the details of the program. For example, the Dole-Shalala Commission recommended this approach for every seriously injured servicemember, and the Senior Oversight Committee workgroup on case management was developing criteria for determining who is “seriously injured.” The workgroup was also determining the role of the recovery coordinators—how they will be assigned to servicemembers and how many are needed, which will ultimately determine what the workload for each will be. The Senior Oversight Committee has, however, indicated that the positions will be filled with VA staff. A representative of the Senior Oversight Committee told us that the recovery coordinators would not be staffed from the U.S. Public Health Service Commissioned Corps, as recommended by the Dole-Shalala Commission. The official told us that it is appropriate for VA to staff these positions because VA ultimately provides the most care for servicemembers over their lifetime. Moreover, Senior Oversight Committee officials told us that depending on how many recovery coordinators are ultimately needed, VA may face significant human capital challenges in identifying and training individuals for these positions, which are anticipated to be complex and demanding. As we have previously reported, providing timely and consistent disability decisions is a challenge for both DOD and VA. In a March 2006 report about the military disability evaluation system, we found that the services were not meeting DOD timeliness goals for processing disability cases and used different policy, guidance, and processes for aspects of the system, and that neither DOD nor the services systematically evaluated the consistency of disability decisions. On multiple occasions, we have also identified long-standing challenges for VA in reducing its backlog of claims and improving the accuracy and consistency of its decisions. The controversy over conditions at Walter Reed and the release of subsequent reports raised the visibility of problems in the military services’ disability evaluation system. In a March 2007 report, the Army Inspector General identified numerous issues with the Army Physical Disability Evaluation System. These findings included a failure to meet timeliness standards for determinations, inadequate training of staff involved in the process, and servicemember confusion about the disability rating system. Similarly, in recently issued reports, the Task Force on Returning Global War on Terror Heroes, the Independent Review Group, and the Dole-Shalala Commission found that DOD’s disability evaluation system often generates long delays in disability determinations and creates confusion among servicemembers and their families. Also, they noted significant disparities in the implementation of the disability evaluation system among the services, and in the purpose and outcome of disability evaluations between DOD and VA. 
Two reports also noted the adversarial nature of DOD’s disability evaluation system, as servicemembers endeavor to reach a rating threshold that entitles them to lifetime benefits. In addition to these findings about current processes, the Dole-Shalala Commission questioned DOD’s basic role in making disability payments to veterans and recommended that VA assume sole responsibility for disability compensation for veterans. In response to the Army Inspector General’s findings, the Army made near-term operational improvements. For example, the Army developed several initiatives to streamline its disability evaluation system and address bottlenecks. These initiatives include reducing the caseloads of evaluation board liaisons who help servicemembers navigate the disability evaluation system. In addition, the Army developed and conducted the first certification training for evaluation board liaisons. Furthermore, the Army increased outreach to servicemembers to address confusion about the process. For example, it initiated briefings conducted by evaluation board liaisons and soldiers’ counsels to educate servicemembers about the process and their rights. The Army also initiated an online tool that enables servicemembers to check the status of their case during the evaluation process. We were not able to fully assess the implementation and effectiveness of these initiatives because some changes are still in process and complete data are not available. To address more systemic concerns about the timeliness and consistency of DOD’s and VA’s disability evaluation systems, DOD and VA are planning to pilot a joint disability evaluation system. DOD and VA are reviewing multiple options that incorporate variations of the following three elements: (1) a single, comprehensive medical examination to be used by both DOD and VA in their disability evaluations; (2) a single disability rating performed by VA; and (3) a DOD-level evaluation board for adjudicating servicemembers’ fitness for duty. For example, in one option, the DOD-level evaluation board makes fitness for duty determinations for all of the military services; whereas in another option, the services make fitness for duty determinations, and the DOD-level board adjudicates appeals of these determinations. Another open question is whether DOD or VA would conduct the comprehensive medical examination. Table 3 summarizes four pilot options under consideration by DOD and VA. As recent pilot planning exercises verified, in addition to agreeing on which pilot option to implement, DOD and VA must address several key design issues before the pilot can begin. For example, it has not been decided how DOD will use VA’s disability rating to determine military disability benefits for servicemembers in the pilot. In addition, DOD and VA have not finalized a set of performance metrics to assess the effect of the piloted changes. DOD and VA officials had hoped to begin the pilot on August 1, 2007, but the intended start date slipped as agency officials took steps to further consider alternatives and address other important questions related to recent and expected events that may add further complexity to the pilot development process. For example, the Senior Oversight Committee may either choose or be directed by the Congress to pilot the Dole-Shalala recommendation that only VA and not DOD provide disability payments to veterans. 
Implementing this recommendation would require a change to current law, and could affect whether or how the agencies implement key pilot elements under consideration. In addition, the Veterans’ Disability Benefits Commission, which is scheduled to report in October 2007, may recommend changes that could also influence the pilot’s structure. Further, the Congress is considering legislation that may require DOD and VA to conduct multiple, alternative disability evaluation pilots. DOD and VA face other critical challenges in creating a new disability evaluation system. For example, DOD is challenged to overcome servicemembers’ distrust of a disability evaluation process perceived to be adversarial. Implementing a pilot without adequately considering alternatives or addressing critical policy and procedural details may feed that distrust because DOD and VA plan to pilot the new system with actual servicemembers. The agencies also face staffing and training challenges to conduct timely and consistent medical examinations and disability evaluations. Both the Independent Review Group and the Dole-Shalala Commission recommended that only VA establish disability ratings. However, as we noted above, VA is dealing with its own long-standing challenges in providing veterans with timely and consistent decisions. Similarly, if VA becomes responsible for servicemembers’ comprehensive physical examinations, it would face additional staffing and training challenges, at a time when it is already addressing concerns about the timeliness and quality of its examinations. Further, while having a single disability evaluation could ensure more consistent disability ratings, VA’s Schedule for Rating Disabilities is outdated because it does not adequately reflect changes in factors such as labor market conditions and assistive technologies on disabled veterans’ ability to work. As we have reported, the nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment. Yet VA’s disability program remains mired in concepts from the past, particularly the concept that impairment equates to an inability to work. The three independent review groups examining the deficiencies found at Walter Reed identified a range of complex problems associated with DOD and VA’s screening, diagnosis, and treatment of TBI and PTSD, signature injuries of recent conflicts. Both conditions are sometimes referred to as “invisible injuries” because outwardly the individual’s appearance is just as it was before the injury or onset of symptoms. In terms of mild TBI, there may be no observable head injury and symptoms may overlap with those associated with PTSD. With respect to PTSD, there is no objective diagnostic test and its symptoms can sometimes be associated with other psychological conditions (e.g., depression). Recommendations from the review groups examining these areas included better coordination of DOD and VA research and practice guidelines and hiring and retaining qualified health professionals. However, according to Army officials and the Independent Review Group report, obtaining qualified health professionals, such as clinical psychologists, is a challenge, which is due to competition with private sector salaries and difficulty recruiting for certain geographical locations. 
The Dole-Shalala Commission noted that while VA is considered a leader in PTSD research and treatment, knowledge generated through research and clinical experience is not systematically disseminated to all DOD and VA providers of care. Both the Army and the Senior Oversight Committee are working to address this broad range of issues. (See table 4.) The Army, through its Medical Action Plan, has policies in place requiring all servicemembers sent overseas to a war zone to receive training on recognizing the symptoms of mild TBI and PTSD. The Army is also exploring ways to track events on the battlefield, such as blasts, that may result in TBI or PTSD. In addition, the Army recently developed policies to provide mild TBI and PTSD training to all social workers, nurse case managers, psychiatric nurses, and psychiatric nurse practitioners to better identify these conditions. As of September 13, 2007, 6 of the Army’s 32 Warrior Transition Units had completed training for all of these staff. A Senior Oversight Committee workgroup on TBI and PTSD is working to ensure health care providers have education and training on screening, diagnosing, and treating both mild TBI and PTSD, mainly by developing a national Center of Excellence as recommended by the three review groups. This Center of Excellence is expected to combine experts and resources from all military services and VA to promote research, awareness, and best practices on mild TBI as well as PTSD and other psychological health issues. A representative of the Senior Oversight Committee workgroup on TBI and psychological health told us that the Center of Excellence would include the existing Defense and Veterans Brain Injury Center—a collaboration among DOD, VA, and two civilian partners that focuses on TBI treatment, research, and education. DOD and VA have been working for almost 10 years to facilitate the exchange of medical information. However, the three independent review groups identified the need for DOD and VA to further improve and accelerate efforts to share data across the departments. Specifically, the Dole-Shalala Commission indicated that DOD and VA must move quickly to get clinical and benefit data to users, including making patient data immediately viewable by any provider, allied health professional, or program administrator who needs the data. Furthermore, in July 2007, we reported that although DOD and VA have made progress in both their long-term and short-term initiatives to share health information, much work remains to achieve the goal of a seamless transition between the two departments. While pursuing their long-term initiative to develop a common health information system that would allow the two-way exchange of computable health data, the two departments have also been working to share data in their existing systems. See table 5 for selected efforts under way by the Army and Senior Oversight Committee to improve data sharing between DOD and VA. As part of the Army Medical Action Plan, the Army has taken steps to facilitate the exchange of data between its military treatment facilities and VA. For example, the Army Medical Department is developing a memorandum of understanding between the Army and VA that would allow VA access to data on severely injured servicemembers who are being transferred to a VA polytrauma center. 
The memorandum of understanding would also allow VA’s Veterans Health Administration and Veterans Benefits Administration access to data in a servicemember’s medical record that are related to a disability claim the servicemember has filed with VA. Army officials told us that the Army’s medical records are part paper (hard copy) and part electronic, and this effort would provide the VA access to the paper data until the capability to share the data electronically is available at all sites. Given that DOD and VA already have a number of efforts under way to improve data sharing between the two departments, the Senior Oversight Committee, through its data sharing workgroup, has been looking for opportunities to accelerate the departments’ sharing initiatives that are already planned or in process and to identify additional data sharing requirements that have not been clearly articulated. For example, the Senior Oversight Committee has approved several policy changes in response to the Dole-Shalala Commission, one of which requires DOD and VA to ensure that all essential health and administrative data are made available and viewable to both agencies, and that progress is reported by a scorecard, by October 31, 2008. A representative of the data sharing workgroup told us that the departments are achieving incremental increases to data sharing capabilities and plan to have all essential health data—such as outpatient pharmacy, allergy, laboratory results, radiology reports, and provider notes—viewable by all DOD and VA facilities by the end of December 2007. Although the agencies have recently experienced delays in efforts to exchange data, the representative said that the departments are on track to meet all the timelines established by the Senior Oversight Committee. A Senior Oversight Committee workgroup on data sharing has also been coordinating with other committee workgroups on their information technology needs. Although workgroup officials told us that they have met numerous times with the case management and disability evaluation systems workgroups to discuss their data sharing needs, they have not begun implementing necessary systems because they are dependent on the other workgroups to finalize their information technology needs. For example, the Senior Oversight Committee has required DOD and VA to establish a plan for information technology support of the recovery plan to be used by recovery coordinators, which integrates essential clinical (e.g., medical care) and nonclinical aspects (e.g., education, employment, disability benefits) of recovery, no later than November 1, 2007. However, this cannot be done until the case management workgroup has identified the components and information technology needs of these clinical and nonclinical aspects, and as of early September this had not been done. Data sharing workgroup representatives indicated that the departments’ data sharing initiatives will be ongoing because medications, diagnoses, procedures, standards, business practices, and technology are constantly changing, but the departments expect to meet most of the data sharing needs of patients and providers by end of fiscal year 2008. 
Our preliminary observations are that fixing the long-standing and complex problems spotlighted in the wake of Walter Reed media accounts as expeditiously as possible is critical to ensuring high-quality care for our returning servicemembers, and success will ultimately depend on sustained attention, systematic oversight by DOD and VA, and sufficient resources. Efforts thus far have been on separate but related tracks, with the Army seeking to address service-specific issues while DOD and VA are working together to address systemic problems. Many challenges remain, and critical questions remain unanswered. Among the challenges is how the efforts of the Army—which has the bulk of the returning servicemembers needing medical care—will be coordinated with the broader efforts being undertaken by DOD and VA. The centerpiece of the Army’s effort is its Medical Action Plan, and the success of the plan hinges on staffing the newly created Warrior Transition Units. Permanently filling these slots may prove difficult, and borrowing personnel from other units has been a temporary fix but it is not a long-term solution. The Army can look to the private sector for some skills, but it must compete for personnel in a civilian market that is vying for medical professionals with similar skills and training. Perhaps one of the most complex efforts under way is that of redesigning DOD’s disability evaluation system. Delayed decisions, confusing policies, and the perception that DOD and VA disability ratings result in inequitable outcomes have eroded the credibility of the system. Thus, it is imperative that DOD and VA take prompt steps to address fundamental system weaknesses. However, as we have noted, key program design and operational policy questions must be addressed to ensure that any proposed system redesign has the best chance for success and that servicemembers and veterans receive timely, accurate, and consistent decisions. This will require careful study of potential options, a comprehensive assessment of outcome data associated with the pilot, proper metrics to gauge success, and an evaluation mechanism to ensure needed adjustments are made to the process along the way. Failure to properly consider alternatives or address critical policy and procedural details could exacerbate delays and confusion for servicemembers, and potentially jeopardize the system’s successful transformation. Mr. Chairman, this completes my prepared remarks. We would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact John H. Pendleton at (202) 512-7114 or pendletonj@gao.gov or Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made major contributions to this report are listed in appendix II. In the aftermath of deficiencies identified at Walter Reed Medical Center, three separate review groups—the President’s Commission on Care for America’s Returning Wounded Warriors, commonly referred to as the Dole-Shalala Commission; the Independent Review Group, established by the Secretary of Defense; and the President’s Task Force on Returning Global War on Terror Heroes—investigated the factors that may have led to these problems. Selected findings of each report are summarized in table 6. 
In addition to the contact named above, Bonnie Anderson, Assistant Director; Michele Grgich, Assistant Director; Jennie Apter; Janina Austin; Joel Green; Christopher Langford; Chan My Sondhelm; Barbara Steel-Lowney; and Greg Whitney, made key contributions to this statement. DOD Civilian Personnel: Medical Policies for Deployed DOD Federal Civilians and Associated Compensation for Those Deployed. GAO-07-1235T. Washington, D.C.: September 18, 2007. Global War on Terrorism: Reported Obligations for the Department of Defense. GAO-07-1056R. Washington, D.C.: July 26, 2007. Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Remain Far from Having Comprehensive Electronic Medical Records. GAO-07-1108T. Washington, D.C.: July 18, 2007. Defense Health Care: Comprehensive Oversight Framework Needed to Help Ensure Effective Implementation of a Deployment Health Quality Assurance Program. GAO-07-831. Washington, D.C.: June 22, 2007. DOD’s 21st Century Health Care Spending Challenges, Presentation for the Task Force on the Future of Military Health Care. Statement delivered by David M. Walker, Comptroller General of the United States. GAO-07-766-CG. Washington, D.C.: April 18, 2007. Veterans’ Disability Benefits: Long-Standing Claims Processing Challenges Persist. GAO-07-512T. Washington, D.C.: March 7, 2007. DOD and VA Health Care: Challenges Encountered by Injured Servicemembers during Their Recovery Process. GAO-07-589T. Washington, D.C.: March 5, 2007. VA Health Care: Spending for Mental Health Strategic Plan Initiatives Was Substantially Less Than Planned. GAO-07-66. Washington, D.C.: November 21, 2006. VA and DOD Health Care: Efforts to Provide Seamless Transition of Care for OEF and OIF Servicemembers and Veterans. GAO-06-794R. Washington, D.C.: June 30, 2006. Post-Traumatic Stress Disorder: DOD Needs to Identify the Factors Its Providers Use to Make Mental Health Evaluation Referrals for Servicemembers. GAO-06-397. Washington, D.C.: May 11, 2006. Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362. Washington, D.C.: March 31, 2006. VA and DOD Health Care: Opportunities to Maximize Resource Sharing Remain. GAO-06-315. Washington, D.C.: March 20, 2006. VA and DOD Health Care: VA Has Policies and Outreach Efforts to Smooth Transition from DOD Health Care, but Sharing of Health Information Remains Limited. GAO-05-1052T. Washington, D.C.: September 28, 2005. Federal Disability Assistance: Wide Array of Programs Needs to be Examined in Light of 21st Century Challenges. GAO-05-626. Washington, D.C.: June 2, 2005. Veterans’ Disability Benefits: Claims Processing Problems Persist and Major Performance Improvements May Be Difficult. GAO-05-749T. Washington, D.C.: May 26, 2005. DOD and VA: Systematic Data Sharing Would Help Expedite Servicemembers’ Transition to VA Services. GAO-05-722T. Washington, D.C.: May 19, 2005. VA Health Care: VA Should Expedite the Implementation of Recommendations Needed to Improve Post-Traumatic Stress Disorder Services. GAO-05-287. Washington, D.C.: February 14, 2005. VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In February 2007, a series of Washington Post articles disclosed troublesome deficiencies in the provision of outpatient services at Walter Reed Army Medical Center, raising concerns about the care for returning servicemembers. These deficiencies included a confusing disability evaluation system and servicemembers in outpatient status for months and sometimes years without a clear understanding about their plan of care. The reported problems at Walter Reed prompted broader questions about whether the Department of Defense (DOD) as well as the Department of Veterans Affairs (VA) are fully prepared to meet the needs of returning servicemembers. In response to the deficiencies reported at Walter Reed, the Army took a number of actions and DOD formed a joint DOD-VA Senior Oversight Committee. This statement provides information on the near-term actions being taken by the Army and the broader efforts of the Senior Oversight Committee to address longer-term systemic problems that impact health care and disability evaluations for returning servicemembers. Preliminary observations in this testimony are based largely on documents obtained from and interviews with Army officials, and DOD and VA representatives of the Senior Oversight Committee, as well as on GAO's extensive past work. We discussed the facts contained in this statement with DOD and VA. While efforts are under way to respond to both Army-specific and systemic problems, challenges are emerging such as staffing new initiatives. The Army and the Senior Oversight Committee have efforts under way to improve case management--a process intended to assist returning servicemembers with management of their care from initial injury through recovery. Case management is especially important for returning servicemembers who must often visit numerous therapists, providers, and specialists, resulting in differing treatment plans. The Army's approach for improving case management for its servicemembers includes developing a new organizational structure--a Warrior Transition Unit, in which each servicemember would be assigned to a team of three key staff--a physician care manager, a nurse case manager, and a squad leader. As the Army has sought to staff its Warrior Transition Units, challenges to staffing critical positions are emerging. For example, as of mid-September 2007, over half the U.S. Warrior Transition Units had significant shortfalls in one or more of these critical positions. The Senior Oversight Committee's plan to provide a continuum of care focuses on establishing recovery coordinators, which would be the main contact for a returning servicemember and his or her family. This approach is intended to complement the military services' existing case management approaches and place the recovery coordinators at a level above case managers, with emphasis on ensuring a seamless transition between DOD and VA. At the time of GAO's review, the committee was still determining how many recovery coordinators would be necessary and the population of seriously injured servicemembers they would serve. As GAO and others have previously reported, providing timely and consistent disability decisions is a challenge for both DOD and VA. 
To address identified concerns, the Army has taken steps to streamline its disability evaluation process and reduce bottlenecks. The Army has also developed and conducted the first certification training for evaluation board liaisons who help servicemembers navigate the system. To address more systemic concerns, the Senior Oversight Committee is planning to pilot a joint disability evaluation system. Pilot options may incorporate variations of three key elements: (1) a single, comprehensive medical examination; (2) a single disability rating done by VA; and (3) a DOD-level evaluation board for adjudicating servicemembers' fitness for duty. DOD and VA officials hoped to begin the pilot in August 2007, but postponed implementation in order to further review options and address open questions, including those related to proposed legislation. Fixing these long-standing and complex problems as expeditiously as possible is critical to ensuring high-quality care for returning servicemembers, and success will ultimately depend on sustained attention, systematic oversight by DOD and VA, and sufficient resources. |
The size and composition of the nuclear stockpile have evolved as a consequence of the global security environment and the national security needs of the United States. According to NNSA’s Stockpile Stewardship and Management Plan for Fiscal Year 2016, the stockpile peaked at 31,255 weapons in 1967, and in September 2013, the stockpile consisted of 4,804 weapons—the smallest since the Eisenhower Administration. The New Strategic Arms Reduction Treaty between the United States and Russia, which entered into force on February 5, 2011, is to reduce the operationally deployed stockpile even further by 2018. Weapons that were originally produced on average 25 to 30 years ago are now well past their original design life of approximately 15 to 20 years. In addition, no new nuclear weapons have been developed since the closing days of the Cold War. Before the end of the U.S. underground nuclear testing program in 1992, developing and maintaining the nuclear stockpile were largely accomplished by a continual cycle of weapon design, weapon testing, and the incorporation of lessons learned in the next design. A critical step in this process was conducting underground nuclear explosive tests. Since 1992, the United States has observed a self-imposed moratorium on nuclear explosive testing and has, instead, relied on a program of nonnuclear testing and modeling to ensure the reliability, safety, and effectiveness of the stockpile. While the United States maintains the policy of no new nuclear testing or weapon designs, and the stockpile is reduced in absolute numbers, confidence in the existing stockpile and the effectiveness of the deterrent must remain high to meet U.S. national security needs. For this reason, the United States is continuing to modernize the existing stockpile through life-extension programs (LEP). LEPs are modifications that refurbish warheads or bombs by replacing aged components with the intent of extending the service life of weapons by 20 to 30 years, while increasing safety, improving security, and addressing defects. NNSA’s Office of Defense Programs is responsible for the manufacture, maintenance, refurbishment, surveillance, and dismantlement of nuclear weapons. Most modern nuclear weapons consist of three sets of materials and components—a primary, a secondary, and a set of nonnuclear components. When detonated, the primary and secondary components, which together are referred to as the weapon’s “nuclear explosive package,” produce the weapon’s explosive force, or “yield.” Some nonnuclear components—collectively called “limited-life components”—have shorter service lives than the weapons themselves and, therefore, must be periodically replaced. There are two key efforts in the stockpile surveillance program—Core Surveillance and the Enhanced Surveillance Program. NNSA’s Core Surveillance, in one form or the other, has been in place for nearly 60 years. In contrast, the Enhanced Surveillance Program was established in the mid-1990s to assist in surveillance and evaluation of the stockpile primarily by identifying aging signs, developing aging models to predict the impact of aging on the stockpile, and developing diagnostic tools. Since the late 1950s, Core Surveillance has focused on sampling and testing the nuclear stockpile to provide continuing confidence in its reliability. Core Surveillance conducts tests that provide current information—essentially a snapshot of the current condition of the stockpile—for the annual assessment of the stockpile. 
According to NNSA officials, Core Surveillance focuses mainly on identifying the “birth defects” of a system—the manufacturing defects in current components and materials. Under Core Surveillance, NNSA’s national security laboratories and production plants are to evaluate the current state of weapons and weapon components for the attributes of function, condition, material properties, and chemical composition through the following activities: System-Level Laboratory Testing. For such tests, units from each type of stockpiled weapon are chosen annually, either randomly or specifically, and sent to the Pantex Plant in Texas for disassembly, inspection, reconfiguration, and testing by the national security laboratories. System-Level Flight Testing. These tests drop or launch a weapon with its nuclear material removed. NNSA coordinates flight testing with DOD, which is responsible for providing the military assets (e.g., aircraft and missiles) needed to drop or launch a weapon. Component and Material Testing. These tests are conducted on nuclear and nonnuclear components and materials by both the national security laboratories and the production plants that manufactured them. Organizationally, Core Surveillance is part of NNSA’s Directed Stockpile Work Program. This program also conducts, among other things, maintenance of active weapons in the stockpile, LEPs, and dismantlement and disposition of retired weapons. Core Surveillance activities were funded at approximately $217 million in fiscal year 2016. According to NNSA documents, through scientific and engineering efforts, the Enhanced Surveillance Program enables the agency to better predict where defects might occur in the future to help determine useful lifetimes of weapons and certain key components, such as switches or detonators, and to help plan when replacement is needed. The creation of the Enhanced Surveillance Program in the mid-1990s came at a time when concerns were growing (1) with an aging stockpile and (2) that Core Surveillance tended to produce diminishing returns. More specifically, in a 2006 study, NNSA and the Sandia National Laboratories found that as more is known about manufacturing and current aging defects—the focus of Core Surveillance—fewer and fewer manufacturing-related defects are discovered. This 2006 study suggested a different approach to surveillance for aging weapons. According to NNSA officials, the Enhanced Surveillance Program conducts three key activities: Aging studies. Enhanced Surveillance Program aging studies support decisions on when and whether to reuse or replace weapons components and materials. As part of these studies the program identifies and develops new materials and components that can substitute for materials that are no longer available; identifies inadequately performing components; and assesses performance of existing components to assist in weapons life-extension decisions. For example, to assist in making decisions on the life extension of weapons, the Enhanced Surveillance Program assessed the feasibility of reusing certain components. Specifically, according to NNSA documents, in fiscal year 2014, the Enhanced Surveillance Program validated the reuse of a battery for one weapon through aging studies, resulting in eliminating the need and cost to redesign the part. In another example, according to NNSA officials, Enhanced Surveillance Program aging models made it possible to certify the potential reuse of a key part of the W80 warhead to allow life extension of that weapon. 
NNSA also uses information from these aging studies in LEPs to guide decisions on when future weapons modifications, alterations, and life extensions need to occur to reduce the risk of potential problems from future defects. Finally, NNSA uses information from the aging studies in the national security laboratory directors’ annual assessment of the condition of the stockpile. Computational modeling. On the basis of its aging studies and other data, the Enhanced Surveillance Program develops computational models to predict the impacts of aging on weapons components and materials. According to the Enhanced Surveillance Program’s federal program manager, computational predictive models primarily benefit weapons systems managers at the three nuclear security laboratories. The federal program manager noted that the models allow a projection of the future performance of the systems and anticipate failures with sufficient time to correct them. Diagnostic tool development. The Enhanced Surveillance Program develops diagnostic tools to support Core Surveillance and allow the evaluation of weapons without the need to dismantle and destroy them. This is important since new weapons are not being produced. One diagnostic tool developed by the program was the high-resolution computed tomography image analysis tool for a particular nuclear component, implemented in fiscal year 2009. NNSA officials said this diagnostic tool has enhanced the ability to identify potential defects or anomalies without the need to dismantle or destroy the component. Organizationally, the Enhanced Surveillance Program is a part of NNSA’s Engineering Program, which is part of NNSA’s broader research, development, test, and evaluation (RDT&E) program. The Engineering Program creates and develops tools and capabilities to support efforts to ensure weapons are safe and reliable. NNSA’s total RDT&E budget allocation for fiscal year 2016 is $1.8 billion; the Enhanced Surveillance Program budget allocation for fiscal year 2016 is approximately $39 million. According to agency documents, because of long-standing concerns over the stockpile surveillance program, NNSA launched its 2007 initiative to, among other things, better integrate stockpile surveillance program activities. The concerns date back to the mid-1990s. For example, our July 1996 report on the surveillance program found the agency was behind in conducting surveillance tests and did not have written plans for addressing the backlog. A January 2001 internal NNSA review of the surveillance program made several recommendations to improve surveillance, including addressing the selection and testing approach for weapons and components, developing new tools to allow for nondestructive testing of the stockpile, improving aging and performance models, and achieving closer coordination and integration of Core Surveillance and the Enhanced Surveillance Program. Further, an April 2004 review of the Enhanced Surveillance Program by DOE’s Office of Inspector General found that NNSA experienced delays in completing some Enhanced Surveillance Program milestones and was at risk of not meeting future milestones. The report noted that such delays could result in NNSA’s being unprepared to identify age-related defects in weapons and impact the agency’s ability to annually assess the condition of the stockpile. Finally, an October 2006 DOE Office of Inspector General report found that NNSA had not eliminated its surveillance testing backlog. 
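The computational aging models described above are not specified in this statement, but the basic idea of projecting measured degradation forward to estimate when a component would fall below an acceptable performance level can be illustrated with a deliberately simplified sketch. All values, the linear trend, and the threshold below are hypothetical assumptions chosen only for illustration; they are not NNSA data or NNSA's methodology.

```python
# Greatly simplified illustration of projecting component aging:
# fit a trend to hypothetical surveillance measurements of a material
# property and estimate when it would fall below a performance threshold.
# All values are invented for illustration; actual predictive models are
# far more complex.

ages = [5, 10, 15, 20, 25]                  # component age at measurement (years)
strength = [100.0, 97.8, 95.9, 94.1, 92.0]  # hypothetical property (% of original)
threshold = 85.0                            # hypothetical minimum acceptable value

# Ordinary least-squares slope and intercept for a linear degradation trend.
n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(strength) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, strength))
den = sum((x - mean_x) ** 2 for x in ages)
slope = num / den
intercept = mean_y - slope * mean_x

# Age at which the fitted trend crosses the threshold.
predicted_life = (threshold - intercept) / slope
print(f"Degradation rate: {slope:.2f} points per year")
print(f"Projected age at threshold: {predicted_life:.0f} years")
```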
Faced with this criticism, a growing backlog of Core Surveillance’s traditional surveillance testing, budgetary pressures, and an aging stockpile, NNSA developed its 2007 initiative. According to its project plan, the 2007 initiative sought to establish clear requirements for determining stockpile surveillance needs and to integrate all surveillance activities—to include Core Surveillance and the Enhanced Surveillance Program—through a strengthened management structure. In addition, NNSA sought to create a more flexible, cost-effective, and efficient surveillance program by, among other things, dismantling fewer weapons and increasing the understanding of the impact of aging on weapons, components, and materials by being able to predict the effects of aging activities. According to an NNSA official who previously oversaw surveillance activities, because of the nature of its work, the Enhanced Surveillance Program was intended to be a key part of this transformation effort. More specifically, according to the 2007 initiative project plan, one proposal was to increase evaluations of aging effects on nonnuclear weapons components and materials. The 2007 initiative project plan noted that more than 100 such evaluations would be undertaken at the Sandia National Laboratories in fiscal year 2007, the first year of the initiative’s implementation. In addition, the 2007 initiative project plan stated that the Enhanced Surveillance Program would continue to assess the viability of diagnostic tools in support of Core Surveillance. NNSA implemented some aspects of its 2007 initiative but did not fully implement its envisioned role for the Enhanced Surveillance Program and has not developed a long-term strategy for the program. NNSA has substantially reduced the program’s funding since 2007 and recently refocused some of its RDT&E programs on multiple weapon life- extension efforts and supporting efforts. A February 2010 internal NNSA review noted that NNSA had implemented some important aspects of the 2007 initiative. For example, NNSA updated guidance laying out processes for identifying surveillance requirements. In addition, the agency had implemented a governance structure consisting of working committees to harmonize requirements between Core Surveillance and the Enhanced Surveillance Program. Furthermore, the agency had created a senior-level position to lead the overall surveillance effort and better integrate Core Surveillance and the Enhanced Surveillance Program. However, according to NNSA documents and officials, the agency did not fully implement its envisioned role for the Enhanced Surveillance Program. Instead of increasing the role of the program by conducting the range of aging studies as envisioned, NNSA budgeted less funding to it, delayed some planned work, and transferred work to other NNSA programs. The amount of funding the agency budgeted to the Enhanced Surveillance Program declined from $87 million in fiscal year 2007—the first year of the 2007 initiative’s implementation—to $79 million in fiscal year 2008. NNSA has continued to budget less funding to the Enhanced Surveillance Program. Funding dropped to approximately $38 million in fiscal year 2015, a reduction of more than 50 percent from fiscal year 2007. While the Enhanced Surveillance Program has experienced reductions in funding and scope since the 2007 initiative, Core Surveillance funding has generally kept pace with required stockpile testing, according to an NNSA official. 
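The Enhanced Surveillance Program reduction of more than 50 percent cited above follows directly from the rounded budget figures given in the text; a minimal check:

```python
# Quick check of the Enhanced Surveillance Program funding reduction
# using the rounded budget figures cited in the text.
fy2007 = 87_000_000   # budgeted funding, fiscal year 2007
fy2015 = 38_000_000   # budgeted funding, fiscal year 2015

reduction = (fy2007 - fy2015) / fy2007
print(f"Reduction from FY2007 to FY2015: {reduction:.0%}")  # ~56%, i.e., more than 50 percent
```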
After an initial funding reduction from $195 million in fiscal year 2007 to $158 million in fiscal year 2009, NNSA increased the budgeted funding to Core Surveillance in 2010 and has stabilized its funding levels since then. Agency officials said they believe the Core Surveillance program is now generally stable. Figure 1 shows funding levels for the two programs for fiscal years 2007 through 2015. NNSA also delayed some key Enhanced Surveillance Program activities during this time. For example, NNSA did not complete the proposed evaluations of the effects of aging on nonnuclear components and materials that were to be largely carried out at the Sandia National Laboratories. These evaluations—which NNSA viewed as an important part of the Enhanced Surveillance Program when it was being managed as a campaign, according to an NNSA official—were initiated in fiscal year 2007 and originally estimated to be completed by 2012. However, a 2010 NNSA review concluded these evaluations had not occurred. According to a contract representative at the Sandia National Laboratories overseeing Enhanced Surveillance Program work, these evaluations no longer have an estimated time frame for completion and their systematic completion, as was once envisioned, is no longer a program goal. Furthermore, while the program has developed some diagnostic tools to aid Core Surveillance, such as high-resolution computed tomography image analysis, NNSA officials and the NNSA fiscal year 2016 budget request said that other efforts to develop diagnostic tools had been deferred because of lack of funding. In addition, NNSA transferred some Enhanced Surveillance Program work to other programs. For example, NNSA transferred experiments (and related funding) to measure aging effects and to provide lifetime assessments on the plutonium pits—a key nuclear weapons component—from the Enhanced Surveillance Program to NNSA's Science Campaign in fiscal year 2009. According to the Enhanced Surveillance Program's federal program manager, NNSA has budgeted reduced funding because of competing internal priorities. The federal program manager said that the Enhanced Surveillance Program has to compete for funding with other internal high-priority activities, such as LEPs and infrastructure projects in a climate of overall agency funding constraints caused by, among other things, internal agency pressures to achieve budgetary savings to enable modernization of the stockpile and other priorities. In addition, Core Surveillance's importance in detecting "birth defects" of weapons—the manufacturing defects or signs of aging in current components and materials—has increased, according to NNSA officials, as NNSA has undertaken and completed more LEPs. In fiscal year 2016, NNSA shifted the focus of some of its RDT&E efforts, including efforts in the Enhanced Surveillance Program, to meet the immediate needs of its ongoing and planned LEPs and related supporting efforts. According to NNSA officials, the funding and scope reductions in the Enhanced Surveillance Program reflect ongoing internal prioritization tensions within NNSA over meeting immediate needs—such as understanding current stockpile condition using traditional surveillance methods—and investing in the science, technology, and engineering activities needed to understand the impacts of aging on weapons and their components in the future.
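To make the scale of these funding shifts concrete, the dollar figures cited above can be expressed as percent reductions. The following is a minimal illustrative sketch in Python that uses only the rounded amounts stated in this report; it is not drawn from NNSA budget systems, and the resulting percentages are approximate.

```python
# Percent reductions in budgeted funding, using only the rounded dollar
# figures cited in this report (in millions of dollars).
esp = {2007: 87, 2008: 79, 2015: 38}    # Enhanced Surveillance Program
core = {2007: 195, 2009: 158}           # Core Surveillance

def percent_change(start, end):
    """Percent change from a starting value to an ending value."""
    return (end - start) / start * 100

# Enhanced Surveillance Program: a reduction of more than 50 percent.
print(f"ESP, FY2007 to FY2015: {percent_change(esp[2007], esp[2015]):.0f}%")     # about -56%

# Core Surveillance: an initial reduction, with funding stabilized afterward.
print(f"Core, FY2007 to FY2009: {percent_change(core[2007], core[2009]):.0f}%")  # about -19%
```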
The Enhanced Surveillance Program's federal program manager, as well as other stakeholders such as the JASON group of experts, noted that funding changes may have a larger impact on the program than is immediately apparent. NNSA officials said that the program plays a considerably broader role in assessing the condition of the stockpile than its name suggests and supports a wide variety of efforts, including the statutorily required annual assessment process, weapons life extension and modernization programs, and ongoing efforts to maintain weapons systems. According to a 2014 NNSA analysis conducted by the Enhanced Surveillance Program's federal program manager, slightly less than 15 percent of the program's fiscal year 2014 budget allocation supported the development of diagnostic tools largely for Core Surveillance. About half of the program's fiscal year 2014 budget allocation went to conducting aging studies, predictive modeling, and component and material evaluation studies that may support Core Surveillance but also benefit weapons life extension and modernization programs and ongoing efforts to maintain weapons systems, according to agency officials. The analysis found that about one-third of the Enhanced Surveillance Program's fiscal year 2014 budget allocation went to activities supporting the annual assessment process and ongoing or planned LEPs. As of April 2016, NNSA was no longer pursuing the vision for the Enhanced Surveillance Program contained in the 2007 initiative and did not have a current long-term strategy for the program. Specifically, the fiscal year 2017 Stockpile Stewardship and Management Plan noted that NNSA refocused all of its RDT&E engineering activities—including the activities within the Enhanced Surveillance Program—on supporting more immediate stockpile needs and, according to the program's federal program manager, NNSA has not developed a corresponding long-term strategy for the program. Enhanced Surveillance Program officials continue to focus on year-to-year management of the program under reduced funding levels to maintain key stockpile assessment capabilities, such as supporting Core Surveillance activities, the annual assessment process, and LEPs. Our previous work has demonstrated that a long-term strategy is particularly important for technology-related efforts such as the Enhanced Surveillance Program. Specifically, our April 2013 report found that for technology-related efforts, without a long-term strategy that provides an overall picture of what an agency is investing in, it is difficult for Congress and other decision makers to understand up front what they are funding and what benefits they can expect. In 1993, GPRA established a system for agencies to set goals for program performance and to measure results. GPRAMA, which amended GPRA, requires, among other things, that federal agencies develop long-term strategic plans that include agencywide goals and strategies for achieving those goals. Our body of work has shown that these requirements also can serve as leading practices for strategic planning at lower levels within federal agencies, such as NNSA, to assist with planning for individual programs or initiatives that are particularly challenging. Taken together, the strategic planning elements established under these acts and associated Office of Management and Budget guidance, and practices we have identified, provide a framework of leading practices in federal strategic planning and characteristics of good performance measures.
For programs or initiatives, these practices include defining strategic goals, defining strategies that address management challenges and identify resources needed to achieve these goals, and developing and using performance measures to track progress in achieving these goals and to inform management decision making. Our review of NNSA documents and interviews with NNSA officials found that NNSA does not have a current long-term strategy for the Enhanced Surveillance Program that defines the program's strategic goals and incorporates these practices. Strategic goals explain the purpose of agency programs and the results—including outcomes—that they intend to achieve. The Enhanced Surveillance Program has general long-term goals, such as "developing tools and information useful to ensure the stockpile is healthy and reliable." However, the program's long-term goals do not provide outcomes that are measurable or that encompass the entirety of the program. NNSA officials told us they use annual goals, which help manage work on a yearly basis. For example, the program's goals for fiscal year 2015 included "develop, validate and deploy improved predictive capabilities and diagnostics to assess performance and lifetime for nuclear and non-nuclear materials." By managing work on an annual basis, longer-term work—such as technology development projects extended over several years—may receive a lower priority and thus, according to NNSA officials, may not be funded. In addition, NNSA funds the program's annual requirements as part of the agency's annual budget formulation process, in accordance with the agency's internal process for allocating its budget authority. For fiscal year 2016, the agency budgeted funding for the program at a slightly higher level to meet stockpile requirements, such as surveillance, and the annual assessment process. However, without a current long-term strategy for the program, NNSA cannot plan for any management challenges that threaten its ability to meet its long-term strategic goals or the resources needed to meet those goals. Moreover, NNSA program officials told us that the agency has not defined specific quantifiable performance measures that could be used to track the program's progress toward its long-term goals, as called for by leading practices. The need for NNSA to develop clear, measurable performance metrics for the Enhanced Surveillance Program has been highlighted in past reviews, namely by DOE's Inspector General and by the JASON group. For example, in a September 2012 report, the Inspector General noted that NNSA's performance measure for the program was based on the percentage of funding spent rather than on work accomplishments. Furthermore, a July 2013 memorandum from the director of the Office of Management and Budget to executive agency heads noted that, in accordance with OMB Circular A-11 and GPRAMA, agencies should describe the targeted outcomes of research and development programs using meaningful, measurable, quantitative metrics, where possible, and describe how they plan to evaluate the success of the programs. We found in past work that effective long-term planning is needed to guide decision making in programs, including laboratory research and development programs, so that congressional and other decision makers can better understand up front what they are funding and what benefits they can expect.
As NNSA has refocused its research and technology development efforts for the Enhanced Surveillance Program on LEPs and related activities, and as NNSA officials said that they recognize the need for a new long-term strategy for the program, it is an opportune time to incorporate sound federal strategic planning practices. A new strategy for the program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving goals would allow the agency to better inform long-term planning and management decision making for the program. By increasing nondestructive evaluations of nonnuclear components—work that was to be conducted under the Enhanced Surveillance Program—NNSA sought to reduce Core Surveillance's backlog of mandated system-level tests requiring the dismantling of these components. However, NNSA did not fully implement its vision for the Enhanced Surveillance Program in its 2007 initiative. For example, rather than expanding the program, NNSA budgeted reduced funding for it, and the program did not complete the proposed evaluations of the effects of aging on nonnuclear components and materials. More recently, NNSA directed its RDT&E programs, including the Enhanced Surveillance Program, to focus on LEPs and related activities. Enhanced Surveillance Program personnel have focused on year-to-year management of a program that has seen a nearly 50-percent funding reduction over the past decade and have not yet sought to redefine a strategy for how the program can best complement NNSA's other efforts to assess the condition of the stockpile, including Core Surveillance. With funding appearing to have been stabilized and with NNSA's adopting a different approach for all of its RDT&E programs, it is an opportune time to develop an Enhanced Surveillance Program strategy. A new long-term strategy for the program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving goals would allow the agency to better inform long-term planning and management decision making for the program. To help ensure that NNSA can better inform long-term planning and management decision making as well as to ensure that the Enhanced Surveillance Program complements NNSA's other efforts to assess the nuclear weapons stockpile, we recommend that the NNSA Administrator develop a long-term strategy for the Enhanced Surveillance Program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving these goals. We provided a draft of this report to the NNSA Administrator for review and comment. In his written comments, the NNSA Administrator agreed with our recommendation that the agency develop a long-term strategy for the Enhanced Surveillance Program. The Administrator noted that the growth envisioned for the Enhanced Surveillance Program did not materialize as originally intended but that the agency remains committed to the long-term success of the program. The Administrator noted that the agency estimated completing a long-term strategy for the program by June 2017.
We are sending copies of this report to the appropriate congressional committees, the NNSA Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix II. David C. Trimble, (202) 512-3841 or trimbled@gao.gov. In addition to the individual named above, Jonathan M. Gill (Assistant Director), Greg Campbell, William Horton, Nancy Kintner-Meyer, Rebecca Shea, and Kiki Theodoropoulos made key contributions to this report. | DOE participates in the annual process to assess the safety and reliability of the U.S. nuclear stockpile, which is now made up largely of weapons that are beyond their original design lifetimes. In 2007, faced with a mounting backlog of required tests, DOE's NNSA announced plans to use its Enhanced Surveillance Program for a more cost-effective surveillance approach under its 2007 Surveillance Transformation initiative. Under this initiative, predictive models were to assess the impact of aging on weapons in the stockpile without having to dismantle them as the agency has done in the past. The Senate Report accompanying the National Defense Authorization Act for Fiscal Year 2015 included a provision that GAO review the status of the Enhanced Surveillance Program. This report assesses the extent to which NNSA implemented the vision for the Enhanced Surveillance Program from its 2007 initiative and developed a long-term strategy for the program. GAO reviewed NNSA plans and budget and other documents; interviewed agency officials; and discussed surveillance issues with members of a group of nationally known scientists who advise the government and who reviewed the program in September 2013. The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) did not fully implement the Enhanced Surveillance Program as envisioned in the agency's 2007 Surveillance Transformation Project (2007 initiative) and has not developed a long-term strategy for the program. Surveillance is the process of inspecting a weapon through various tests of the weapon as a whole, the weapon's components, and the weapon's materials to determine whether they are meeting performance expectations, through dismantling the weapon or through the use of diagnostic tools. As called for in its 2007 initiative, NNSA took steps to improve the management of the overall surveillance program, which primarily tests dismantled weapons and their components, but the agency did not increase the role of the Enhanced Surveillance Program, as envisioned. The program develops computational models to predict the impact of stockpile aging; identifies aging signs; and develops diagnostic tools. Under the 2007 initiative, NNSA was to conduct more Enhanced Surveillance Program evaluations using computer models to predict the impacts of aging on specific weapon components—especially nonnuclear components and materials—and to assess the validity of more diagnostic tools. Instead of expanding the program's role, NNSA reduced program funding by more than 50 percent from fiscal year 2007 to fiscal year 2015. NNSA also delayed some key activities and reduced the program's scope during this time. 
For example, NNSA did not complete its proposed evaluations of the impact of aging on nonnuclear components and materials. These evaluations, originally estimated to be completed by 2012, were dropped as program goals in fiscal year 2016, according to NNSA officials and contractor representatives. In fiscal year 2016, NNSA broadly refocused the Enhanced Surveillance Program on multiple nuclear weapon life-extension efforts and supporting activities but has not developed a corresponding long-term strategy for the program. Instead, program officials have focused on developing general long-term goals and managing the program on a year-to-year basis under reduced funding levels to maintain key stockpile assessment capabilities. These general goals, however, do not provide measurable outcomes or encompass the entirety of the program. In addition, as GAO's previous work has shown, managing longer-term work, such as multiyear technology development projects, on an annual basis makes it difficult for Congress and other decision makers to understand up front what they are funding and what benefits they can expect. As a result, these projects may receive a lower priority and may not be consistently funded. GAO's body of work has identified a number of leading practices in federal strategic planning that include defining strategic goals, defining strategies and resources for achieving these goals, and developing and using performance measures to track progress in achieving these goals and to inform management decision making. A new strategy for the Enhanced Surveillance Program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving goals would allow the agency to better inform long-term planning and management decision making for the program as well as help ensure that it complements NNSA's other efforts to assess the nuclear weapons stockpile. GAO recommends that the NNSA Administrator develop a long-term strategy for the Enhanced Surveillance Program that incorporates leading practices. NNSA concurred with GAO's recommendation and estimated completion of a long-term strategy by June 2017.
The military services have two types of bands: (1) premier and specialty bands and (2) regional and field bands. The premier and specialty bands are predominantly located in the National Capital Region and have a ceremonial mission, but they also engage in community-relations activities. For example, the bands' performances include ceremonies at Arlington National Cemetery and events where high-level officials—such as the President and the military service Secretary and Chief of Staff—are in attendance. The regional and field bands are located throughout the United States and worldwide, and provide musical support to military units or commands by fulfilling ceremonial missions, participating in community-relations events, and performing for military service members. Bands typically consist of multiple musical groups, such as a ceremonial band, brass quintet, and popular music group. Figure 1 describes examples of the different types of musical groups a band may have. The Office of the Assistant to the Secretary of Defense for Public Affairs establishes policies and implementation guidance for DOD's public affairs programs, including community-relations activities. In this role, the Office of the Assistant to the Secretary of Defense for Public Affairs oversees the execution and movement of military bands to support community-relations activities. The services vary in their structures for managing their bands. The Navy centrally manages its band program, while the Army, Marine Corps, and Air Force have decentralized management of their bands. All Navy regional and field bands and the U.S. Naval Academy Band are field activities of the U.S. Navy Band. The U.S. Navy Band also provides funding to the Navy's regional and field bands through Fleet Band Activities, while the Superintendent of the U.S. Naval Academy provides funding to the U.S. Naval Academy Band. The Army, Marine Corps, and Air Force have service headquarters-level organizations that manage their band programs, but their service guidance provides that local commands maintain control over and provide funding for their bands. Table 1 identifies Army, Navy, Marine Corps, and Air Force offices that manage their military service's bands, and summarizes the offices' responsibilities. The band members' roles and responsibilities related to training and deployment vary across the services. Members of all bands except the U.S. Marine Band—"The President's Own"—complete basic training and meet ongoing physical fitness requirements. Their responsibilities for deploying to a combat environment and whether they perform nonmusical duties in combat environments vary by service and the type of band. Except for the Marine Corps, the primary mission of band members who deploy is to perform music. Marine Corps band members, according to band program officials, provide perimeter security or support convoy operations when deployed to a combat environment. In addition, according to military-service band program officials, members of Army National Guard and Air National Guard bands can be called upon to assist other Air and Army National Guard units with civil-defense duties and disaster-relief efforts. Table 2 shows the basic training, ongoing physical fitness, and combat environment requirements for the military services' bands. The military services reduced their number of bands by 9.3 percent, and also reduced military personnel authorizations dedicated to bands by 7.5 percent, from fiscal year 2012 through 2016.
Over the same period, the Navy and Air Force reported increases in their total operating costs for bands, while the Marine Corps reported that its costs declined. The Army did not have complete data for the operating costs of its reserve bands from fiscal year 2012 through 2015, but reported declines in total operating costs for its active-duty and National Guard bands. Pay and allowance costs of active-duty military personnel dedicated to bands decreased from calendar year 2012 through 2016 for all of the military services, consistent with the reductions in military personnel authorizations dedicated to active-duty bands. The number of bands in the four military services decreased from 150 in fiscal year 2012 to 136 in fiscal year 2016, a decline of 9.3 percent (see table 3). The extent of reductions in the number of bands varied by service, with the Air Force reporting the largest decrease and the Army reporting the smallest decrease in the number of bands from fiscal year 2012 through 2016. From fiscal year 2012 through 2016, according to military-service data and officials, total military personnel authorizations dedicated to bands decreased by 7.5 percent—from 7,196 in fiscal year 2012 to 6,656 in fiscal year 2016 (see table 4). The extent of reductions in military personnel authorizations varied by service and component. For example, the total number of military personnel authorizations dedicated to Air National Guard bands declined from 320 to 200—or 37.5 percent—from fiscal year 2012 through fiscal year 2016, while the total for Army Reserve and National Guard bands stayed the same in that period. According to military-service officials, resource constraints have led to past reductions in the size of their bands. Our analysis shows that the total military personnel authorizations dedicated to bands account for a relatively small amount of the military services’ end-strength authorizations, and have decreased at a similar rate compared to total service end-strength authorizations from fiscal year 2012 through 2016. Specifically, in fiscal years 2012 through 2016, the number of military personnel authorizations dedicated to bands was less than half a percent of the military services’ end strength for all services. In addition, the total number of military personnel authorizations dedicated to bands declined by 7.5 percent compared to a 6.6 percent decline in personnel authorizations overall (from 2.3 million authorizations in fiscal year 2012 to 2.1 million authorizations in fiscal year 2016) across the four military services over this period. The Army plans to reduce the number of bands and military band personnel from fiscal year 2017 through 2019. The Army plans to close 12 bands—8 active-duty bands and 4 reserve bands—and reduce the number of personnel authorizations dedicated to 43 National Guard bands over this period. As a result of these reductions, the Army plans to reduce the total number of military personnel authorizations dedicated to Army bands from 4,497 in fiscal year 2016 to 3,865 in fiscal year 2019, or by about 14 percent (see table 5). The other three services do not have plans to change the number or size of their bands at this time, according to service officials. The Navy and Air Force reported that the total operating costs of their bands increased from fiscal year 2012 through 2016, and the Marine Corps reported decreased costs over this period. 
The Army did not have complete data for its reserve bands from fiscal year 2012 through 2015, but reported decreases in total operating costs for its active-duty and National Guard bands. Operating costs for the bands include expenses not related to military personnel, such as travel, transportation, instruments, uniforms, office supplies, and civilian salaries. According to military-service band program officials, the military services use operations and maintenance appropriations to fund their band programs. At the component level, the Army active-duty, Army National Guard, Marine Corps active-duty, and Air National Guard bands reported decreases in their total operating costs from fiscal year 2012 through 2016, and the Navy active-duty and Air Force active-duty bands reported increases in their costs in the same period (see table 6). Navy band program officials stated that their bands' operating costs increased in part because the band program was not adequately funded to meet its mission in fiscal years 2012 through 2014 prior to the band program's reorganization in fiscal year 2015. In addition, the officials stated that the U.S. Navy Band had onetime renovation costs of $749,000 in fiscal year 2016 for its office facilities and had to increase civilian and contractor staffing to meet its new command responsibilities as a result of the band program's reorganization. An Air Force band program official stated that local commands are responsible for funding their bands, so bands may have had unique circumstances that led to increases in costs over time. For example, the official noted that after Bolling Air Force Base transitioned to Joint Base Anacostia-Bolling, the band at that location became responsible for funding such things as building maintenance for its facilities on the base. The official stated that the band was not previously responsible for these expenses, which, in part, led to increases in the band's funding in fiscal years 2014 through 2016. Travel and equipment expenses are among the largest operating-cost areas for individual bands, according to military-service band program officials. Bands travel throughout their areas of operations or responsibility to perform at events. In addition, according to military-service guidance or band program officials, bands maintain professional-grade instruments for their band members. Band program officials or band commanders we met with noted that band members need to have professional-grade instruments for several reasons, including working at a high number of events in a range of weather conditions and a variety of venues, such as an indoor reception or an outdoor parade. One band commander we met with stated that the band's travel costs were about $364,000 in fiscal year 2016, accounting for 43 percent of the band's total costs of about $850,000. That same band commander stated that the band's supply costs, such as instruments, instrument supplies, and uniforms, were at least about $142,000, or at least 17 percent of the band's total costs in fiscal year 2016. For another band, the band commander we met with reported that travel costs were about $228,000, or 68 percent of the band's total costs of about $338,000 in fiscal year 2016, while the band's costs of purchasing instruments, instrument supplies, sheet music, and sound supplies were about $92,000, or 27 percent of the band's total costs.
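As a rough illustration of how cost shares like these are derived, the following Python sketch uses only the two bands' approximate figures reported above. The band labels are placeholders, the category names are simplified, and small differences from the percentages cited in the text reflect rounding of the reported dollar amounts.

```python
# Share of a band's total operating costs by category, using the approximate
# fiscal year 2016 figures reported by two band commanders in this report.
bands = {
    "Band A": {"total": 850_000, "travel": 364_000, "supplies": 142_000},
    "Band B": {"total": 338_000, "travel": 228_000, "supplies": 92_000},
}

for name, costs in bands.items():
    for category in ("travel", "supplies"):
        share = costs[category] / costs["total"]
        print(f"{name} {category}: {share:.0%} of total operating costs")
# Band A: travel ~43%, supplies ~17%.
# Band B: travel ~67-68%, supplies ~27% (reported figures are rounded).
```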
According to data from the Defense Finance and Accounting Service, pay and allowance costs of active-duty military personnel dedicated to bands decreased from calendar year 2012 through 2016 for all of the military services (see table 7). Although a direct comparison with personnel authorizations is not possible because the personnel counts above are in fiscal years and pay and allowance costs were reported by DOD in calendar years, the Army, Navy, Marine Corps, and Air Force reduced pay and allowance costs over time consistent with the overall decrease of 10 percent in personnel authorizations dedicated to active-duty bands from fiscal year 2012 through 2016. We were not able to obtain data that were sufficiently reliable for determining trends in the pay and allowance costs of military personnel dedicated to National Guard and reserve bands in the Army and Air National Guard bands in the Air Force in time for our review. For example, in calendar years 2012 through 2016, the Defense Finance and Accounting Service could not identify pay and allowance data for between about 6 and 24 percent of the military personnel that the Defense Manpower Data Center reported were dedicated to Army National Guard bands in those years. The military services consider the instrumentation needed to perform at required events, as well as the needs of the region or command to which a band is assigned, to determine the size and location of their military bands. In addition, the military services assess the overall size of, and ongoing needs for, their bands through existing force-structure and budget review processes, typically in response to proposed resource reductions. The military services consider the instrumentation needed to perform at a variety of required events to organize their bands. According to military-service band program officials, the premier and specialty bands tend to be larger than regional and field bands because of the bands' unique missions and the number and types of high-profile events these bands perform. In fiscal year 2016, the premier and specialty bands ranged in size from 35 to 252 military personnel authorizations. According to military-service guidance or band program officials, the military services organize their premier and specialty bands so that each band consists of multiple musical groups to meet a variety of musical requirements. These groups can range from a large concert band to smaller musical groups, such as a rock band, and can perform simultaneously at different venues. For example, Army force-structure and band program officials stated that the U.S. Army Band—"Pershing's Own"—which had 252 military personnel authorizations dedicated to the band in fiscal year 2016, has seven musical groups that performed a total of about 6,000 events in fiscal year 2016, according to data from the Army. According to these officials, the musical groups include a 54-member ceremonial band that supports official government events and military funerals at Arlington National Cemetery, a 54-member concert band that performs at official and public engagements, and the 16-member Herald Trumpets ensemble, composed of 14 trumpet players and 2 drummers, that performs at the White House to welcome foreign ambassadors and visiting heads of state. Regional and field bands are generally smaller than premier and specialty bands and, with the exception of one 15-member band in the Air Force, ranged in size from 35 to 75 military personnel authorizations in fiscal year 2016.
Similar to the premier and specialty bands, the military services have organized these bands with multiple musical groups to perform at required events. For example, Air Force guidance states that regional and field bands must have a sufficient number of band members to support State Funeral Plans and deployments, and to ensure the bands have adequate personnel for assignment rotations both within and outside of the United States. In the case of the Marine Corps, officials stated that each regional and field band needed to be the size of a rifle platoon to meet its ceremonial requirements and because band members may deploy to support combat operations. Figure 2 shows the organization of a 35-member Navy band, illustrating how a military band is organized into multiple groups to meet its musical requirements. The military services have generally determined the location of their regional and field bands based on the command or region the bands support. Appendixes III and IV include a map showing the location of active-duty and reserve-component bands, respectively, in fiscal year 2016. Army—Army guidance provides rules of allocation and stationing for regional and field bands. The guidance allows planners to determine required resources and personnel to execute music support operations and identify stationing and mission command relationships. Allocations and stationing are based on the type of organization being supported, such as division headquarters or training centers, as well as the number of brigades. According to the Army, the Army has assigned its active-duty regional and field bands to division commands and training centers, and its National Guard and reserve regional and field bands geographically based on factors such as (1) population centers to support recruitment and retention of Army musicians and (2) the location of troop and veteran populations in the states and territories. For example, an Army Reserve band is located in Los Angeles County, California, which had the highest estimated number of veterans in the United States as of the end of fiscal year 2015, and an Army National Guard band is located in Maricopa County, Arizona, which had the second-highest estimated number of veterans in the United States. Navy—According to Navy band program officials, the Navy's regional and field bands are located in the largest fleet or headquarters locations. Each of the regional and field bands located within the contiguous United States has a geographic area of responsibility, while the operational commanders define the geographic areas of responsibility for the regional and field bands in Hawaii, Italy, and Japan. Marine Corps—A Marine Corps band program official stated that the Marine Corps has assigned its regional and field bands to major commands. Marine Corps guidance requires the commanding general of the commands to which bands are assigned to determine the size of each band's area of responsibility for performing events, which the guidance defines as the geographic area in which an installation, its units, and personnel have an economic and social impact. For example, the commanding general of the 3rd Marine Aircraft Wing, located in San Diego, California, established its band's area of responsibility for military and civilian events as within a 100-mile radius of the installation, including other specific Marine Corps units outside of this radius, such as the Marine Corps Air Station in Yuma, Arizona.
A band program official stated that the Marine Corps regional and field bands are located at major commands to provide ceremonial support to the largest number of Marines and subordinate commands. Air Force—According to Air Force band program officials, the Air Force has assigned its active-duty regional and field bands to major commands and generally located Air National Guard bands in states with higher numbers of Air National Guard wings. Air Force guidance assigns a geographic area of responsibility to the active-duty and Air National Guard regional and field bands located within the contiguous United States, while the commands for the two bands located in Germany and Japan assign their bands' geographic areas of responsibility. Air Force band program officials noted that they have also kept the active-duty bands located with major commands, in part, because they are spread out evenly across the United States where the bands can reach large population centers. The military services consider the overall size of, and ongoing needs for, their military bands through existing force-structure and budget reviews. In the past, the services have generally assessed the size of their bands in response to proposed resource reductions; however, Army force-structure officials stated that the Army plans to make recommendations based on a review of its music structure by the end of fiscal year 2017. Army—The Army reviews the number and size of its bands through its annual Total Army Analysis process, during which the Army determines how it allocates its end strength among its units. Army force-structure officials stated that they have considered several factors when making force-structure decisions regarding band numbers and size, including senior-leader priorities, critical mission needs for other organizations, and the location of other military bands. For example, Army force-structure officials stated that having an Air Force band in San Antonio, Texas, was a factor in the Army's plans to close an Army band in San Antonio in fiscal year 2019. In February 2017, the Director of the Army Staff directed the Commander of the U.S. Army Training and Doctrine Command and the Chief of Army Music to conduct a comprehensive review of the Army's music structure, including determining the proper organization, mission and goals, functions, priorities, and management oversight for Army bands. Army force-structure officials stated that the recommendations from that review will be made to the Vice Chief of Staff of the Army by the end of fiscal year 2017. Navy—Navy manpower officials stated that the Navy considers the number and size of Navy bands as part of the service's annual budget-development process. During the fiscal year 2012 budget-development process, the Navy decided to reduce the size of its bands to offset resource needs for other programs because of reductions to the Navy's overall end strength, according to Navy manpower officials. Subsequent to the decision to make these reductions, the Navy reorganized its band program effective in fiscal year 2015, in part because Navy band program officials wanted to ensure that all Navy bands had a sufficient number of band members to meet their primary mission of performing ceremonies. Marine Corps—Marine Corps officials stated that they consider their bands as part of reviews of total force structure. In fiscal year 2013, after a force-structure review, the Marine Corps closed two regional and field bands because of budget reductions.
In addition, in March 2017, a Marine Corps force-structure official stated that in an ongoing review officials had considered reducing the size of bands to offset increases in end strength needed to support other new Marine Corps capabilities. However, the official stated that the Marine Corps decided not to reduce the size of bands because senior officials recognized how much the bands are used by commands and noted the bands' value to troop welfare and community relations, as well as band members' secondary role of providing perimeter security in combat. Air Force—Air Force manpower officials stated that the Air Force reviews the number and size of its bands through its annual budget-development process. From fiscal year 2012 to 2014, the Air Force closed three active-duty and six Air National Guard regional and field bands to address budget reductions or to offset increases for other mission needs, according to band program officials. During the fiscal year 2015 budget-development process, the Air Force Bands Division submitted four options for reducing the number and size of bands that took into consideration, among other things, the reduced support to major commands and the number of outreach opportunities missed to connect with industry leaders and the public in the areas that would no longer have band support. However, the Air Force did not implement any of these options. Air Force manpower officials noted that, when the Air Force has proposed past reductions, the commanders and community leaders strongly advocated for maintaining bands assigned to their command and local areas because of the bands' effect on troop morale and community relations. The military services have tracked and used information on band events; however, the services have not developed objectives and measures to assess how their bands are addressing the bands' missions, such as inspiring patriotism, enhancing the morale of troops, and promoting U.S. interests abroad. All four military services have tracked information, such as the number and type of band events, and military bands reported using this information to aid their planning for any improvements at future events. The type of tracked information varies, but all services at a minimum track the number and types of events the bands have performed, as well as the number of audience members at these events and broadcast audience counts. In addition, the Navy, Marine Corps, and Air Force track the number of event requests their bands are not able to fulfill. Military bands generally enter this information into a database or regularly report the information to the services' band program offices. We found that the number of audience members varies widely depending on the type of event. For example, according to Air Force data, one of the U.S. Air Force Band's musical groups performed at the Super Bowl in 2016 in front of an estimated 71,000 ticketholders, while another musical group performed at a service member's promotion ceremony that had an estimated 75 people in attendance. Table 8 shows the reported number of events performed by the military bands, the number of event requests that were declined, and the estimated number of audience members at events in fiscal year 2016, according to data collected by the military services. Each military service categorizes the types of events performed by its bands differently. The Army, Navy, and Air Force track several specific categories for the types of events their bands perform.
For example, the Army tracks, among other categories, the number of funerals performed, which accounted for 35 percent of the events Army bands performed in fiscal year 2016, according to Army data. The Marine Corps categorizes the types of events its regional and field bands perform more broadly as either "Military" or "Civilian," and reported that 79 percent and 21 percent of the events performed by these bands in fiscal year 2016 were "Military" and "Civilian," respectively. Military bands perform at a variety of events, such as military ceremonies, community events or parades, and funerals for service members. According to band commanders we met with, their bands prioritize performing at military ceremonies or events where service members are in attendance. In addition, the Office of the Assistant to the Secretary of Defense for Public Affairs issues an annual outreach planning document that articulates, for the upcoming fiscal year, (1) the military services' priorities for community-relations activities, (2) key resources available for use, (3) summary details about known and anticipated activities, and (4) certain cost information for the identified activities. The responses to our questionnaire showed how individual bands track and use information to plan future events. In their responses, 101 of 125 bands (or 81 percent) reported that they track social-media analytics, such as frequency of mentions on Facebook. In addition, we found that bands use their band websites, Facebook, Twitter, and YouTube to expand the reach of their events. For example, in November 2016, the U.S. Army Field Band posted a YouTube video of the band's performance of the "Battle Hymn of the Republic" that had about 1.6 million views as of June 2017. The U.S. Air Force Band also posted a YouTube video in December 2015 of an event at Union Station in Washington, D.C., that had about 4 million views as of June 2017. The military bands that responded to our questionnaire identified the following examples of how they used tracked information to make changes to their performances: An Army band reported changing the timing of summer concerts from Sundays to Saturdays to meet its audiences' preference. An Air Force band determined that audience members wanted an overall entertainment product with performances using lighting, staging, and other elements—rather than just music. An Army band stationed in a foreign country determined that audiences wanted mostly small-group performances, local pop music, and other music that caters to both U.S. and local national audiences. While the military services have tracked information on the events their bands performed, they have not developed objectives and performance measures to assess how their bands are addressing the bands' missions, such as inspiring patriotism, enhancing the morale of troops, and promoting U.S. interests abroad. Table 9 shows the missions for the military bands, according to military-service guidance. In May 2017, officials from the Office of the Assistant to the Secretary of Defense for Public Affairs stated that DOD is revising its guidance for community-relations policy implementation to incorporate an overarching mission for military bands. Band program officials cited several examples of how they can determine that their bands are addressing their missions.
Indicators of demand—Band program officials noted that the audience counts and number of declined event requests as cited above indicate the demand for their events and that the demand exceeds supply. Air National Guard band program officials stated that in addition to the counts of performances cited above, Air National Guard bands survey audiences during summer tours to understand how their bands are received by the general public. For example, based on responses to these surveys in 2016, program officials reported that 1,135 (or 98 percent) of 1,154 survey respondents stated that they had a better understanding of the federal and state missions of the Air National Guard after attending the bands' performances. Examples of effectiveness—Military service band program officials cited examples where bands were used to address specific challenges or objectives in their local area of operations. Air Force band program officials provided an example where recruiters at a base had difficulty recruiting diverse service members, so in March 2016 an Air Force band performed recruiting concerts at local schools; the result was the band reached 7,000 students, and recruiters reported an increase in queries after these events. Support from senior leadership—Officials from all of the military-service band programs stated that senior leadership has supported the bands' missions, citing how bands aid in outreach to troops, communities, or international audiences. For example, Navy and Air Force band program officials stated that senior leadership has noted how performances by bands can be an initial step towards improving relationships with foreign nations. The Commanding Officer of the U.S. Navy Band provided an example where the Chairman of the Joint Chiefs of Staff hosted a delegation from a foreign nation that had tense relations with the United States at the time. According to the Commanding Officer, the U.S. Navy Band's chorus provided after-dinner entertainment, and as part of the performance, sang one of the foreign nation's folk songs in the native language, which was videotaped, posted on YouTube, and had over 1.1 million views. While these examples provide important context about the bands' reach and impact, the approaches do not include measurable objectives or exhibit several of the important attributes performance measures should include. GAO's Standards for Internal Control in the Federal Government states that management should define objectives in specific and measurable terms so they are understood at all levels of the entity and that performance towards achieving those objectives can be assessed. In addition, the standards state that management should establish activities to monitor performance measures. GAO has developed several important attributes that performance measures should include if they are to be effective in monitoring progress and determining how well programs are achieving their mission, such as performance measures being clear, objective, and measurable, and having baseline and trend data to identify, monitor, and report changes in performance and to help ensure that performance is viewed in context. Table 10 identifies each attribute and its corresponding definition. However, we found that the services have not developed objectives and performance measures that include several of the important attributes for successful performance measures to assess how their bands are addressing the bands' missions.
Specifically, the services' approaches do not exhibit the linkage attribute in that there is no clear alignment between the information and how it affects the bands' ability to achieve their missions. GAO's key attributes state that linkages between an organization's mission and measures are most effective when they are clearly communicated and create a line of sight so that everyone understands how their work contributes to the organization's efforts. Also, the military services have not established a baseline for the information, so they are not able to assess the program's performance and progress over time. Identifying and reporting deviations from the baseline as a program proceeds provides valuable information for oversight by identifying areas of program risk and their causes to decision makers. Lastly, the services' approaches do not include measurable targets, which would facilitate future assessments of whether overall objectives were achieved. Officials from all of the military-service band program offices stated that they have not quantified whether their bands are addressing their missions because the bands' missions, such as inspiring patriotism, enhancing the morale of troops, and promoting U.S. interests abroad, are not quantitatively measurable. While we believe that inspiring patriotism and enhancing the morale of troops could be quantitatively measured through techniques such as surveys and focus groups, band program officials stated, and we recognize, that they have limited resources to conduct these types of activities. We also acknowledge that evaluating how the bands are addressing their missions is difficult. However, using the information the military services already track, such as the number of events performed or the number of audience members in attendance, the services could, for example, develop a baseline assessment for current performance, set measurable targets, and monitor trends over time to assess progress. DOD and the services are taking steps to improve how they track information on events to measure the effectiveness of military bands. In September 2016, the Chief of Army Music established an Army Music Analytics Team to define and gather data points to regularly collect information from Army bands to report quantifiable effects on event performance, audience engagement, and messaging. In June 2017, an Army band program official stated that the team has expanded its scope to collaborate with academia and industry to obtain insights and identify metrics that can be used to demonstrate the effectiveness of Army bands. Also, in response to our review, officials from the Office of the Assistant to the Secretary of Defense for Public Affairs stated that they met with service band program officials and band commanders to establish standard metrics to collect on events performed by bands. According to these officials, DOD plans to include these metrics in its guidance on community-relations policy implementation. DOD's and the services' actions represent key steps that can inform and guide efforts to establish measurable objectives and performance measures that include important attributes.
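To illustrate how data the services already track could be combined with a baseline and a measurable target, the following Python sketch defines one hypothetical measure. The measure name, mission linkage, and all figures are illustrative assumptions and are not drawn from DOD or service guidance; the sketch is meant only to show the structure a measure consistent with the attributes discussed above might take.

```python
# Sketch: turning data the services already track (events performed,
# audience counts) into a performance measure with linkage, a baseline,
# and a measurable target. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str            # what is measured
    mission_link: str    # linkage: which band mission the measure supports
    baseline: float      # value from already-tracked fiscal year 2016 data
    target: float        # measurable target for a future fiscal year

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target gap closed by the current value."""
        return (current - self.baseline) / (self.target - self.baseline)

measure = PerformanceMeasure(
    name="Audience members reached per 100 events performed",
    mission_link="Inspire patriotism and promote public awareness",
    baseline=35_000,
    target=40_000,
)
print(f"{measure.name}: {measure.progress(37_000):.0%} of the way to target")  # 40%
```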
Developing and implementing measurable objectives and performance measures for their band programs that demonstrate linkage to the bands’ missions, include an established baseline of data, and have measurable targets could provide DOD and congressional decision makers with the information they need to assess the value of the military bands relative to resource demands for other priorities. DOD uses military bands to inspire patriotism, enhance the morale of the troops, and promote public awareness by supporting a range of activities, including funerals for military service members, events where high-level officials such as the President are in attendance, and community-relations activities such as parades in local communities. However, the services have not developed measurable objectives and performance measures that include important attributes for successful performance measures, including linkage, a baseline, or measurable targets, to assess how their bands are addressing the bands’ missions. While we acknowledge that evaluating how bands are addressing their missions is difficult, the information the services already collect and the additional steps they have been taking to measure their bands’ effectiveness could inform and guide efforts to establish such measurable objectives and performance measures that are consistent with GAO’s Standards for Internal Control in the Federal Government and GAO’s past work on important attributes of performance measures. Doing so could provide information that would assist DOD and congressional decision makers as they assess the value of the military bands relative to resource demands for other priorities. To help ensure that each service can provide information to decision makers as they assess the value of the military bands relative to resource demands for other priorities, we recommend that the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps, direct the Chief of Army Music, Commanding Officer of the U.S. Navy Band, Chief of the Air Force Bands Division, and Director of Marine Corps Communications, respectively, each to develop and implement measurable objectives and performance measures for their respective services’ bands. At a minimum, these measures should include the important attributes for successful performance measures of demonstrating linkage to the program’s mission, establishing a baseline, and having measurable targets to demonstrate program performance. We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix V, DOD concurred with our recommendations. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (213) 830-1011 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To gather information about military bands for this review, we sent a questionnaire to Army, Marine Corps, and Air Force bands actively performing in fiscal year 2017 and to the Executive Officer of the U.S. 
Navy Band and Director of Navy Fleet Band Activities. The Executive Officer of the U.S. Navy Band or the Director of Navy Fleet Band Activities completed a questionnaire on behalf of each Navy band actively performing in fiscal year 2017 because the Navy centrally manages the operations of Navy bands. For the other three services, the individual bands completed the questionnaire. The total number of bands or band operating locations surveyed was 134. As part of the questionnaire’s development, a representative from each military service familiar with the service’s bands reviewed a draft questionnaire for substantive issues, and a GAO survey specialist reviewed the questionnaire for technical issues. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pretested our questionnaire with a Navy band program official with responsibilities for managing the Navy music program and with leadership from three active-duty bands from the Army, Marine Corps, and Air Force; one Army Reserve band; and two Army National Guard bands. During the pretests, conducted in person or by phone, we asked the officials to read the instructions and each question out loud and to tell us how they interpreted the question. We then discussed the instructions and questions with officials to determine whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, (3) the questionnaire was unbiased, and (4) the questionnaire placed an undue burden on the officials completing it, and to identify potential solutions to any problems identified. We noted any potential problems and modified the questionnaire based on the feedback received from the reviewers and pretests, as appropriate. To administer the questionnaire, we sent e-mail notifications to each recipient beginning on February 6, 2017. On February 8, 2017, we sent the questionnaire as a Microsoft Word form with a cover e-mail and asked the recipients to fill in the questionnaire and e-mail it back to us. We closed the survey on March 20, 2017. Overall, we received completed questionnaires for 129 bands or band operating locations, for a response rate of 96 percent. Because we attempted to contact all bands rather than a sample and we are not generalizing results to any bands, there was no sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors and help ensure the accuracy of the answers that were obtained. For example, a social-science survey specialist designed the questionnaire, in collaboration with analysts having subject-matter expertise. Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by military-band subject-matter experts and a survey specialist, as mentioned above. Data from the Word questionnaires were entered manually by a GAO contractor, data entry was checked, and any data-entry errors were corrected before analyses.
We examined the results to identify inconsistencies and other indications of error, and addressed such issues as necessary. Quantitative data analyses were conducted by an analyst using Microsoft Excel, and another analyst verified the analyses. The verbatim wording of a key survey question whose results are discussed in the body of this report is below.

23. In fiscal year 2016, which of the following types of metrics, if any, did your band track related to the engagements it performed? Please check one box in each row.
Broadcast or streamed audience counts
Social media analytics (e.g., Facebook Likes, Twitter retweets or favorites)
Type of engagement (e.g., civic engagement, base support, recruiting)
Other positive or negative feedback not listed above (please specify below)
a. If your band has made changes to how, when, what, or where it performs based on observations from the metrics above, please provide examples from fiscal year 2016. The box will expand as you type.

We sent a questionnaire to Army, Marine Corps, and Air Force bands actively performing in fiscal year 2017 and to the Executive Officer of the U.S. Navy Band and Director of Navy Fleet Band Activities to gather information on the types of facilities and modes of transportation used by the bands. The Executive Officer of the U.S. Navy Band or the Director of Navy Fleet Band Activities completed questionnaires on behalf of each Navy band actively performing in fiscal year 2017 because the Navy centrally manages the operations of Navy bands. For the other three services, the individual bands completed the questionnaire. Based on the responses to our questionnaire, we made the following observations about the bands’ facilities and transportation resources. The types of facilities that bands used varied. When asked to describe the facilities used by their band in fiscal year 2016, bands responded that their facilities included band halls, chapels or church buildings, armories, and former base dining halls, among others. Bands provided additional details on the facilities they used in fiscal year 2016, including the following: Bands reported using between one and six buildings. Premier and specialty bands typically reported using more buildings than the regional and field bands. Of the 128 bands that responded, 78 (or 61 percent) stated that they shared at least one building with another organization. In some cases, bands responded that they shared a building with another organization but not the band’s offices or rooms. In other cases, bands indicated that they shared specific areas with another organization. For example, one band reported that its rehearsal hall was occasionally used as a classroom, while another band stated that one of its larger musical groups rehearsed in the base dining facility. Bands generally reported having rehearsal space, office space, and storage space. The overall size of these three types of spaces ranged from 260 to about 48,000 square feet. Premier and specialty bands reported that the overall size of their rehearsal, office, and storage space ranged from 5,000 to about 48,000 square feet, while regional and field bands reported that the overall size of these spaces ranged from 260 to over 28,000 square feet. Four bands reported that facilities they used in fiscal year 2016 were built in fiscal years 2012 through 2016, at a total reported cost of $56 million.
In addition, 10 bands identified single projects greater than $1 million to repair, renovate, or construct a facility for the bands’ use that were initiated in fiscal years 2012 through 2016, which they reported had a total cost of about $29 million. Bands also described the projects and why they were needed. For example, one band reported that the project provided space so that multiple music groups could train at the same time. In another case, a band reported that renovations were needed to correct aged facilities based on inspection results. Based on the responses to our questionnaire, the transportation used to travel to performances varied by band. When asked to identify the modes of transportation the bands used to travel to performances and whether bands had exclusive use of any vehicles in fiscal year 2016, bands provided us with the following information: When traveling to events, bands reported most often using (1) base motor-pool vehicles; (2) buses, cars, vans, or trucks leased or chartered from a private company; or (3) commercial air. Of the 128 that responded, 69 bands (or 54 percent) stated that they had exclusive use of certain vehicles, such as box trucks, pickup trucks, passenger and cargo vans, and buses, among others. The numbers of vehicles that bands had exclusive use of ranged from 1 to 24, with premier and specialty bands reporting that they had exclusive use of more vehicles than regional and field bands. Specifically, premier and specialty bands reported having exclusive use of 1 to 24 vehicles per band, while regional and field bands reported having exclusive use of 1 to 8 vehicles per band. Figure 3 shows the location of the 31 Army, 11 Navy, 12 Marine Corps, and 9 Air Force active-duty bands in fiscal year 2016. The active-duty bands have different areas of responsibility for performing events: Army guidance states that a band’s geographic area of responsibility is the same as its installation commander’s geographic area of responsibility. Navy guidance establishes a geographic area of responsibility for bands located within the contiguous United States, while the operational commanders define the geographic areas of responsibility of the regional and field bands in Hawaii, Italy, and Japan. Marine Corps guidance states that the commanding general of the commands to which bands are assigned determines the size of each band’s area of responsibility. Air Force guidance assigns a geographic area of responsibility to its bands located within the contiguous United States, while the commands for the bands located in Germany and Japan assign their bands’ geographic areas of responsibility. Figure 4 shows the location of the 5 Air National Guard, 51 Army National Guard, and 17 Army Reserve bands in fiscal year 2016. According to military-service band program officials, members of the Air National Guard, Army National Guard, and Army Reserve bands are on duty for one weekend per month and 2 weeks during the summer. In addition, these bands have different areas of responsibility for performing events: Air Force guidance assigns a geographic area of responsibility for each Air National Guard band. The Army National Guard bands generally perform events within their respective state or territory, according to Army band program officials. According to Army band program officials, the Army has assigned the Army Reserve bands to Army Reserve Regional Support Commands, and these bands perform events throughout the command’s area of responsibility. 
In addition to the contact named above, key contributors to this report were Margaret A. Best (Assistant Director), William J. Cordrey, Felicia M. Lopez, Vikki L. Porter, Richard S. Powelson, Michael D. Silver, Jared A. Sippel, Wayne J. Turowski, and Melissa A. Wohlgemuth.

The Department of Defense (DOD) uses military bands to enhance the morale of the troops, provide music for ceremonies, and promote public awareness. Bands across the military services support a range of activities, including funerals for military service members, events attended by high-level officials, and community-relations activities such as parades. In fiscal year 2013, DOD restricted its community-relations activities, including placing travel restrictions on bands, as a result of the sequestration ordered in March 2013. DOD reinstated community-relations activities at a reduced capacity in fiscal year 2014. House Report 114-537 included a provision for GAO to review DOD's requirement for military bands. This report (1) describes the trends in personnel and costs for bands from fiscal year 2012 through 2016, and (2) assesses the extent to which the military services have evaluated how the bands are addressing their missions, among other objectives. GAO analyzed data from the military services on military band personnel and reported operating costs of bands. GAO also reviewed the military services' guidance and approaches to evaluating their bands and interviewed band program officials at the military services. All of the military services reported reducing the number of military band personnel from fiscal year 2012 through 2016, but trends in total reported operating costs for the bands, such as travel and equipment expenses, varied across the services. Total military personnel dedicated to bands decreased from 7,196 in fiscal year 2012 to 6,656 in fiscal year 2016, or 7.5 percent (see figure). The Navy and Air Force reported that their total operating costs for bands over this period increased by $4.1 million and $1.6 million, respectively, and the Marine Corps reported that its costs declined by about $800,000. The Army did not have complete cost data for its reserve bands, but reported that the operating costs of its active-duty and National Guard bands declined by $3.6 million and about $500,000, respectively, from fiscal year 2012 through 2016. The military services have not developed objectives and measures to assess how their bands are addressing the bands' missions, such as inspiring patriotism and enhancing the morale of troops. All four military services have tracked information, such as the number and type of band events. Further, military-service officials cited the demand for band performances, anecdotal examples, and support from senior leadership as ways to demonstrate the bands are addressing their missions. However, the military services' approaches do not include measurable objectives or performance measures that have several important attributes, such as linkage to mission, a baseline, and measurable targets, that GAO has found are key to successfully measuring a program's performance. Military band officials cited the difficulty and resources required to quantify how the bands are addressing their missions, but the military services are taking steps to improve how they track information on band events to measure the bands' effectiveness. GAO believes these key steps could inform and guide the services' efforts to develop and implement measurable objectives and performance measures.
Doing so could provide decision makers with the information they need to assess the value of the military bands relative to resource demands for other priorities. GAO recommends that the Army, Navy, Marine Corps, and Air Force each develop and implement measurable objectives and performance measures for their bands. DOD concurred with the recommendations.
The nonprime mortgage market has two segments: Subprime: Generally serves borrowers with blemished or limited credit histories, and the loans feature higher interest rates and fees than prime loans. Alt-A: Generally serves borrowers whose credit histories are close to prime, but the loans have one or more high-risk features, such as limited documentation of income or assets or the option of making monthly payments that are lower than would be required for a fully amortizing loan. Of the 14.5 million nonprime loans originated from 2000 through 2007, 9.4 million (65 percent) were subprime loans and 5.1 million (35 percent) were Alt-A loans. In both of these market segments, two types of loans are common: fixed-rate mortgages, which have unchanging interest rates, and adjustable-rate mortgages (ARMs), which have interest rates that can adjust periodically on the basis of changes in a specified index. Specific types of ARMs are prevalent in each market segment. “Short-term hybrid ARMs” accounted for 70 percent of subprime mortgage originations from 2000 through 2007 (see fig. 1). These loans have a fixed interest rate for an initial period (2 or 3 years) but then “reset” to an adjustable rate for the remaining term of the loan. In the Alt-A segment, “payment-option ARMs” are a common adjustable-rate product, accounting for 17 percent of Alt-A mortgage originations from 2000 through 2007. For an initial period of typically 5 years, or until the loan balance reaches a specified cap, this product provides the borrower with multiple payment options each month, including minimum payments that are lower than what would be needed to cover any of the principal or all of the accrued interest. After the initial period, payments are “recast” to include an amount that will fully amortize the outstanding balance over the remaining loan term. Several payment categories describe the performance of mortgages, including nonprime mortgages: Current: The borrower is meeting scheduled payments. Delinquent: The borrower is 30 to 89 days behind in scheduled payments. Default: The borrower is 90 days or more delinquent. At this point, foreclosure proceedings against the borrower become a strong possibility. In the foreclosure process: The borrower has been delinquent for more than 90 days, and the lender has elected to foreclose in what is often a lengthy process. The loan is considered active during the foreclosure process. Completed the foreclosure process: The borrower’s loan terminates and foreclosure proceedings end with one of several possible outcomes. For example, the borrower may sell the property or the lender may repossess the home. Prepaid: The borrower has paid off the entire loan balance before it is due. Prepayment often occurs as a result of the borrower selling the home or refinancing into a new mortgage. In this report, we describe mortgages in default or in the foreclosure process as “seriously delinquent.” As we have stated in previous reports, a combination of falling house prices, aggressive lending practices, and weak economic conditions has contributed to the increase in troubled mortgages. For example, in 2009, we noted that falling house prices had left a substantial proportion of nonprime borrowers in a negative equity position—that is, their mortgage balances exceeded the current value of their homes—limiting their ability to sell or refinance their homes in the event they could not stay current on their mortgage payments.
Additionally, we reported that an easing of underwriting standards and wider use of certain loan features associated with poorer loan performance contributed to increases in mortgage delinquencies and foreclosures. These features included mortgages with higher loan-to-value (LTV) ratios (the amount of the loan divided by the value of the home at loan origination), adjustable interest rates, limited or no documentation of borrower income or assets, and deferred payment of principal or interest. Also, in some cases, mortgage originators engaged in questionable sales practices that resulted in loans with onerous terms and conditions that made repayment more difficult for some borrowers. Furthermore, rising unemployment has contributed to mortgage defaults and foreclosures because job loss directly affects a borrower’s ability to make mortgage payments. The foreclosure crisis has imposed significant costs on borrowers, neighborhoods, and taxpayers. For example, vacant and foreclosed properties have contributed to neighborhood blight and reduced property values in many communities. Additionally, foreclosures affecting minority populations and the high incidence of subprime lending to members of these groups have heightened concerns that these groups have received disparate treatment in mortgage lending. In light of these costs and concerns, Congress and federal agencies have taken a number of steps to address ongoing problems in the mortgage market and prevent their recurrence. These efforts include programs to modify or refinance the loans of distressed borrowers and legislation to strengthen mortgage-lending standards and prevent mortgage originators from steering borrowers into high-risk or high-cost mortgages. As of December 31, 2009, 63 percent of the 14.5 million nonprime loans originated from 2000 through 2007 (the last year in which substantial numbers of nonprime mortgages were made) were no longer active. Fifty percent of the nonprime loans originated during this period had prepaid, and 13 percent had completed foreclosure (see fig. 2). Among the 4.59 million nonprime loans that remained active as of the end of 2009, about 16 percent were in default (90 or more days late) and about 14 percent were in the foreclosure process, for a total serious delinquency rate of 30 percent (see fig. 3). About 12 percent were in a less serious stage of delinquency (30 to 89 days late), and the remaining 58.5 percent were current. The performance of nonprime mortgages originated from 2000 through 2007 deteriorated from the end of 2008 through the end of 2009. At the end of 2009, 1.38 million active nonprime loans were seriously delinquent, compared with 1.10 million at the end of 2008. Over the 12-month period, the serious delinquency rate rose from 21 percent to 30 percent. About three-quarters of the year-over-year change in the number of serious delinquencies was due to an increase in defaults, while the remainder was due to an increase in loans in the foreclosure process. As shown in figure 4, the number of active nonprime loans in default grew each quarter, with the largest increases occurring in the third and fourth quarters of 2009. By comparison, the number of active nonprime loans in the foreclosure process grew in the first two quarters of the year, held almost steady in the third quarter, and declined in the last quarter of 2009. The decline in the number of loans in the foreclosure process may be attributable to decisions by lenders not to begin foreclosure proceedings on defaulted loans.
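The rates reported in this section follow directly from the payment categories defined earlier: a loan is seriously delinquent if it is in default (90 or more days late) or in the foreclosure process. As an illustration only, the following minimal Python sketch shows how those categories might be tallied from loan-level servicing records to produce a serious delinquency rate for active loans; the field names (days_past_due, in_foreclosure) are hypothetical placeholders rather than fields in the CoreLogic LoanPerformance data used for this report.

    # Minimal sketch: classify active loans and compute a serious delinquency rate.
    # Field names are hypothetical placeholders, not CoreLogic LP fields.
    from collections import Counter

    def payment_status(days_past_due, in_foreclosure):
        """Map an active loan to the payment categories used in this report."""
        if in_foreclosure:
            return "in foreclosure"            # seriously delinquent
        if days_past_due >= 90:
            return "default"                   # seriously delinquent
        if days_past_due >= 30:
            return "delinquent (30-89 days)"
        return "current"

    def serious_delinquency_rate(active_loans):
        """Share of active loans in default or in the foreclosure process."""
        counts = Counter(payment_status(d, f) for d, f in active_loans)
        total = sum(counts.values())
        serious = counts["default"] + counts["in foreclosure"]
        return serious / total if total else 0.0

    # Illustrative records (days past due, in foreclosure flag), not actual data:
    sample = [(0, False), (45, False), (120, False), (200, True), (0, False)]
    print(serious_delinquency_rate(sample))    # 0.4 for this illustrative sample

In this illustrative five-loan sample, one loan is in default and one is in the foreclosure process, so the computed serious delinquency rate is 40 percent; applied to actual servicing records, the same tally underlies the 30 percent rate cited above.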
In addition, among all nonprime loans originated from 2000 through 2007, the cumulative percentage that had completed the foreclosure process increased from 10 percent at the end of 2008 to 13 percent at the end of 2009. About 475,000 nonprime loans completed foreclosure in 2009, or roughly 119,000 per quarter. Most (63 percent) of the 759,000 decline in the number of active loans in 2009 was attributable to loans completing foreclosure, rather than to prepayments. In 2009, the performance of nonprime loans differed between the subprime and Alt-A market segments and, within each segment, among product types (fixed-rate mortgages versus ARMs). Nonprime loan performance also varied by the year of loan origination (cohort year) and by location. In general, the subprime market segment performed worse than the Alt-A segment in 2009. Of the 2.76 million subprime loans that were active at the end of 2008, 10 percent (267,000) completed foreclosure in 2009. By comparison, 8 percent (208,000) of the 2.59 million Alt-A loans that were active at the end of 2008 completed foreclosure in 2009. Cumulatively, 15 percent (1.41 million) of subprime loans originated from 2000 through 2007 had completed foreclosure as of December 31, 2009, compared with 9 percent (474,000) of Alt-A loans. Among active loans at the end of 2009, 36 percent (858,000) of subprime loans were seriously delinquent, compared with 23 percent (517,000) of Alt-A loans. However, Alt-A loans accounted for 55 percent (152,000) of the 277,000 year-over-year increase in the number of seriously delinquent loans. Within the subprime and Alt-A market segments, loan performance varied by product type. As we stated in a previous report, serious delinquency rates were higher for certain adjustable-rate products common in the subprime and Alt-A market segments than they were for fixed-rate products or the market as a whole. Although many nonprime borrowers with adjustable-rate loans fell behind on their mortgages before their payments increased, the higher serious delinquency rates for these products may partly reflect the difficulties some borrowers had in making their payments when their interest rates reset to higher levels or when their monthly payments recast to fully amortizing amounts. In the subprime market segment, the serious delinquency rate for short-term hybrid ARMs was 48 percent at the end of 2009, compared with 21 percent for fixed-rate mortgages and 36 percent for all active subprime loans (see fig. 5). The serious delinquency rate increased by 11 percentage points for short-term hybrid ARMs in 2009, compared with 8 percentage points for fixed-rate mortgages and 10 percentage points for all active subprime loans. However, the year-over-year increase in the number of fixed-rate mortgages that were seriously delinquent (over 62,000) was greater than the corresponding increase among short-term hybrid ARMs (over 47,000), even though short-term hybrid ARMs were more prevalent than fixed-rate mortgages among subprime loans. In the Alt-A segment, the serious delinquency rate at the end of 2009 was higher for payment-option ARMs (38 percent) than for fixed-rate mortgages (15 percent) and active Alt-A mortgages as a whole (23 percent) (see fig. 6). The serious delinquency rate increased by 14 percentage points for payment-option ARMs in 2009, compared with 7 percentage points for fixed-rate mortgages and 9 percentage points for all active Alt-A mortgages. 
Although the serious delinquency rate grew faster for payment-option ARMs than for fixed-rate mortgages, the year-over-year increase in the number of seriously delinquent loans was greater for fixed-rate mortgages (about 63,000) than for payment-option ARMs (over 36,000), reflecting the preponderance of fixed-rate mortgages in the Alt-A market segment. Nonprime mortgages originated from 2004 through 2007 accounted for most of the distressed loans at the end of 2009. Of the active subprime loans originated from 2000 through 2007 that were seriously delinquent as of December 31, 2009, 94 percent were from those four cohorts. In addition, loans from these cohorts made up 77 percent of the subprime loans that had completed the foreclosure process. This pattern was more pronounced in the Alt-A market, where 98 percent of the loans that were seriously delinquent as of December 31, 2009, were from the 2004 through 2007 cohorts. Similarly, 95 percent of the Alt-A loans that had completed the foreclosure process were from those cohorts. Also, within each market segment, the percentage of mortgages completing the foreclosure process generally increased for each successive loan cohort (see fig. 7). Within 3 years of loan origination, 5 percent of subprime loans originated in 2004 had completed the foreclosure process, compared with 8 percent of the 2005 cohort and 16 percent each of the 2006 and 2007 cohorts. Among Alt-A loans, 1 percent of the 2004 cohort had completed the foreclosure process within 3 years of origination, compared with 2 percent of the 2005 cohort, 8 percent of the 2006 cohort, and 13 percent of the 2007 cohort. This trend is partly attributable to a slowdown in house price appreciation, or an outright decline in house prices, in much of the country beginning in 2005 and worsening in subsequent years. This situation made it more difficult for some borrowers to sell or refinance their homes to avoid default or foreclosure. In addition, borrowers who purchased homes but came to owe more than the properties were worth had incentives to stop making mortgage payments to minimize their financial losses. The deterioration in loan performance for the successive cohorts may also reflect an increase in riskier loan and borrower characteristics over time, such as limited documentation of borrower income and higher ratios of debt to household income. The proportion of active nonprime loans that were seriously delinquent as of December 31, 2009, varied across the states. Four states—Florida, Illinois, Nevada, and New Jersey—had serious delinquency rates above 35 percent at the end of 2009. Seven states had serious delinquency rates between 30 and 35 percent; 9 states had serious delinquency rates between 25 and 30 percent; and 19 states had serious delinquency rates between 20 and 25 percent. The remaining 12 states had serious delinquency rates of less than 20 percent, including Wyoming’s rate of 15 percent, which was the lowest in the country. Detailed data on the performance of nonprime loans by cohort year and location, as well as by market segment and product type, are available in the electronic supplement to this report. House price changes and loan and borrower characteristics, such as loan amount, combined LTV (CLTV) ratio, and borrower credit score, were among the variables that we found influenced the likelihood of default on nonprime loans originated from 2004 through 2006, the peak years of nonprime mortgage lending.
In addition, nonprime loans that lacked full documentation of borrower income and assets were associated with increased default probabilities, and the influence of borrowers’ reported income varied by product type, loan purpose, and the level of documentation. For purchase loans in particular, borrower race and ethnicity were associated with the probability of default. However, these associations should be interpreted with caution because we lack data on factors—such as borrower wealth, first-time homebuyer status, and employment status—that may influence default rates and that may also be associated with race and ethnicity. Prior research has shown that various loan, borrower, and economic variables influence the performance of a mortgage. We developed a statistical model to examine the relationship between such variables and the probability of a loan defaulting within 24 months after the borrower’s first payment. We focused on the probability of a loan defaulting within 24 months as our measure of performance because a large proportion of nonprime borrowers had hybrid ARMs and prepaid their loans (e.g., by refinancing) within 2 years. For the purposes of this analysis, we defined a loan as being in default if it was delinquent by at least 90 days, in the foreclosure process (including loans identified as in real-estate-owned status), paid off after being 90 days delinquent or in foreclosure, or already terminated with evidence of a loss. We developed the statistical model using data on nonprime mortgages originated from 2004 through 2006. To include more information on borrower demographics (i.e., race, ethnicity, and reported income) than is available in the CoreLogic LP data, we matched CoreLogic LP records to HMDA records. Although we matched about three-quarters of the CoreLogic LP loans, and the loans that we could match were similar in important respects to the loans that we could not match, our estimation results may not be fully representative of the securitized portion of the nonprime market or the nonprime market as a whole. (See app. II for additional information on our matching methodology.) We produced separate estimates for the three most prevalent nonprime loan products: (1) short-term hybrid ARMs, representing 51 percent of nonprime loans originated during this period; (2) longer-term ARMs— those with interest rates that were fixed for 5, 7, or 10 years before adjusting (11 percent of originations); and (3) fixed-rate mortgages (27 percent of originations). For each product type, we produced separate estimates for purchase and refinance loans and for loans to owner- occupants and investors. Twenty-four months after the first loan payment, default rates were highest for short-term hybrid ARMs and, across product types, were generally higher for purchase loans than refinance loans. Appendix I provides additional information about our model and estimation results. Consistent with prior research, we found that lower rates of house price appreciation or declines in house prices were strongly associated with a higher likelihood of default for each product type and loan purpose. To illustrate the role of this variable, we estimated the default probability assuming house price changes that resembled the actual patterns in certain metropolitan areas, all else being equal. 
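Before turning to those illustrations, the general shape of such an analysis can be sketched briefly. The following Python sketch, which uses the statsmodels package, is illustrative only: the column names are hypothetical, the formula omits many of the controls in our actual specification, and, as described above, our models were estimated separately by product type, loan purpose, and occupancy status. The "all else being equal" comparisons reported in this section correspond to predicting default probabilities from a fitted model while changing one covariate and holding the others fixed.

    # Illustrative sketch only -- not the report's actual specification.
    # A logit model of default within 24 months of the first payment, fit to a
    # loan-level data frame with hypothetical column names.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_default_model(loans):
        """Fit a simple logit model; default_24m is 1 if the loan met the
        report's default definition within 24 months and 0 otherwise."""
        formula = ("default_24m ~ hpa_24m + emp_growth_24m + loan_amount"
                   " + cltv_ratio + credit_score + rate_spread + dti_ratio"
                   " + low_doc")
        return smf.logit(formula, data=loans).fit(disp=0)

    def predict_all_else_equal(model, baseline, **changes):
        """Predicted default probability for one borrower profile, changing the
        named covariates and holding everything else at the baseline values."""
        profile = {**baseline, **changes}
        return float(model.predict(pd.DataFrame([profile]))[0])

    # Example use, assuming `loans` and a `baseline` profile are defined:
    # model = fit_default_model(loans)
    # p_boom = predict_all_else_equal(model, baseline, hpa_24m=0.45)
    # p_bust = predict_all_else_equal(model, baseline, hpa_24m=-0.10)

The house price scenarios discussed next are comparisons of this kind: the same fitted relationship evaluated under different assumed paths of house prices, with the remaining covariates held constant.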
For example, for short-term hybrid ARMs used for home purchases, house price appreciation of 25 percent in the 1st year of the loan and then 20 percent in the 2nd year was associated with about a 5 percent estimated default probability, all else being equal (see fig. 8). Assuming instead that house prices stayed about level in the 1st year of the loan and then dropped by about 10 percent in the 2nd year, the estimated default probability for short-term hybrid ARM purchase loans increased by about 26 percentage points, to 31 percent. These two scenarios approximate the actual house price changes in Los Angeles beginning in early 2004 and mid-2005, respectively, and are emblematic of a number of markets in which a period of substantial house price growth was followed by a period of decline. Assuming that house prices rose by a modest 2 percent per year—approximating the pattern in a number of midwestern markets—the estimated default probability was about 22 percent. As shown in figure 8, the influence of house price changes on estimated default probabilities was greater for short-term hybrid ARMs than for other mortgage products. House price changes may also reflect broader economic trends, thereby affecting the precision of estimated impacts of other broad economic variables, such as employment growth, on mortgage defaults. In our model, we included a variable for state-level employment growth and noted that the variable was positively correlated with the variable for house price changes. With that in mind, we found that for purchase and refinance loans of all product types, lower rates of employment growth were associated with somewhat higher estimated default probabilities. For example, for short-term hybrid ARM purchase loans, moving from a 4 percent employment growth rate over 24 months to a zero percent employment growth rate was associated with about a 1 percentage point increase in estimated default probabilities. For each of the other product types and loan purposes, the corresponding change was between 1 and 2 percentage points. In general, we found that higher loan amounts, higher CLTV ratios, and lower credit scores also were strongly associated with higher likelihoods of default. For example: Loan amount: For each product type and loan purpose, we estimated the default probability assuming a loan amount near the 25th percentile for that product and purpose and compared this with the estimated default probability assuming a loan amount near the 75th percentile for that product and purpose. For short-term hybrid ARMs used for home purchases, moving from a loan amount of $125,000 to $300,000 was associated with a 6 percentage point increase in estimated default probability, all else being equal (see fig. 9). A similar pattern held across product types, with a larger effect for purchase loans than refinance loans. CLTV ratio: For each product type and loan purpose, we estimated the default probability assuming a CLTV ratio close to the 25th percentile for that product and purpose and compared this with the estimated default probability assuming a CLTV ratio close to the 75th percentile for that product and purpose. For short-term hybrid ARMs used for home purchases, moving from a CLTV ratio between 80 and 90 percent to a CLTV ratio of 100 percent or more was associated with a 10 percentage point increase in estimated default probability, all else being equal (see fig. 9).
For short-term hybrid ARMs used for refinancing, moving from a CLTV ratio of less than 80 percent to a CLTV ratio of 90 percent was associated with a 7 percentage point increase in estimated default probability. For the other product types, the effects of increasing the CLTV ratio were smaller for both purchase and refinance loans. Borrower credit score: For each product type and loan purpose, we estimated the default probability assuming a borrower credit score near the 75th percentile for that product and purpose and compared this with the estimated default probability assuming a credit score near the 25th percentile for that product and purpose. For short-term hybrid ARMs used for home purchases, moving from the higher credit score to the lower one was associated with a 10 percentage point increase in estimated default probability, all else being equal (see fig. 9). For the other product types (whether for home purchase or refinancing), the effects were smaller. We also found that the difference between the loan’s initial interest rate and the relevant interest rate index (interest rate spread) had a significant influence on estimated default probabilities, which is generally consistent with other economic research showing a positive relationship between interest rates and default probabilities for nonprime mortgages. Across product types and loan purposes, the interest rate spread had a statistically significant influence on estimated default probabilities. For example, for short-term hybrid ARMs, moving from a spread of 3.0 percent (near the 25th percentile for that product) to a spread of 4.5 percent (near the 75th percentile) was associated with about a 4 percentage point increase in default probability for purchase and refinance loans, all other things being equal. We also estimated the effect of the debt-service-to-income (DTI) ratio at origination and found that for all product types, this variable did not have a strong influence on the probability of default within 24 months. This relatively weak association, based on the DTI ratio at origination, could differ from the impact of changes to the DTI ratio after origination due, in part, to changes in borrower income or indebtedness. For example, a mortgage that is affordable to the borrower at origination may become less so if the borrower experiences a decline in income or takes on additional nonmortgage debt. Loans originated with limited documentation of borrowers’ income or assets became prevalent in the nonprime mortgage market, particularly in the Alt-A market segment. We found that documentation of borrower income and assets influenced the probability of default of nonprime loans originated from 2004 through 2006. For purchase and refinance loans of all product types, limited documentation of income and assets was associated with a 1 to 3 percentage point increase in the estimated probability of default, all other things being equal. Our results are generally consistent with prior research showing an association between a lack of documentation and higher default probabilities. Because our data indicated that borrowers with full documentation loans had different reported risk characteristics (e.g., credit score, CLTV ratio, and reported income) than borrowers with limited documentation loans, we more closely explored the relationship between documentation level and default for short-term hybrid ARMs (the most common nonprime product), taking these differences into account.
On average, short-term hybrid ARM purchase loans with limited documentation went to borrowers with higher credit scores, higher reported incomes, and somewhat lower CLTV ratios, compared with borrowers who had full documentation loans. To account for these differences, we estimated default probabilities separately for borrowers with full and limited documentation loans, using the mean credit score, reported income, and CLTV ratio values specific to each group. Using this method, the expected default probability for the limited documentation group was 3 percentage points lower than for the full documentation group, reflecting their better reported risk characteristics. However, in reality, borrowers with limited documentation loans had a 5 percentage point higher default rate than borrowers with full documentation loans. The differences between the estimated and actual default probabilities for these borrowers suggest that the reported risk characteristics—particularly income—may be misstated, or that other unobserved factors may be associated with the use of the limited documentation feature. For example, mortgage originators or borrowers may have used the limited documentation feature in some cases to overstate the financial resources of borrowers and qualify them for larger, potentially unaffordable loans. In addition, borrowers who used the feature could have experienced decreases in their income after loan origination, thereby making it more difficult for them to stay current on their payments. We also found that the influence of borrowers’ reported income varied by product type and loan purpose and, in some cases, depended on whether the loan had full documentation. For example, for short-term hybrid ARMs used for home purchases and refinancing, moving from $60,000 to $100,000 in reported income was associated with a 1 percentage point decrease in the estimated default probability for loans with full documentation, all else being equal (see fig. 10). However, for loans with limited documentation, the same change in reported income was associated with a slight increase (0.2 percentage points) in estimated default probability for purchase loans and a small decrease (0.5 percentage points) for refinance loans. For fixed-rate mortgages used for purchase and refinancing, moving from $60,000 to $100,000 in reported income was associated with small decreases in estimated default probabilities for both full and limited documentation loans, although the decreases were slightly smaller for loans with limited documentation. For longer-term ARMs, moving from the lower to the higher income level generally did not affect the estimated default probabilities for purchase or refinance loans, regardless of the level of documentation. Some researchers and market observers have noted that the foreclosure crisis has hit minority borrowers particularly hard. We found that, for certain product types and loan purposes, reported race and ethnicity were associated with the probability of default for nonprime mortgages. Not controlling for other variables, black or African-American borrowers had higher 24-month default rates across product types than white borrowers, especially for purchase loans. For example, for short-term hybrid ARMs, black or African-American borrowers had about a 12 percentage point higher default rate than white borrowers for purchase loans and about a 2 percentage point higher default rate for refinance loans (see fig. 11).
Additionally, Hispanic or Latino borrowers (of all races) generally had higher default rates than (non-Hispanic) white borrowers. For example, Hispanic or Latino borrowers had about an 8 percentage point higher default rate than white borrowers for short-term hybrid ARM purchase loans and about a 2 percentage point higher default rate for refinance loans. For fixed-rate refinance loans, however, Hispanic borrowers had essentially the same default rate as white borrowers. Various factors may help to explain some of the observed differences in the default rates between racial and ethnic groups. Across product types, black or African-American borrowers had lower average credit scores and reported incomes than white and Hispanic or Latino borrowers. Also, black or African-American borrowers generally were more likely than white borrowers to have CLTV ratios of 90 percent or more. For short-term hybrid ARMs and longer-term ARMs, black or African-American and Hispanic or Latino borrowers were less likely to have loans that originated in 2004, when house price appreciation was still strong in many parts of the country. In addition, Hispanic or Latino borrowers had a higher incidence of limited documentation loans and were concentrated in California, where house price declines in a number of areas were particularly severe. Controlling for these variations, we found that the differences in estimated default probabilities by racial and ethnic group were still significant but considerably smaller than the actual observed differences (i.e., the differences without the statistical controls in place). Taking short-term hybrid ARMs used for home purchases as an example, when we estimated default probabilities by racial and ethnic group holding the other variables in our model at the mean values for each group, we found that the estimated default probability for black or African-American borrowers was about 7 percentage points higher than for white borrowers, compared with the observed 12 percentage point difference that we have previously discussed (see fig. 12). Using the same assumptions, the corresponding default probability for Hispanic or Latino borrowers was about 4 percentage points higher than for white borrowers. For short-term hybrid ARMs used for refinancing, black or African-American borrowers had only about a 1 percentage point higher estimated default probability than white borrowers, while Hispanic or Latino borrowers had about the same estimated default probability as white borrowers. Inferences drawn from these statistical results should be viewed with caution because we lack data for variables that may help to explain the remaining differences in estimated default probabilities between borrowers of different racial and ethnic groups. Unobserved factors that may influence the likelihood of default may also be associated with race and ethnicity. For example: First-time homebuyer: We could not determine which nonprime borrowers were first-time homebuyers, but other evidence suggests that members of minority groups are disproportionately first-time homebuyers. To the extent that black or African-American and Hispanic or Latino borrowers with purchase loans were disproportionately first-time homebuyers, their higher estimated default probabilities may partly reflect limited experience with the risks and costs of homeownership.
As shown in figure 12, we found that the differences in estimated default rates between racial and ethnic groups were much smaller for nonprime refinance loans—which, by definition, exclude first-time homebuyers— than they were for purchase loans. Employment status: We did not have data on the employment status of nonprime borrowers, but unemployment rates are generally higher for black or African-American and Hispanic or Latino workers than for white workers. The higher estimated default probabilities that we found for black or African-American and Hispanic or Latino borrowers may reflect that nonprime borrowers from minority groups were disproportionately affected by unemployment in recent years. Wealth: Although we obtained data on reported income by matching CoreLogic LP and HMDA records, we did not have information on nonprime borrowers’ savings or other assets, which may affect their ability to keep up with their mortgage payments if faced with job loss or other unexpected changes in income or expenses. However, according to the Survey of Consumer Finances, nonwhite and Hispanic families generally are less likely to save or hold financial assets than non-Hispanic white families. Furthermore, the median value of assets for nonwhite and Hispanic families having financial assets is dramatically less than for non- Hispanic white families. Origination channel or lender steering to higher-cost or riskier loans: We did not have data on whether the nonprime loans were originated by mortgage brokers (intermediaries between borrowers and lenders) or directly by a lender’s retail branch, or how the loans were marketed to the borrowers. Some evidence suggests that broker-originated loans were associated with higher default rates and that, at least in some markets, minority families were more likely to access the mortgage market through brokers rather than through retail lenders. In addition, some researchers and market observers have raised concerns that some nonprime loan originators used questionable marketing tactics in lower-income and minority neighborhoods. Such practices may have led borrowers to take out higher-cost or riskier loans than necessary, which may have increased their probability of default. Mortgage market participants, financial regulators, investors, and public policy analysts use mortgage data for a variety of purposes. Some of the broad uses of such data include monitoring and modeling the performance of mortgages and mortgage-backed securities, assessing the soundness of financial institutions with mortgage-related holdings, and examining fair lending and consumer protection issues. For example, in a 2009 report, we used loan-level mortgage data to assess the implications of proposed mortgage reform legislation on consumer protections and on the availability of mortgage credit. Existing sources of data on nonprime mortgages contain a range of information to support these different uses. Loan-level data with broad national coverage of the nonprime market segment are available from several sources: four mortgage databases (three maintained by private firms and one by the federal government) and two major credit reporting agencies. For comparison, we also reviewed information on a HUD database of FHA-insured mortgages, because the borrower populations served by FHA and the nonprime market earlier in the decade had some similarities (e.g., relatively low credit scores) and the database is rich in detail. 
CoreLogic LP Asset-Backed Securities (ABS) Database: A private sector database of nonprime loans that contains information on nonagency securitized mortgages in subprime and Alt-A pools. The data are supplied by a number of different parties, including loan servicers; broker-dealers; and security issuers, trustees, and administrators. CoreLogic LP Loan Level Servicing (LLS) database: A private sector database of prime, nonprime, and government-guaranteed mortgages that contains data supplied by participating loan servicers. The mortgages include loans in agency and nonagency securitizations and loans held in lenders’ portfolios. Lender Processing Services (LPS) Loan Level Data: Similar to the LLS database, this private sector database contains data supplied by participating loan servicers on prime, nonprime, and government- guaranteed mortgages, including loans in agency and nonagency securitization and loans held in lenders’ portfolios. Consumer credit file data: Two national credit reporting agencies—both private firms—provide anonymous data from consumer credit files that include information on prime, nonprime, and government-guaranteed mortgages. FFIEC HMDA data: A federal government database that contains information reported by lenders on about 80 percent of all mortgages funded each year, including nonprime loans. HUD Single Family Data Warehouse (SFDW): A federal government database with information on mortgages insured by FHA. Among the data sources that include nonprime mortgages, the private databases and extracts of credit file data can be licensed or purchased for a fee. Recent HMDA data can be acquired at no charge. Some of these data may be subject to use restrictions determined by the provider. The private companies and credit reporting agencies update data on a daily or monthly basis and provide the updated data to users within 1 month or upon request. HMDA data are updated annually with a lag of 9 months. While these data sources currently offer some similar data elements, the sources vary in their coverage of loan, property, and borrower attributes. In part, this variation reflects the different primary purposes of the data sets. For example, the HMDA database is intended to provide the public with loan data that can assist in identifying potential risks for discriminatory patterns to help enforce antidiscrimination laws and evaluate bank community reinvestment initiatives. Accordingly, the HMDA data provide relatively detailed information about mortgage borrowers but no information about the performance of the loans. By contrast, the CoreLogic LP and LPS databases offer performance data to support the benchmarking and analysis of loans or mortgage-backed securities. Figure 13 presents some of the available data elements, with a focus on data that may assist in evaluating the probability of mortgage default and differences in mortgage outcomes across demographic groups. All of the nonprime data sources report on loan amount. The sources vary in their coverage of other loan attributes, such as mortgage type and performance status. All of the nonprime data sources report the property location at the ZIP code or Census-tract level, while coverage of other property attributes, such as property type and appraised value, varies. In the category of borrower attributes, all but one of the nonprime data sources provide borrower credit score at loan origination and owner-occupancy status. 
Among the nonprime data sources, only the HMDA data and credit reporting agency data provide additional demographic information on borrowers. Several other sources of mortgage data provide useful information about the mortgage market, including nonprime loans, but do not provide loan-level detail or, in some cases, lack broad national data coverage. For example, the Mortgage Bankers Association’s National Delinquency Survey provides quarterly summary statistics on the performance of the overall mortgage market and different market segments, including subprime loans. RealtyTrac offers data on the number of properties in some stage of the foreclosure process but not data on all active loans. Additionally, federal banking regulators and the government-sponsored enterprises produce free or comparatively low-cost data that are typically aggregated and only cover mortgages within their regulatory jurisdiction. Although the selected data sources that include nonprime mortgages contain important loan, property, and borrower characteristics, the sources have a number of constraints. First, the data sources generally lack information on certain attributes that could help inform policy decisions or regulatory efforts to mitigate risk, including the following: Loan attributes: Although three of the five nonprime data sources provide information on the initial interest rates of the mortgages (and, in some cases, how those interest rates can change over the life of the loan), they do not provide information on other mortgage costs, such as points and fees paid at loan closing. For example, one study that found no evidence of adverse pricing of subprime loans by race, ethnicity, or gender noted that an important caveat to the analysis was the lack of data on points and fees. Consequently, data users have limited ability to evaluate the influence of loan costs on default probabilities or to examine fair lending concerns regarding loan pricing. In addition, while the CoreLogic LP LLS and LPS Loan Level Data databases indicate whether a mortgage was originated by a broker or directly by a lender’s retail branch, the other data sources do not. As we have previously noted, some research has suggested associations between origination channel and mortgage performance. Borrower attributes: A number of borrower characteristics that may be associated with default risk generally do not appear in the nonprime data sources we reviewed. For example, first-time homebuyers are not directly identified in any of the nonprime data sources, limiting the ability of analysts to compare the marginal effect of prior homeownership experience on default probabilities. (By comparison, SFDW identifies first-time homebuyers with FHA-insured mortgages and contains data on loan performance.) In addition, none of the nonprime data sources contain information on borrower wealth (savings and other assets), a factor that could affect a borrower’s ability to continue making mortgage payments in times of economic stress. With the exception of the credit reporting agencies, the data sources also do not always directly provide information on the amount of borrowers’ other mortgage debt (second liens), which may constrain accurate assessment of the relationship between home equity and default.
Similarly, data on nonmortgage credit obligations are unavailable, except from the credit reporting agencies, which may limit researchers’ understanding of how borrowers’ total debt burden affects the mortgages they obtain and their ability to meet mortgage obligations. Also, the data sources lack information on borrower life events that may influence the probability of mortgage default, such as job loss or divorce. A second type of constraint is that analysts may not be able to generalize their results to the entire nonprime market because certain data sources do not cover all segments of the market and some mortgage originators, securitizers, or servicers do not contribute information. For example, the CoreLogic LP ABS database contains information on a large majority of nonprime mortgages that were securitized but not those that lenders hold in their portfolios. As we have previously noted, researchers have found that nonprime mortgages that were not securitized may have less risky characteristics than those that were securitized. Private sector databases that contain information on both securitized and nonsecuritized mortgages (CoreLogic LP LLS and LPS Loan Level Data) cover the majority of the market but do not provide complete market coverage because not all servicers contribute information to the databases. Similarly, because mortgage originators located outside of metropolitan areas are not required to report their loan information, the HMDA data do not capture many mortgages made in rural areas. By contrast, the credit reporting agencies have broader market coverage but lack data on key mortgage attributes, such as loan type and purpose. The third constraint we identified is that the existing nonprime data sources cannot readily be combined to create a single database with a more comprehensive set of variables. Merging data sources enables researchers to more thoroughly analyze lending patterns and factors influencing loan performance. However, due to competition and privacy concerns, the selected data sources either elect not to provide or are restricted from providing certain key fields that could be used to merge databases, such as the property address. For example, to match loan records in the CoreLogic LP ABS database and HMDA data, we relied in part on loan origination date fields that are not publicly released due to privacy concerns. Even with the origination date fields, we could not match all of the CoreLogic LP records to HMDA records. Finally, a user of existing data sources may have the ability to track some specific loans over time but may not easily track a specific borrower or property. Tracking a specific borrower or property over time would provide insights into mortgage outcomes throughout a homeownership experience, even if a borrower refinances into a new mortgage. Ongoing federal efforts could provide data on the entire mortgage market that potentially would not have some of the constraints that we identified in the existing sources of mortgage data. First, officials from the Board of Governors of the Federal Reserve System (Federal Reserve Board) and Freddie Mac are collaborating on a pilot project to develop a publicly available National Mortgage Database (NMDB). The officials are exploring the feasibility of developing a federally funded, loan-level, and representative database of first-lien mortgages designed to address mortgage-related policy, finance, and business concerns. 
NMDB would compile data on a representative sample of outstanding mortgages from a national credit reporting agency, supplement those data by matching records to existing mortgage databases (such as the HMDA data), and obtain data unavailable in any existing databases through a survey of borrowers. Since NMDB would include data from a variety of sources, it would provide more comprehensive data on the first-lien mortgage market than are currently available. If implemented, the combined database would contain loan-level information on (1) mortgage terms; (2) mortgage performance from origination to termination; (3) borrowers’ other credit circumstances over the life of the loan; (4) borrower demographics; and (5) other borrower attributes, such as key life events and shopping behavior. Second, the Dodd-Frank Wall Street Reform and Consumer Protection Act provides for additional compilation of HMDA data, such as borrower age and credit score, loan origination channel, and—as the Bureau of Consumer Financial Protection deems appropriate—a unique identifier for the loan originator and a universal loan identifier. Additionally, the act includes the creation of a publicly available Default and Foreclosure database that would include Census tract-level information on the number and percentage of mortgages delinquent for more than 30 and 90 days, real-estate-owned properties, mortgages in the foreclosure process, mortgages with negative equity, and other information. If implemented, the universal loan identifier could facilitate matching among mortgage databases, and the HMDA data would become more comprehensive. The growth of the nonprime market earlier in this decade was accompanied by a shift toward increasingly risky mortgage products. Nonprime loans provided homeownership and refinancing opportunities that may have benefited many households. However, many nonprime loans had features or were underwritten to standards that made them vulnerable to default and foreclosure, particularly in recent years when house prices began to stagnate and decline and economic conditions eroded more broadly. As a result, millions of nonprime borrowers have lost their homes or are in danger of doing so. These issues have particular salience for minority borrowers, who have experienced particularly high default rates. The persistently weak performance of nonprime mortgages suggests that loan performance problems in the nonprime market will not be resolved quickly, and underscores the importance of federal efforts to assist distressed borrowers and prevent a recurrence of the aggressive lending practices that helped precipitate the foreclosure crisis. As lawmakers seek to reform mortgage lending practices, they will need to consider how their efforts may affect consumer protections, the availability of mortgage credit, and progress toward the goal of sustainable homeownership. Data on the performance of nonprime loans and on the borrowing and lending practices associated with them can help analysts and policymakers assess the potential effects of proposed reforms and evaluate the results of their implementation. Although extensive data are available on nonprime loans, no one data source is comprehensive. Existing data sources can be combined with effort, but even then certain data that could inform understanding of the nonprime market—such as total mortgage costs and first-time homebuyer status—are not readily available. 
Having access to a more comprehensive set of data might have enhanced the ability of researchers, regulators, and investors to monitor lending practices, evaluate mortgage performance, and assess the mortgage outcomes for different groups of borrowers. Ongoing federal efforts, including the NMDB pilot project, may improve the quality and availability of mortgage market data going forward. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and other interested parties. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This appendix describes the econometric model we developed to examine the relationship between variables representing loan attributes, borrower characteristics, and economic conditions and the probability of a nonprime loan entering default within 24 months after the first loan payment. Certain loan attributes and borrower characteristics have been associated with a higher risk of mortgage default. For example, lower down payments, lower borrower credit scores, and limited documentation of borrowers’ income and assets have been cited as increasing the risk of default. Economic conditions, such as house price changes, have also been associated with default risk. Since minority borrowers have accounted for a larger share of the nonprime mortgage market than the mortgage market as a whole, associations between race and ethnicity and nonprime mortgage performance also are of interest. However, data limitations have complicated efforts to analyze the demographic characteristics of nonprime borrowers, such as race, ethnicity, and income. Existing data sets either provide detailed information about nonprime loans but limited information about the borrowers (e.g., CoreLogic LoanPerformance (CoreLogic LP) data) or provide more extensive information about borrowers but not about loan performance over time (e.g., Home Mortgage Disclosure Act (HMDA) data). To include information on the demographic characteristics of nonprime borrowers in our model, we matched records in the CoreLogic LP data to HMDA records. For securitized first-lien nonprime loans originated from 2004 through 2006, we achieved a match rate of approximately 73 percent, representing about 6.9 million records. (App. II contains a more detailed discussion of our methodology.) Of all the CoreLogic LP records that we matched to HMDA records, we used those for which the associated property was located in an area covered by the Federal Housing Finance Agency’s (FHFA) house price indexes (HPI) for metropolitan areas, approximately 92 percent of loans. Based on each associated property’s state and Census tract, we also incorporated employment data from the Bureau of Labor Statistics (BLS) and data from the 2000 Census to control for various economic conditions and neighborhood characteristics. For each loan, we determined the performance status 24 months after the month of the first payment. 
We defined a loan as being in default if it was delinquent by at least 90 days, in the foreclosure process (including loans identified as in real-estate-owned status), paid off after being 90 days delinquent or in foreclosure, or already terminated with evidence of a loss. We separately analyzed the three most prevalent types of nonprime loans: short-term hybrid adjustable-rate mortgages (ARMs) (ARMs with initial 2- or 3-year fixed-rate periods followed by frequent interest rate adjustments); fixed-rate mortgages; and other longer-term ARMs (ARMs with initial 5-, 7-, or 10-year fixed-rate periods). For each product type, we estimated default probabilities for purchase money loans separately from loans for refinance, and for each product type and loan purpose, we examined separately loans made to owner-occupants and investors. Our primary reason for examining performance by product type, loan purpose, and occupancy status is that borrower incentives and motivations may vary for loans with different characteristics and purposes. For example, because of their early, frequent, and upward interest rate adjustments, short-term hybrid ARMs provide a stronger incentive for a borrower to exit earlier from a mortgage as compared with fixed-rate mortgages or longer-term ARMs. Also, an investor may not react the same way as an owner-occupant when facing similar economic circumstances. We estimated separate default models for each mortgage product type, although the general underlying structure of the models was similar. We used a logistic regression model to explain the probability of loan default, based on the observed pattern of actual defaults and the values of variables representing loan attributes, borrower characteristics, and economic conditions (see table 1). Some variables describe conditions at the time of mortgage origination, such as the loan-to-value (LTV) ratio, the borrower’s credit score, and the borrower’s reported income. Other factors influencing loan performance vary over time in ways that can be observed, or at least approximated. For example, greater house price appreciation (HPA) contributes to greater housing equity, thus reducing the probability that a borrower, if facing financial distress, views defaulting on a loan as a better option than prepaying. More generally, greater house price appreciation creates equity that may induce a borrower to prepay, which eliminates any default risk that would remain if the loan were active. Some potentially significant determinants of mortgage default, such as job loss or illness, are not available for inclusion in our model. In addition, we lack data on certain factors—such as borrower wealth and first-time homebuyer status—that could be especially relevant to explaining actual loan performance. Tables 2 through 4 provide information on the number of loans and mean values for each of the product types for which we estimated default probabilities. Short-term hybrid ARMs were the most prevalent type of mortgage, and purchase loans were more prevalent than refinance loans, except among fixed-rate mortgages. Default rates were highest for short-term hybrid ARMs and generally higher for purchase loans than for refinance loans, except for fixed-rate and longer-term ARM loans to investors. The results of our analysis are presented in tables 5 through 8.
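To make the structure of these models concrete, the following is a minimal sketch, not the code used in this analysis, of a loan-level logistic default regression and of the one-standard-deviation marginal-effect transformation applied to its coefficients (discussed further in the next paragraph). Column names such as "default_24m", "cltv", and "hpa_24m" are hypothetical placeholders for the variables listed in table 1.

```python
# Illustrative only: a loan-level logistic default model and a one-standard-
# deviation marginal effect, using hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["cltv", "fico", "dti", "log_loan_amount", "low_doc",
              "rate_spread", "hpa_24m", "unemployment_change"]

def estimate_default_model(loans: pd.DataFrame):
    """Fit a logistic regression of a 24-month default indicator on loan,
    borrower, and economic variables for one product/purpose/occupancy group."""
    y = loans["default_24m"]                 # 1 if the loan met the default definition
    X = sm.add_constant(loans[PREDICTORS])   # intercept plus the predictors above
    return sm.Logit(y, X).fit(disp=False)

def one_sd_marginal_effect(results, loans: pd.DataFrame, variable: str) -> float:
    """Change in the predicted default probability when `variable` moves from
    its mean to its mean plus one standard deviation, holding all other
    predictors at their means."""
    means = loans[PREDICTORS].mean().to_numpy()
    base = np.concatenate(([1.0], means))    # leading 1.0 matches the constant term
    shifted = base.copy()
    shifted[1 + PREDICTORS.index(variable)] += loans[variable].std()
    p_base, p_shift = results.predict(np.vstack([base, shifted]))
    return float(p_shift - p_base)

# Example usage for one of the separate regressions:
# subset = loans.query("product == 'short_term_hybrid_arm' and purpose == 'purchase' "
#                      "and occupancy == 'owner'")
# results = estimate_default_model(subset)
# print(one_sd_marginal_effect(results, subset, "cltv"))
```

In practice, one such model would be estimated separately for each combination of product type, loan purpose, and occupancy status, mirroring the regressions reported in tables 5 through 8.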
We ran 12 regressions: separate owner-occupant and investor regressions for purchase and refinance loans of three product types (short-term hybrid ARMs, fixed-rate mortgages, and longer-term ARMs). For short-term hybrid ARMs, the most prevalent product type, we present the results for purchase and refinance loans to owner-occupants (table 5) and investors (table 6). For the other product types, we present the results for purchase and refinance loans to owner-occupants only (tables 7 and 8); the results for investors were substantively similar. We present coefficient estimates as well as a transformation of the coefficients into a form that can be interpreted as the marginal effect of each variable on the estimated probability of default. This marginal effect is calculated as the change in the estimated probability of default that would result if one standard deviation were added to a variable’s mean value, while all other variables are held at their mean values. This permits a comparison of the impact of different variables within and across product types. In general, HPA, loan amount, combined loan-to-value (CLTV) ratio, and FICO score had substantial marginal effects across different product types and loan purposes. Specifically, lower HPA, higher loan amount, higher CLTV ratio, and lower FICO scores were associated with higher likelihoods of default. The observed effects for DTI ratio were smaller. Documentation of borrower income and assets and a loan’s interest rate spread over the applicable Treasury rate had substantial marginal effects. Limited documentation and higher interest rate spreads were associated with higher default probabilities. To describe the race, ethnicity, and reported income of nonprime borrowers, we matched loan-level records from two primary data sources—CoreLogic LoanPerformance’s (CoreLogic LP) Asset-Backed Securities Database and Home Mortgage Disclosure Act (HMDA) data compiled by the Federal Financial Institutions Examination Council (FFIEC). The CoreLogic LP database provides extensive information about the characteristics and performance of securitized nonprime mortgages. However, it contains relatively little information about borrowers, providing only credit scores and debt-service-to-income ratios. In contrast, HMDA data contain limited information about loan characteristics and nothing about performance, but they do provide information on borrowers’ race, ethnicity, and reported income. HMDA data are estimated to capture about 80 percent of the mortgages funded each year and cover all major market segments, including nonprime loans. HMDA data, therefore, should capture most of the loans in the CoreLogic LP database. While the CoreLogic LP and HMDA data emphasize different kinds of loan and borrower information, they do have some information in common. These common data items—including loan amount, loan purpose, loan origination date, property location, and loan originator—allow the two data sets to be matched on a loan-by-loan basis. Using the methodology that we developed in previous work, we matched records from the CoreLogic LP database for loans that were originated from 2004 through 2006 to HMDA data files for those same years. We focused on loan originations from this period because there were large numbers of nonprime originations in those years. The CoreLogic LP data set that we used for the matching process contained records for 9,292,684 loans.
The data set included records for conventional first-lien purchase and refinance loans to owner-occupants, investors, and owners of second homes. The data excluded records for loans for units in multifamily structures, and for manufactured housing; loans in Guam, Puerto Rico, and the Virgin Islands; and loans with terms other than 15, 30, or 40 years. The HMDA data set that we used for the matching process contained records for 24,227,566 loans. As with the CoreLogic LP data, we focused on first-lien purchase and refinance loans. The HMDA data set excluded loans for properties other than one- to four-family residential units. Because the CoreLogic LP database contained only conventional loans in private label securitizations, we also excluded from the HMDA data set loans that involved government programs—such as mortgages guaranteed by the Federal Housing Administration or the Department of Veterans Affairs—and conventional loans that were indicated as sold to Fannie Mae, Freddie Mac, Ginnie Mae, or Farmer Mac. Matching the loan records from the two data sources required us to make the common data items compatible. We were able to use a straightforward process for the loan amount and purpose that required only rounding the CoreLogic LP loan amount to the nearest $1,000 and aggregating the three CoreLogic LP refinance categories into one category. However, the process was more complicated for origination date and property location. We determined that the name of the loan originator was not particularly useful for making initial matches of loan records because this information was missing for a substantial percentage of the CoreLogic LP records. However, the originator’s name was useful in assessing the quality of the matches that we made using other data elements. About 15 percent of the loans in our CoreLogic LP data set had an origination date that was the 1st day of a month. This distribution pattern was inconsistent with the distribution of origination days in HMDA, which showed a much more even pattern throughout the month, with an increase in originations toward the end of each month rather than the beginning of each month. Because of this inconsistency, we relied on the origination month rather than the origination month and day to match loan records. The CoreLogic LP and HMDA data provided different geographic identifiers for loans, with the CoreLogic LP data providing the ZIP code and the HMDA data providing the Census tract. To facilitate record matching on the basis of property location, we related the Census tract information in the HMDA data to a corresponding ZIP code or ZIP codes in the CoreLogic LP data, using 2000 Census files and ZIP code boundary files from Pitney Bowes Business Insight. Using mapping software, we overlaid Census tract boundaries on ZIP code boundaries to determine the proportion of each Census tract’s area that fell within a given ZIP code area. For each Census tract, we kept all ZIP codes that accounted for at least 5 percent of that tract’s area. About 60 percent of the Census tracts were associated with only one ZIP code (meeting the 5 percent threshold), and almost all Census tracts (97.5 percent) included no more than four ZIP codes. When a Census tract was associated with only one ZIP code, all HMDA records in that Census tract were candidates to match CoreLogic LP records in that ZIP code. All HMDA records in tracts with more than one ZIP code were candidates to match CoreLogic LP records in those ZIP codes. 
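As an illustration of the area-based crosswalk described above, the sketch below applies the 5 percent threshold to a precomputed table of tract-ZIP overlap areas. The spatial intersection of boundary files itself (done with mapping software in the analysis) is not shown, and the column names are hypothetical.

```python
# Illustrative sketch of the tract-to-ZIP crosswalk step; assumes `overlaps` has
# one row per (tract, zip) pair with the intersected area already computed.
import pandas as pd

def build_tract_zip_crosswalk(overlaps: pd.DataFrame, threshold: float = 0.05) -> pd.DataFrame:
    """Keep, for each Census tract, every ZIP code covering at least `threshold`
    (here, 5 percent) of the tract's area."""
    tract_area = overlaps.groupby("tract")["overlap_area"].transform("sum")
    share = overlaps["overlap_area"] / tract_area
    return overlaps.loc[share >= threshold, ["tract", "zip"]].drop_duplicates()
```

Each HMDA record in a given tract then becomes a candidate match for CoreLogic LP records in any of that tract's retained ZIP codes.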
We matched loan records in the CoreLogic LP and HMDA data sets as follows. First, for each loan origination year (2004, 2005, and 2006), we made initial matches by identifying CoreLogic LP and HMDA loans with the same property location, origination month, loan amount, and loan purpose. After finding all possible HMDA matches for each CoreLogic LP record, we classified these initial matches as either one-to-one matches (CoreLogic LP records with one corresponding HMDA record), one-to-many matches (CoreLogic LP records with more than one corresponding HMDA record), or nonmatches (CoreLogic LP records with no corresponding HMDA record). One-to-one matches accounted for about 55 percent of the loans in our CoreLogic LP data set, one-to-many matches accounted for about 25 percent, and nonmatches accounted for about 15 percent. Our match rates were highest for 2004 originations, about 85 percent, and lowest for 2006 originations, about 82 percent. The quality of the matches was particularly important because we were examining statistical relationships between borrower characteristics and loan performance. To provide reasonable assurance that the matches were robust, we performed three types of quality checks on our initial one-to-one and one-to-many matches. First, we used information about the loan originator—information that was included in both the CoreLogic LP and HMDA data. The HMDA data clearly identified loan originators—referred to as “HMDA respondents”—using a series of codes that corresponded to a list of standardized originator names. However, in more than 40 percent of the CoreLogic LP records in our data set, the originator name was marked as not available. In other cases, the originator was listed by a generic term, such as “conduit,” or was an entity that appeared to be involved in the securitization process but was not necessarily the originator. Originators that were listed were often referred to in a number of ways—for example, “Taylor Bean,” “Taylor Bean Whitaker,” “Taylor, Bean & Whitaker,” “TaylorBean,” “TBW,” and “TBW Mortgage Corp.” all referred to the HMDA respondent “Taylor, Bean & Whitaker.” For CoreLogic LP loans with originator information, we standardized the originator names in the CoreLogic LP data, and we used these same originator names for the HMDA data. We compared the standardized originator names in matched records, and if the standardized names matched, we classified the match as a robust match and deleted any other HMDA records that might have matched to that CoreLogic LP record. Second, for CoreLogic LP loans with no originator name, we examined the relationship between the HMDA loan originator and the issuer of the securities associated with the loan. Many institutions, such as Countrywide and Ameriquest, originated and securitized large numbers of nonprime loans. While some of these institutions identified themselves as the originator of a loan, others typically did not make the originator information available. In these cases, if the CoreLogic LP securitizer matched the HMDA originator, we classified an initial match as a robust match. If the issuer did not originate substantial numbers of nonprime loans, or also relied on other originators to provide loans for its securitizations, we developed criteria to check for evidence of business relationships between the issuer and various originating institutions. This check had two components.
First, if within the CoreLogic LP data set we identified an originator-issuer combination, we defined that combination as a business relationship. Second, we considered combinations of originators from the HMDA data and issuers from the CoreLogic LP data. For an originator-issuer combination to be a business relationship, a combination had to appear at least 250 times in our set of initial one-to-one matches, or meet one of two additional criteria. Specifically, if the combination appeared at least 100 times, the originator must have made 10 percent of the issuer’s securitized loans, or if the combination appeared at least 50 times, the issuer had to have securitized 33 percent of the loans made by the originator. We classified initial matches for which such business relationships existed as robust matches. Additionally, if none of these tests resulted in a robust match, we examined the loan origination day in the CoreLogic LP and HMDA data sets. If the days matched exactly, we classified an initial match as a robust match. Finally, for some one-to-many matches that shared originator, issuer, or business relationship characteristics, we examined the CoreLogic LP and HMDA characterizations of whether the borrower was an owner-occupant. In some cases, we were able to classify an initial match as a robust match if CoreLogic LP and HMDA owner-occupant characteristics matched. Overall, we produced high-quality matches for about 73 percent of the records in our CoreLogic LP data set, including about 75 percent of the loans originated in 2004, about 73 percent of the loans originated in 2005, and about 72 percent of the loans originated in 2006 (see table 9). A potential concern with constructing a data set using a matching process is that records that do not match may differ systematically from records that do match, thereby making it difficult to make inferences from the matched data. However, we believe that the CoreLogic LP records that we were unable to match to HMDA records were similar in important respects to CoreLogic LP records that we could match. For example, loans in subprime pools represented 61.5 percent of the overall CoreLogic LP sample, and 62.3 percent of matched loans. Purchase loans represented 44.8 percent of the overall CoreLogic LP data set, and 46.0 percent of matched loans. In terms of geography, state shares of unmatched and matched loans were similar. Loans in California represented 23.1 percent of the full CoreLogic LP data set and 22.5 percent of matched records. Furthermore, nonprime borrowers with matched and unmatched records had similar FICO scores. For example, subprime borrowers with matched records had median FICO scores of 617, 620, and 617 for loans originated in 2004, 2005, and 2006, respectively; the corresponding scores for subprime borrowers with unmatched records were 617, 617, and 615. Likewise, Alt-A borrowers with matched records had median FICO scores of 708, 709, and 703 for 2004, 2005, and 2006, respectively; the corresponding scores for Alt-A borrowers with unmatched records were 706, 707, and 702. In addition, as shown in table 10, for each loan origination year and mortgage product type, median initial interest rates were identical or similar for borrowers with matched and unmatched records. 
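Two of the mechanical steps described above lend themselves to a compact illustration: the classification of initial matches as one-to-one, one-to-many, or nonmatches, and the frequency-based test for an originator-issuer business relationship. The sketch below is illustrative rather than the code used in the analysis; the column names are hypothetical, and the numeric thresholds from the text are interpreted as minimums.

```python
import pandas as pd

MATCH_KEYS = ["zip", "orig_month", "loan_amount_rounded", "purpose"]

def classify_initial_matches(lp: pd.DataFrame, hmda_candidates: pd.DataFrame) -> pd.Series:
    """Label each CoreLogic LP record by the number of HMDA candidates sharing
    its property location (via the tract-ZIP crosswalk), origination month,
    rounded loan amount, and loan purpose."""
    counts = (hmda_candidates.groupby(MATCH_KEYS).size()
              .rename("n_candidates").reset_index())
    n = lp.merge(counts, on=MATCH_KEYS, how="left")["n_candidates"].fillna(0)
    return pd.cut(n, bins=[-0.5, 0.5, 1.5, float("inf")],
                  labels=["nonmatch", "one-to-one", "one-to-many"])

def is_business_relationship(pair_count: int,
                             share_of_issuer_loans: float,
                             share_of_originator_loans: float) -> bool:
    """Frequency-based business-relationship test; shares are fractions
    (0.10 means 10 percent)."""
    return (pair_count >= 250
            or (pair_count >= 100 and share_of_issuer_loans >= 0.10)
            or (pair_count >= 50 and share_of_originator_loans >= 0.33))

# Example: a combination seen 120 times, where the originator supplied 12 percent
# of the issuer's securitized loans, would count as a business relationship.
# is_business_relationship(120, 0.12, 0.05)  # True
```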
In addition to the individual named above, Steve Westley (Assistant Director), William Bates, Jan Bauer, Stephen Brown, Julianne Dieterich, DuEwa Kamara, John McGrail, John Mingus, Marc Molino, Bob Pollard, and Jennifer Schwartz made key contributions to this report.

The surge in mortgage foreclosures that began in late 2006 and continues today was initially driven by deterioration in the performance of nonprime (subprime and Alt-A) loans. Nonprime mortgage originations increased dramatically from 2000 through 2006, rising from about 12 percent ($125 billion) of all mortgage originations to about 34 percent ($1 trillion). The nonprime market contracted sharply in mid-2007, partly in response to increasing defaults and foreclosures for these loans. This report (1) provides information on the performance of nonprime loans through December 31, 2009; (2) examines how loan and borrower characteristics and economic conditions influenced the likelihood of default (including foreclosure) of nonprime loans; and (3) describes the features and limitations of primary sources of data on nonprime loan performance and borrower characteristics, and discusses federal government efforts to improve the availability or use of such data. To do this work, GAO analyzed a proprietary database of securitized nonprime loans and Home Mortgage Disclosure Act data, and reviewed information on mortgage data sources maintained by private firms and the federal government. The number of active nonprime loans originated from 2000 through 2007 that were seriously delinquent (90 or more days late or in the foreclosure process) increased from 1.1 million at the end of 2008 to 1.4 million at the end of 2009. Serious delinquency rates were higher for certain adjustable-rate products common in the subprime and Alt-A market segments than they were for fixed-rate products. The number of nonprime loans that were 90 or more days late grew throughout 2009, accounting for most of the overall growth in the number of serious delinquencies. By comparison, the number of active loans in the foreclosure process grew in the first half of the year, and then began to decline somewhat. Additionally, 475,000 nonprime mortgages completed the foreclosure process during 2009. The persistently weak performance of nonprime loans suggests that problems in the nonprime market will not be resolved quickly, and underscores the importance of federal efforts to assist distressed borrowers and prevent a recurrence of the aggressive lending practices that helped precipitate the foreclosure crisis. In addition to performance differences between mortgage products, GAO found across product types that house price changes, loan amount, the ratio of the amount of the loan to the value of the home, and borrower credit score were among the variables that influenced the likelihood of default on nonprime loans originated from 2004 through 2006. In addition, loans that lacked full documentation of borrower income and assets were associated with increased default probabilities, and the influence of borrowers’ reported income varied with the level of documentation. GAO found that borrower race and ethnicity were associated with the probability of default, particularly for loans used to purchase rather than to refinance a home. However, these associations should be interpreted with caution because GAO lacks data on factors that may influence default rates and that may also be associated with race and ethnicity, such as borrower wealth and first-time homebuyer status.
Existing sources of data on nonprime mortgages contain a range of information to support different uses. While these data sources offer some similar elements, they vary in their coverage of loan, property, and borrower attributes. The data sources generally lack information on certain attributes that could help inform policy decisions or regulatory efforts to mitigate risk. For example, first-time homebuyers are not identified in any of the data sources, limiting the ability of analysts to compare the marginal effect of prior homeownership experience on default probabilities. In addition, most of the data sources do not cover the entire nonprime mortgage market. Ongoing federal efforts have the potential to provide data that may not have some of the constraints of the existing sources. For example, officials from the Board of Governors of the Federal Reserve System and Freddie Mac are collaborating on a pilot project to develop a publicly available National Mortgage Database, which would compile data on a representative sample of outstanding mortgages and provide more comprehensive data than are currently available. GAO makes no recommendations in this report.
SSA began as an independent agency with a mission of providing retirement benefits to the elderly. A three-member, independent Social Security Board was established in 1935 to administer the Social Security program. The Chairman of the Board reported directly to the President until July 1939, when the Board was placed under the newly established Federal Security Agency (FSA). At that time, the Social Security program was expanded to include Survivors Insurance, which paid monthly benefits to survivors of insured workers. In 1946, the Social Security Board was abolished, and its functions were transferred to the newly established SSA, which was still within FSA. The FSA administrator established the position of Commissioner to head SSA. The FSA was abolished in 1953, and its functions were transferred to the Department of Health, Education and Welfare (HEW). In addition, the position of SSA Commissioner was designated a presidential appointee position requiring Senate confirmation. The Social Security program was expanded in 1956 to include the DI program, providing benefits to covered workers who became unable to work because of disability. In 1965, amendments to the Social Security Act increased SSA’s scope and complexity by establishing the health insurance program known as Medicare. The purpose of Medicare was to help the qualified elderly and disabled pay their medical expenses. SSA administered the Medicare program for about 12 years before Medicare was transferred to a new division within HEW, the Health Care Financing Administration. Further amendments to the Social Security Act created the SSI program, effective in 1974. This program was designed to replace welfare programs for the aged, blind, and disabled administered by the states. The SSI program added substantially to SSA’s responsibilities because the agency began dealing directly with SSI clients by determining recipients’ eligibility based on income and assets. SSA remained a part of HHS (formerly HEW) from 1953 until its independence in 1995. Since 1984, congressional committees responsible for overseeing SSA’s activities had considered initiatives to make SSA an independent agency. Although the reasons for considering independence for SSA have varied over the years, such legislation was introduced in several sessions of the Congress. Statements by committee chairmen have shown a desire to make SSA more accountable to the public for its actions and more responsive to the Congress’ attempts to address SSA’s management and policy concerns. As noted earlier, legislation was enacted that made SSA independent as of March 31, 1995. At this time of heightened attention to the costs and effectiveness of all federal programs, the Congress and the administration have acted to promote a more efficient federal government that stresses managing for results and accountability. These efforts include the Chief Financial Officers Act of 1990 (CFO Act), the Government Performance and Results Act of 1993 (GPRA), and the Government Management Reform Act of 1994 (GMRA). In addition, the administration has undertaken, under the Vice President’s direction, the National Performance Review (NPR), aimed at making government work better and cost less. We strongly support these efforts as important and necessary steps to improving federal management. SSA has surpassed many other federal agencies in these areas. 
For example, as a pilot agency under GPRA, SSA has worked to strengthen its strategic management process and to identify and develop performance measures that help its managers, the Congress, and the public assess its performance. In addition, for several years now, it has measured satisfaction levels among some of its customers and used focus groups to understand its customers’ and employees’ views, reflecting the customer service focus promoted by NPR. SSA is also a leader among federal agencies in producing complete, accurate, and timely financial statements as required by the CFO Act and GMRA. For example, for fiscal year 1995, SSA issued audited financial statements 3 months before its legal mandate. Moreover, SSA was among the first federal agencies to produce an accountability report, which is designed to consolidate current reporting requirements under various laws and provide a comprehensive picture of an agency’s program performance and its financial condition. To be most effective, SSA’s ongoing efforts in strategic management, performance measurement, and accountability reporting will need to be continually improved and integrated into the agency’s daily operations and management. SSA has a foundation in place on which it can build to manage the significant policy and program challenges it faces in the future. As the baby boom generation ages, growing numbers of people will receive Social Security retirement and survivors benefits in the years to come, as shown in figure 1. By the year 2015—as baby boomers begin entering their mid-60s—the numbers of individuals receiving benefits will reach an estimated 50.4 million: more than one-third greater than the 37.4 million people receiving Social Security retirement and survivors benefits in 1995. Once on the rolls, retirees can be expected to receive benefits for longer time periods than past recipients. A 65-year-old male who began receiving Social Security benefits in 1940—the first year SSA began paying monthly benefits—was expected to live, on average, about an additional 12 years. By 2015, a 65-year-old male will have a life expectancy of 16 years—a 35-percent increase. During that same time period, the life expectancy for women aged 65 will increase by almost 50 percent—from an average of over 13 years to an average of nearly 20 years. Meanwhile, the ratio of contributing workers to beneficiaries will decline. By 2015, an estimated 2.6 workers will be paying taxes into the Social Security system per beneficiary; in 1950, 16.5 workers were paying Social Security taxes per beneficiary. This retirement explosion threatens the long-term solvency of the Social Security system. Beginning in 2012—16 years from now—program expenditures are projected to exceed tax income. By 2029, without corrective legislation, the trust funds are expected to be depleted, leaving insufficient funds to pay the expected level of retirement, survivors, and Disability Insurance (DI) benefits. Concerns about the long-term solvency of the Social Security system are fueling a public debate about the fundamental structure of this system. The Advisory Council on Social Security, for example, has put forth three different approaches to addressing the Social Security system’s long-term deficit. All three approaches call for some portion of Social Security payroll taxes to be invested in the stock market. Two of these approaches, however, call for allowing individuals to invest some portion of their payroll taxes in individual retirement accounts. 
This would be a significant departure from the original program design, in which all trust fund monies are invested and managed centrally. Given the magnitude of the financial problems facing the Social Security system and the nature of the proposals for changing the system, we can expect the debate over the financing and structure of the Social Security system to continue and intensify in the coming years. In our report on SSA’s transition to independence, we noted that the agency’s independence would heighten the need for it to work with the Congress in developing options for ensuring that revenues are adequate to make future Social Security benefit payments. More than a year after gaining independence, however, SSA is not yet ready to fully support policymakers in the current public debate on financing issues. SSA has been involved in financing issues through its Office of the Actuary, which has provided data and analyses to the Advisory Council and policymakers developing financing options. The Office of the Actuary plays a unique role within the agency because it serves both the Congress and the administration. SSA will also be providing assistance to the Social Security Advisory Board, which was established by the independence legislation to advise the Commissioner and make recommendations to the Congress and the President on SSA program policy. These supportive roles represent SSA’s major activities related to long-term financing issues. SSA has acknowledged that it has not undertaken the policy and research activities it needs to examine critical issues affecting its programs, including long-term financing, and to provide additional support to policymakers. The agency recognizes the need to be more active in these areas and in May 1996 took steps to reorganize and strengthen its policy analysis, research, and evaluation offices. It believes this reorganization will better position it to take a leadership role in critical policy and research issues related to its programs. At the time of our review, however, the reorganization had just begun, and the office responsible for coordinating all policy planning activities was only partially staffed. Although SSA did not have a specific time frame for when the reorganized policy office would be fully staffed and operational, it did expect to be better prepared to join the public debate over the next year. SSA is in a unique position to inform policymakers and the public about the critical nature of long-term financing issues. Focus groups conducted by SSA have demonstrated that the public’s knowledge of Social Security programs is generally low and the public’s confidence in the Social Security system is undermined by its future financing problems. To address these issues, SSA is conducting a public education campaign that discusses what the current system offers in disability, retirement, and survivors benefits. It also emphasizes that the Social Security system can pay benefits for many more years and that the Congress has time to act before the trust funds are depleted. SSA, however, is not discussing options for maintaining or changing the current system. Feedback SSA has received from its focus groups indicates that addressing the public’s lack of knowledge without also discussing possible options for ensuring the system’s future solvency does not instill confidence and weakens the agency’s credibility with the public. 
We are concerned that SSA has not seized the opportunity as an independent agency to speak out on the importance of addressing the long-term financing issues sooner rather than later. As we have noted in our previous work, the sooner action is taken to resolve the future funding shortfall, the smaller the changes to the system need to be and the more time individuals will have to adjust their financial and retirement plans. In recent years, disability caseloads have faced unprecedented growth. To manage this caseload growth and the resulting slow processing times, SSA plans to redesign and dramatically improve its disability claims process. However, SSA’s redesign effort has encountered serious implementation problems. Moreover, while SSA is taking steps to improve the process for moving eligible individuals onto the disability rolls more quickly, it has not sufficiently emphasized helping beneficiaries return to work and leave the disability rolls. During the past decade, SSA has faced significant increases in caseloads and expenditures for its two disability programs—DI and SSI. DI, enacted in 1956 and funded through payroll taxes, provides monthly cash benefits to severely disabled workers and their families; SSI was enacted in 1972 and provides assistance to needy individuals with insufficient work histories to qualify for DI. Unlike DI, SSI is funded through general revenues. DI and SSI caseloads and expenditures increased dramatically between 1986 and 1995, and the pace of this growth accelerated in the early 1990s. In 1986, 4.4 million blind and disabled people under age 65 received DI or SSI benefits; by 1995, this number had soared to 7.5 million—a 69-percent increase. As the number of DI and SSI beneficiaries increased, so did the amount paid in cash benefits. The combined DI and SSI cash benefits increased from $25 billion to $57 billion in 10 years. Adjusted for inflation, the increase in the value of these cash benefits was 66 percent. As these programs have grown, the characteristics of new beneficiaries have changed in ways that pose additional challenges for SSA. Beneficiaries are, on average, younger and more likely to have longer lasting impairments. Increases in beneficiaries with mental illness or mental retardation, especially, have driven this trend. Between 1982 and 1992, for example, mental impairment awards to younger workers increased by about 500 percent. This growing proportion of younger beneficiaries with longer lasting impairments means that the beneficiary population, on average, is likely to spend more time on the disability rolls. In 1992, for example, new DI awardees were, on average, 48 years old. Depending on the type of impairment that qualified them for benefits, these beneficiaries could spend nearly one-third of their adult lives on disability before reaching age 65. As more and more people have filed for disability benefits, SSA has been slow to process initial claims, and appeals and backlogs have grown. To manage the disability caseload growth, increase efficiency, and improve service to its customers, SSA has started a massive effort to fundamentally change how disability decisions are made. Making disability decisions is one of the agency’s most important tasks; it accounted for more than half of SSA’s total administrative budget—about $3 billion—in fiscal year 1995. Even so, claimants face long waits for disability decisions. 
As of June 1996, the wait for initial decisions averaged 78 days for DI claims and 94 days for SSI claims, with an additional 373-day wait for appealed decisions. Overall, the current disability claims process is not meeting the needs of claimants, the agency, or taxpayers. To deal with these problems, in 1993 SSA formed a team to fundamentally rethink and develop a proposal to redesign the disability claims process. This labor-intensive and paper-reliant process has changed little since the DI program began in the 1950s. Efforts like SSA’s—business process reengineering—have been used successfully by leading private-sector organizations to dramatically improve their operations. In April 1994, we informed the Congress that the agency’s redesign proposal was its first valid attempt to address the fundamental changes needed to cope with disability workloads. At that time, however, we also acknowledged that implementing this needed change would be difficult and that we would be monitoring SSA’s progress. During this past year, we have been reviewing various aspects of SSA’s redesign effort for this Subcommittee and have identified several implementation problems. SSA’s redesign plan includes 83 initiatives to be started during a 6-year period (1995-2000), with 40 of these to be completed or under way in the first 2 years. On the basis of our ongoing work, we have found that the scope and complexity of many initiatives have limited SSA’s progress toward its redesign goals. Although SSA has begun many of its planned initiatives, none is far enough along for SSA to know whether specific proposed process changes will achieve the desired results. Moreover, we are concerned that SSA has undertaken too many complex tasks and has not given sufficient priority to those redesign initiatives most likely to reduce processing times and administrative costs. Some of its planned initiatives require extensive design and years of development before full implementation can begin. For example, a key initiative of the redesign involves consolidating the duties, skills, and knowledge of at least two current positions into a new Disability Claim Manager (DCM) position. SSA plans to establish over 11,000 DCM positions in about 1,350 federal and state locations, recruiting these DCMs from its current workforce of federal claims representatives and state disability examiners. SSA is currently struggling to resolve stakeholder disagreements among representatives of federal and state employees about how to proceed with testing this new position. SSA must also develop training plans, conduct tests at pilot sites, post vacancy announcements for positions, and select and train DCMs. Developing software designed to allow SSA to move from its current manual, labor-intensive process to an automated process is critical to the success of SSA’s disability redesign. The scheduled implementation of this new software, however, has been delayed by more than 2 years. Moreover, although SSA has separate implementation schedules for its various redesign initiatives and for its systems development activities, these two schedules are not linked. In addition, although SSA has developed individual plans for its redesign initiatives and for its system development activities, it has not developed a comprehensive detailed plan that integrates these two efforts. 
Such a plan should reflect priorities, resource allocations needed, key milestones, and decision points and identify relationships among ongoing and planned process and systems changes. For example, SSA cannot effectively develop software to support its key DCM position until it has completed a pilot for this position and determined in more detail what its duties will be and what information will be needed by the new claims manager. Although SSA officials recognize the need to develop such a plan, in June 1996 they noted that the testing of process redesign features involved too many uncertainties for SSA to develop an integrated plan. Although SSA has focused on improving its processes for moving eligible claimants onto the disability rolls, it has placed little priority on helping them move off the rolls by obtaining employment. This spring, we reported that policies guiding SSA’s disability programs are out of sync with today’s view of the capabilities of individuals with disabilities. At one time, the common business practice was to encourage someone with a disability to leave the workforce. Today, however, a growing number of private companies have been focusing on enabling people with disabilities to return to work. In contrast, SSA’s programs lack a focus on providing the support and assistance that many people with disabilities need to return to work. Eligibility requirements, for example, focus on applicants’ inabilities, not their abilities; once on the rolls, beneficiaries receive little encouragement to use rehabilitation services. A greater emphasis on beneficiaries’ returning to work is needed to identify and encourage the productive capacities of those who might benefit from rehabilitation and employment assistance. Although the main reason for emphasizing returning to work is so that people maximize their productive potential, it is also true that an estimated $3 billion could be saved in subsequent years if only an additional 1 percent of the 6.6 million working-age people receiving disability benefits in 1995 were to leave the rolls by returning to work. SSA needs to develop a comprehensive return-to-work strategy that includes providing return-to-work assistance to applicants and beneficiaries and changing the structure of cash and medical benefits. As part of an effort to place greater priority on beneficiaries’ returning to work, we recommended that SSA identify legislative changes required to implement such a strategy. Although evaluating any SSA response to our recommendations would be premature, we will be assessing SSA’s efforts to help beneficiaries return to work. SSA has also missed opportunities to promote work among disabled beneficiaries where it has the legislative authority to do so. In 1972, the Congress created the Plan for Achieving Self-Support (PASS) program as part of SSI to help low-income individuals with disabilities return to work. However, SSA has not translated the Congress’ broad goals for the PASS work incentive into a coherent program design. We recently reported that SSA needs to improve PASS program management, and it has taken steps to better manage the program in accordance with our recommendations. Limiting opportunities for fraud, waste, and abuse in government programs is an important goal and essential to promoting public confidence in the government’s ability to wisely use taxpayer dollars. Moreover, problems in any one of the programs that SSA administers can undermine confidence in all of its programs.
Recent media reports on SSI fraud and abuse have focused attention on SSA’s management of this program. Several of our recent reviews of the SSI program have shown that SSA’s oversight and management of SSI have been inadequate and that the agency is not aggressively pursuing opportunities to increase program efficiencies. Although quantifying the extent of fraud, waste, and abuse is difficult, we have repeatedly identified program weaknesses that SSA needs to address either on its own or with the Congress to better control these problems. For example, the media have reported allegations that some parents coach their children to fake mental impairments so that they can qualify for cash benefits. These benefits can total almost $5,500 per year for each disabled child. Our review of SSI for children with disabilities found that part of the process for determining eligibility is overly subjective and susceptible to manipulation. We asked the Congress to consider legislation to improve eligibility determinations for children with disabilities. Recently enacted legislation incorporates changes addressing this problem. In addition, in our review of the fraudulent use of translators to help individuals become eligible for SSI, we reported that SSA could reduce this type of fraud if it had a more comprehensive, programwide strategy for keeping ineligible applicants from ever receiving benefits. Moreover, we have several reviews under way of other program weaknesses. For example, in our ongoing work for the Subcommittees on Human Resources and Oversight of the House Committee on Ways and Means, we have found that even though prisoners are ineligible for SSI if they have been in jail for 1 calendar month or longer, prisoners in many large county and local jail systems have received millions of dollars in cash benefits. This means that taxpayers have been paying twice to support these individuals—both for SSI benefits and the cost of imprisonment. SSA has taken steps to review information on current prisoners to stop inappropriate payments; however, it is not taking action to develop information that would allow it to recover benefits paid to those who may have been incarcerated and received benefits in prior years, although this information is available. SSA acknowledges that it needs to do more to prevent and detect fraud, waste, and abuse. It has several initiatives under way to accomplish this, and we will be monitoring these efforts. In addition, the new SSA Inspector General’s Office, created when SSA gained independence from HHS, is increasing its emphasis on fraud and abuse. In addition to its policy and program challenges, SSA will need to meet customer expectations in the face of growing workloads and reduced resources. SSA expects to redesign inefficient work processes and modernize its information systems to increase productivity, knowing that its customer service will deteriorate to unacceptable levels if it continues to conduct business as in the past. In addition, it faces the urgent need to complete year 2000 software conversion to avoid major service disruption at the turn of the century. SSA will also need to effectively manage its workforce and consider what service delivery structure will work best in the future. As the baby boom generation ages, more and more people will be applying for and receiving SSA program benefits. In addition to increasing retirement and disability caseloads, SSA’s other workloads will grow because of increasing responsibilities. 
For example, SSA must meet a legislative requirement that most workers be mailed annual statements of their earnings and estimated retirement benefits, called Personal Earnings and Benefit Estimate Statements. The creation and mailing of these annual statements to all workers aged 60 and older, begun in 1995, must be expanded to those aged 25 and older—about 123 million individuals—by the year 2000. We are currently reviewing whether the usefulness of these statements can be improved and what impact they will have on SSA’s workloads. Moreover, SSA has been unable to fully meet legislative requirements to periodically review the status of disabled beneficiaries to ensure that those who are no longer disabled are removed from the rolls. SSA now has plans to review the status of more than 8 million beneficiaries in the next 7 years. To accomplish this, SSA would have to conduct about twice as many reviews as it has conducted over the past 20 years combined. SSA knows that it must meet these increasing demands in an era of federal downsizing and spending reductions. SSA has estimated that it would need the equivalent of about 76,400 workers to handle its workloads by the end of the century if it conducted business as usual. Instead, it expects to handle this work with about 62,000 workers—fewer than it has today. To handle increasing workloads and improve public service, SSA has begun to redesign inefficient work processes and develop supporting modernized information systems. SSA is in the process of a multiyear, multibillion dollar systems modernization effort expected to support new ways of doing business and improve productivity. SSA’s Automation Investment Fund of $1.1 billion supports its 5-year plan, from fiscal years 1994 to 1998, of moving from reliance on computer terminals hooked to mainframe computers in its Baltimore headquarters to a state-of-the-art, nationwide network of personal computers. The new network is expected to improve productivity and customer service in field offices and teleservice centers and allow for further technology enhancements. Although this new computer network environment may yield productivity improvements, it poses significant challenges for SSA. The usefulness of new computer systems will depend on the software developed for them. Software development has been identified by many experts as one of the most risky and costly aspects of systems development. To mitigate the risk of failing to deliver high-quality software on time and within budget, SSA must have a disciplined and consistent process for developing software. SSA has already experienced problems, however, in developing its first major software application for use in its new network. These problems include (1) using programmers with insufficient experience, (2) using software development tools that have not performed effectively, and (3) developing initial schedules that were too optimistic. As we noted earlier, these problems have collectively contributed to a delay of over 2 years in implementing this new software. Although SSA has begun to take steps to better position itself to successfully develop and maintain its software, it faces many challenges as it works to develop software in its new computer network environment. SSA faces another systems challenge—one of the highest priority—that affects not only its new network but computer programs that exist for both its mainframe and personal computers. 
Most computer software in use today is limited to two-digit date fields, such as 96 for 1996. Consequently, at the turn of the century, computer software will be unable to distinguish between 1900 and 2000 because both would be designated “00.” By the end of this century, SSA must review all of its computer software—about 30 million lines of computer code—and make the changes needed to ensure that its systems can handle the first change to a new century since the computer age began. This year 2000 software conversion must be completed to avoid major service disruption such as erroneous payments or failure to process benefits at the turn of the century. Errors in SSA programs could also cause difficulties in determining who is eligible for retirement benefits. For example, an individual born in 1920 could be seen as being 20 years old—not 80—and therefore ineligible for benefits. Similarly, someone born in 1980 could be seen as 80 years old—not 20—and therefore entitled to receive Social Security benefits. Beginning work on this problem as early as 1989, SSA has reviewed and corrected about 20 percent of the computer code that must be checked, according to its Deputy Commissioner for Systems. To complete the job, SSA estimates that it will take 500 work-years, the equivalent of about $30 million. Agency officials reported that the amount of resources dedicated to the year 2000 effort could impact staff availability for lower priority projects and SSA’s ability to tackle new systems development work. SSA recognizes that to maximize the effectiveness of its reengineered work processes and investments in technology, it must invest in ongoing employee training and career development. Ultimately, SSA envisions a less specialized workforce with a broader range of technical skills that can be flexibly used in areas of greatest need. In addition, SSA has taken steps to reduce its number of supervisors, as part of the administration’s efforts to eliminate unnecessary bureaucracy by working with fewer supervisory layers. To manage these changes, SSA is training some of its headquarters employees on the concepts and techniques of teamwork. To manage with fewer supervisors in its field operations, SSA also plans to work with its unions to test a number of team concepts. Complicating SSA’s efforts is its aging workforce: 47 percent of SSA’s senior executives and 30 percent of its grade 13 to 15 personnel are eligible to retire over the next 5 years. In the 2 fiscal years ending this September alone, SSA will have lost, and have needed to replace, two of its seven Deputy Commissioners to retirement. SSA has acknowledged the importance of having skilled managers to prepare for the demands of heavier workloads, new technology, and expected changes in its employee and client base. However, it has been 4 years since SSA participated in HHS’ executive-level management development program, and it has not announced its own program since becoming an independent agency. SSA also has not selected candidates for its mid-level management development program since 1993. The agency recognizes the need for management development programs but has not yet scheduled future programs. Although SSA has begun to discuss its use of improved technology and a more flexible workforce to conduct its business in new ways in the future, it has maintained its traditional service delivery structure, including 1,300 field offices. 
Given the significant changes facing SSA, the agency has not adequately considered whether its current service delivery structure is really what is needed for the future; this issue warrants serious attention. According to SSA officials, the agency has not developed specific plans for restructuring its organization and redeploying staff in response to demographic and workforce changes and shifting customer expectations. The demand for SSA’s 800-number telephone service continues to grow, and SSA’s surveys show that callers prefer to use the telephone for more and more of their business. Customer feedback also indicates that customers would like to complete their business in a single contact. Over time, SSA will likely need to restructure how it does business to cost-effectively meet changing customer preferences; this may ultimately involve office closures. Decisions about where, how, and by whom work will be done entail sensitive human resources issues and may have negative impacts on local communities; to resolve these, SSA will need to work closely with its unions, employee groups, and the Congress. To improve its 800-number service, for example, SSA has many initiatives under way, which we are reviewing at your request. SSA currently has 39 teleservice centers. Studies indicate that this is far too many teleservice centers to operate SSA’s 800-number system in the most cost-effective way. A 1990 report from HHS’ Inspector General, for example, indicates that SSA could operate more efficiently and cost-effectively with one-third the number of centers it currently has. SSA is studying and plans to work with employee groups on this issue but has not developed specific plans for reducing the number of teleservice centers. As the 21st century approaches, SSA faces dramatic challenges: funding future retirement benefits, rethinking disability processes and programs, combating fraud and abuse, and restructuring how work is performed and services delivered. How SSA performs in these areas can have a powerful effect on its success in fulfilling its mission and on the public’s confidence in this agency and the federal government. To help SSA meet these challenges, the Congress took steps through the independence legislation to build public confidence in and strengthen the agency. The independence legislation provides that SSA’s Commissioner be appointed by the President with the advice and consent of the Senate for a fixed 6-year term, with removal from office by the President only for a finding of neglect of duty or malfeasance in office. As the Congress was considering the legislation, we testified that a fixed term of several years for the Commissioner would help stabilize and strengthen SSA’s leadership. We continue to support the need for a fixed term. The legislation also calls for a fixed 6-year term for a Deputy Commissioner, also to be appointed by the President with the Senate’s advice and consent. The Commissioner and Deputy Commissioner head the leadership team needed to address the agency’s existing problems and manage its future challenges. SSA’s efforts to maintain an effective cadre of leaders are complicated by the impending retirement of many of its executives and managers and by the absence of a Commissioner and Deputy Commissioner with the stability of fixed terms. This leadership must be in place for SSA to progress on the four fronts we have highlighted.
First, SSA must step up to its role as the nation’s expert on Social Security issues; it is uniquely positioned to inform the public policy debate on the future financing and structure of Social Security. Second, SSA must redesign the disability claims process and place greater emphasis on return to work in its disability programs. To increase the redesign project’s likelihood of success, SSA needs to ensure that those initiatives most likely to save significant costs and time are implemented. Because of the scope and duration of SSA’s redesign, it should report on an annual basis the extent to which it is meeting its processing time reduction goals. It must also sustain its efforts to build and maintain stakeholder support. In addition, SSA must develop a comprehensive detailed plan that integrates its redesign initiatives and systems development activities. The Commissioner also needs to act immediately to place greater emphasis on return to work by changing both the design and the administration of the disability programs. Third, SSA must better protect taxpayer dollars. As the administrator of the nation’s largest cash welfare program, SSA must ensure program integrity in SSI. Reports of fraud and abuse trigger public perceptions that SSA is not making cost-effective and efficient use of taxpayer dollars. Finally, SSA must manage technology investments and its workforce, and—when needed—make difficult decisions about handling increasing workloads with reduced resources. It must also continue to focus on and closely manage its year 2000 conversion to help ensure that SSA will move into the 21st century with systems that function correctly. Moreover, as SSA prepares to meet greater demands and changes in its employee and client base, it may have to make difficult workforce decisions to better respond to customer needs. For example, SSA may need to close offices and move its workers to different locations to better meet growing demand. In an environment of shrinking budgets and increased expectations for government agency performance, ensuring that agency decisions are based on comprehensive planning and sound analyses will be even more essential. SSA’s success in meeting these challenges is critical. The agency is all important, accounting for one-fourth of federal spending and touching the lives of almost all Americans. How it meets its challenges as it moves into the next century can make a significant difference in the well-being of America’s vulnerable populations—the aged, disabled, and poor—and in how the public feels about its government. We requested but did not receive comments from SSA on a draft of this report. However, in testimony delivered before the Subcommittee, SSA acknowledged the importance of addressing the programmatic problems we identified. SSA also agrees that it must be better prepared for future challenges. SSA stressed, for example, the importance of intensifying its focus on policy planning and development to better enable the agency to develop long-range policy options. It also stated that it has made progress in redesigning its disability claims process and in returning disabled beneficiaries to work. We believe, however, that SSA has overstated its achievements to date in improving its disability programs. SSA concluded that it has made major strides in implementing several initiatives for the redesign of its disability claims process and setting the stage for future implementation efforts. 
As we noted in this report, however, while SSA has begun many of its planned initiatives, none is far enough along for SSA to know whether specific proposed process changes will achieve the desired results. SSA also believes it has made major progress in returning disabled beneficiaries to work. Our recent work clearly shows, however, that SSA’s disability programs lack a focus on providing the support and assistance that many people with disabilities need to return to work. We are sending copies of this report to the Commissioner of the Social Security Administration and other interested parties. Copies also will be available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215 or Cynthia M. Fagnoni, Assistant Director, at (202) 512-7202. Other major contributors include Patricia T. Taylor, Director, Information Resource Management for Health, Education, and Human Services, Accounting and Information Management Division, and Gale C. Harris and Daniel Bertoni, Senior Evaluators. In addition to the issues covered in the letter, the Subcommittee asked us to assess SSA’s progress in specific areas related to its transition to independence. To function as an independent agency, SSA has taken several actions, including establishing new offices, that the independence legislation required or that relate to the agency’s ability to operate independently. This appendix summarizes SSA’s progress in these areas. Under the independence legislation, the Office of Personnel Management (OPM) was to authorize a substantially greater number of SES positions than authorized for SSA immediately before the legislation’s date of enactment (Aug. 15, 1994). Authorized SES positions numbered 91 before enactment; they numbered 104 upon SSA’s gaining independence, an increase of 14 percent. In May 1995, SSA identified a need for a total of 113—rather than 104—SES positions. SSA concluded, however, that at this early stage of its independent status, five additional positions would be minimally sufficient for successful performance and still in keeping with the administration’s efforts to streamline management and supervisory positions. OPM denied the request, citing a firm commitment to governmentwide SES reduction goals. As the result of recent legislation, the Office of the Actuary will report directly to the Commissioner of Social Security rather than to a Deputy Commissioner. The Chief Actuary told us that, although the office is extremely busy, it will have enough staff and automation resources to carry out its responsibilities. He anticipated that because of the organizational change, the office would have its own budget in the next fiscal year, providing an opportunity to acquire additional staff and automation resources to handle its heavy workloads. In gaining independence from HHS, SSA had to create its own Offices of Inspector General and General Counsel. SSA’s Office of Inspector General (OIG), charged with conducting investigations and audits of SSA’s programs and operations, is operational and assessing whether additional resources are needed to accomplish its mission. The OIG is building its capacity to better detect and prevent fraud, waste, and abuse; it plans to increase by 25 percent the number of criminal investigators in its office by September 30, 1996. 
The OIG is also studying the need for additional resources and assessing whether OIG and other SSA offices are duplicating some audit functions, but no final decisions have been made. Identifying and eliminating duplication of audit efforts throughout the agency could free up resources for other uses. Throughout the federal government, Inspectors General play a key role in ensuring financial accountability and program integrity. In addition to investigative staff, it is important that SSA’s OIG have in place a sufficient number of technically qualified personnel to conduct financial audits and to evaluate computer controls and assess system development efforts. SSA’s Office of General Counsel (OGC), charged with providing legal advice and litigation services for the programs SSA administers, is also operational and in the process of acquiring additional staff and organizing its operations. Since SSA’s independence, OGC officials have noted improved coordination with agency officials and greater involvement in agency issues, which have improved SSA’s ability to address policy and operational issues. The independence legislation had a provision affecting SSA’s annual budget process; however, OMB continues its budgetary oversight role. The legislation states that “the Commissioner shall prepare an annual budget for the Administration which shall be submitted by the President to the Congress without revision, together with the President’s annual budget for the Administration.” Traditionally, executive agencies—including SSA—receive budget guidance from OMB and prepare budget proposals for OMB. OMB reviews the proposals, and agencies revise them by incorporating OMB’s input and changes. Once approved by OMB, the budgets are sent to the Congress as part of the President’s budget for executive agencies. Presumably, the new budget provision for SSA was intended to illuminate differences between the budget SSA proposes and the President’s budget for the agency. As noted in our report on SSA’s transition to independence, this new budget provision does not restrict OMB from continuing to exercise its traditional budgetary oversight role. SSA and OMB officials have reported no substantive changes in OMB’s oversight role in the budget process under independence. To be in compliance with the new budget provision, the President’s fiscal year 1997 budget included, in addition to OMB’s budget totals for SSA, SSA’s budget proposal to OMB. SSA’s budget proposal was for $6.3 billion; the President’s budget proposal for SSA was for $6.6 billion. According to SSA officials, the difference was due to additional funds allocated to SSA for handling increasing workloads associated with welfare reform and required reviews of disabled beneficiaries’ status. In addition to its role in SSA’s budget process, OMB continues to review SSA’s legislative and policy proposals. Although OMB’s role in SSA’s budget, legislative, and policy matters remains substantively unchanged, SSA officials noted that as an independent agency, SSA now works directly with OMB rather than through HHS. These officials believe that removing HHS as a layer of approval is an important outcome of independence, saving time and heightening attention to SSA issues. The newly established Social Security Advisory Board is in the early stages of operation. 
Created by the independence legislation, the Board—composed of four members appointed by the Congress and three members appointed by the President—will advise the Commissioner of Social Security on policies concerning OASI, DI, and SSI. All members of the Board have been appointed, and a staff director, selected in May 1996, manages its day-to-day operations. Board members are currently receiving briefings on SSA program and operational issues. In addition to Board members and the staff director, plans call for Board personnel to include three SES-level professional staff, clerical personnel, and two SSA employees on detail to the Board. Board staff are currently working with the Commissioner’s office to determine the types and levels of resources the Board needs. SSA has provided approximately $350,000 to the Board for its expenses from its 1996 appropriation; the staff director does not believe the Board will use the entire amount before the end of this fiscal year. SSA estimated that funding for the Advisory Board would total $300,000 in fiscal year 1997. The staff director noted, however, that in SSA’s 1997 legislative appropriation, $1.5 million has been allocated to the Board. She believes that this amount will be adequate for the Board to carry out its responsibilities. Social Security Administration: Effective Leadership Needed to Meet Daunting Challenges (GAO/T-OCG-96-7, July 25, 1996). Social Security: Disability Programs Lag in Promoting Return to Work (GAO/T-HEHS-96-147, June 5, 1996). Social Security: Union Activity at the Social Security Administration (GAO/T-HEHS-96-150, June 4, 1996). Supplemental Security Income: Some Recipients Transfer Valuable Resources to Qualify for Benefits (GAO/HEHS-96-79, Apr. 30, 1996). SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996). PASS Program: SSA Work Incentive for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996). Deficit Reduction: Opportunities to Address Long-Standing Government Performance Issues (GAO/T-OCG-95-6, Sept. 13, 1995). Supplemental Security Income: Disability Program Vulnerable to Applicant Fraud When Middlemen Are Used (GAO/HEHS-95-116, Aug. 31, 1995). The Deficit and the Economy: An Update of Long-Term Simulations (GAO/AIMD/OCE-95-119, Apr. 26, 1995). Social Security: New Functional Assessments for Children Raise Eligibility Questions (GAO/HEHS-95-66, Mar. 10, 1995). Social Security Administration: Leadership Challenges Accompany Transition to an Independent Agency (GAO/HEHS-95-59, Feb. 15, 1995). Social Security Administration: Major Changes in SSA’s Business Processes Are Imperative (GAO/T-AIMD-94-106, Apr. 14, 1994). Social Security: Sustained Effort Needed to Improve Management and Prepare for the Future (GAO/HRD-94-22, Oct. 27, 1993). The Budget Deficit: Outlook, Implications, and Choices (GAO/OCG-90-5, Sept. 12, 1990).
| Pursuant to a congressional request, GAO provided information on the Social Security Administration's (SSA): (1) progress in meeting challenges; and (2) status as an independent agency. GAO found that: (1) while SSA is starting to manage for results and improve financial accountability, it has not taken steps to contribute to the public debate on the future financing of the Social Security system; (2) unless changes are made to the system, the Social Security trust funds will be depleted by 2029; (3) SSA disability caseloads are growing, and SSA is trying to fundamentally redesign the disability claims process; (4) SSA has not adequately promoted return-to-work programs in redesigning its disability processes; (5) SSA should do more to combat fraud and abuse in the Supplemental Security Income Program; (6) SSA is reducing its staff, and nearly half of its senior executives will be eligible to retire in the next 5 years; (7) these workload challenges will complicate the tasks SSA faces; (8) SSA is working to modernize its information systems, but this effort will be costly and complex; and (9) effective leadership is critical to SSA efforts to modernize and meet future challenges.
Prior to the terrorist attacks of September 11, national security, including counterterrorism, was a top-tier priority for the FBI. However, this top tier combined national security responsibilities with other issues, and the FBI’s focus and priorities were not entirely clear. According to a Congressional Research Service report, the events of September 11 made clear the need to develop a definitive list of priorities. In June 2002, the FBI’s director announced 10 priorities. The top 3 priorities were to (1) protect the United States from terrorist attack (counterterrorism), (2) protect the United States against foreign intelligence operations and espionage (counterintelligence), and (3) protect the United States against cyber-based attacks and high-technology crimes (cyber crime). White-collar crime ranked seventh in the priority list, and violent crime ranked eighth. Drug crimes that were not part of transnational or national criminal organizations were not specifically among the FBI’s top 10 priorities. In June 2003 and March 2004, we testified that a key element of the FBI’s reorganization and successful transformation is the realignment of resources to better ensure focus on the highest priorities. Since September 11, the FBI has permanently realigned a substantial number of its field agents from traditional criminal investigative programs to work on counterterrorism and counterintelligence investigations. The FBI’s staff reprogrammings carried out since September 11 have permanently shifted 674 field agent positions from the drug, white-collar, and violent crime program areas to counterterrorism and counterintelligence. About 550 of these positions (more than 80 percent of the permanently shifted positions) came from the FBI’s drug program, with substantially smaller reductions from the white-collar and violent crime programs. In addition, the FBI established the cyber program. As figure 1 shows, about 25 percent of the FBI’s field agent positions were allocated to counterterrorism, counterintelligence, and cyber crime programs prior to the FBI’s change in priorities. As a result of the staff reprogrammings and funding for additional special agent positions received through various appropriations between 2002 and 2004, the FBI staffing levels allocated to the counterterrorism, counterintelligence, and cyber program areas have increased to about 36 percent and now represent the single largest concentration of FBI resources. Figure 1 also notes that the number of nonsupervisory FBI field agent positions has increased from 10,292 in fiscal year 2002 to 11,021 in fiscal year 2004, an increase of about 7 percent. Additionally, the FBI has had a continuing need to temporarily redirect special agent resources from other criminal investigative programs to address counterterrorism and other higher-priority needs. The FBI continues to redirect agents from drug, white-collar, and violent crime programs to address the counterterrorism-related workload demands. These moves are directly in line with the FBI’s priorities and in keeping with its policy that no counterterrorism leads will go unaddressed. Figure 2 shows that the counterterrorism program continues to rely on resources that are temporarily redirected from other crime programs. Appendix II contains figures that show the reductions in FBI nonsupervisory field agent positions and work years charged to the drug program and, to a lesser extent, the white-collar and violent crime programs after September 11.
As one might expect, the reallocation of resources to align with post-September 11 priorities resulted in a significant increase in newly opened FBI counterterrorism matters, while the number of newly opened FBI drug, white-collar, and violent crime matters declined between fiscal year 2001 and the third quarter of fiscal year 2004. As shown in figure 3, the FBI’s newly opened counterterrorism matters increased by about 183 percent, from 1,006 matters in the fourth quarter of fiscal year 2001 to 2,850 matters in the fourth quarter of fiscal year 2003. In contrast, the number of newly opened FBI drug matters declined from 1,447 in fiscal year 2001 to 587 in fiscal year 2003—a decrease of about 60 percent. FBI newly opened white-collar and violent crime matters also remained below pre-September 11 levels, though the decreases have not been as dramatic as those in the drug crime program. The decreases from fiscal year 2001 to fiscal year 2003 were about 32 percent in the number of newly opened white-collar crime matters and about 40 percent in newly opened violent crime matters. See appendix III for figures showing the pre- and post-September 11 changes in the FBI’s newly opened drug, white-collar, and violent crime matters. Also, as expected with the significant shift in resources to address national security priorities, the FBI’s referrals of counterterrorism matters to U.S. Attorneys’ Offices for prosecution have increased since September 11, while referrals of drug, white-collar, and violent crime matters have decreased. In fiscal year 2001, which ended just after September 11, 2001, the FBI referred 236 counterterrorism matters to U.S. Attorneys for prosecution. In fiscal year 2003, the FBI referred 1,821 of these matters to U.S. Attorneys, an increase of about 671 percent. At the same time, FBI referrals of drug, white-collar, and violent crime matters decreased about 39 percent, 23 percent, and 10 percent, respectively. We could not conclusively identify an effect on federal drug enforcement resulting from the FBI’s shift in resources after September 11, because results of our analyses were mixed and the data we used had limitations. While the number of FBI nonsupervisory field agents assigned to the drug program decreased by more than 40 percent after September 11, the decrease in the number of combined FBI and DEA field agents assigned to drug work was about 10 percent because the number of DEA field agent positions increased slightly. Further, DEA, the lead agency for federal drug enforcement, is continuing to increase its resources as positions appropriated by Congress in prior fiscal years are filled. The combined number of newly opened FBI and DEA drug matters has declined by about 10 percent since September 11. However, the combined number of referrals of drug matters to U.S. Attorneys from all federal sources decreased about 2 percent. Finally, law enforcement officials from the FBI, DEA, U.S. Attorneys’ Offices, and local police departments that we interviewed had mixed views on whether the FBI’s shift of resources had an impact on drug enforcement in their communities. The data we analyzed and interviews with law enforcement officials should be considered short-term indicators with some limitations in their ability to depict the complete impact of FBI priority changes.
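The percentage changes cited above follow directly from the counts quoted in the text. As a brief, illustrative check (the figures are those reported above, not a recomputation from FBI or EOUSA source data, and the helper function below is ours):

```python
# Quick arithmetic check of percentage changes quoted above; illustrative only.

def pct_change(before: float, after: float) -> float:
    """Percentage change from 'before' to 'after'."""
    return (after - before) / before * 100

if __name__ == "__main__":
    # Newly opened FBI counterterrorism matters, fourth quarter FY2001 vs. FY2003.
    print(round(pct_change(1006, 2850)))  # about 183 (percent increase)
    # Newly opened FBI drug matters, FY2001 vs. FY2003.
    print(round(pct_change(1447, 587)))   # about -59 (a decrease of roughly 60 percent)
    # FBI counterterrorism referrals to U.S. Attorneys, FY2001 vs. FY2003.
    print(round(pct_change(236, 1821)))   # about 671 (percent increase)
```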
As figure 4 shows, the combined number of FBI and DEA nonsupervisory field agent positions has decreased about 10 percent since the terrorist attacks, from about 4,500 nonsupervisory field agents at the end of fiscal year 2001 to about 4,000 field agent positions in the second quarter of fiscal year 2004. The decrease has not been more pronounced because DEA, as the nation’s single-mission drug enforcement agency, has devoted more resources to domestic drug enforcement than has the FBI in both pre- and post-September 11 periods. As the number of FBI nonsupervisory field special agents assigned to drug program investigations has decreased from about 1,400 in fiscal year 2001 to about 800 in fiscal year 2004, the number of DEA nonsupervisory field agents has increased slightly. These DEA positions increased from a little less than 3,100 positions in fiscal year 2001 to a little more than 3,200 positions in the second quarter of fiscal year 2004. DEA devoted about twice as many agent resources as the FBI did to domestic drug enforcement before the terrorist attacks, and the DEA share of the combined FBI and DEA domestic drug enforcement agent resources has continued to increase since then. A Department of Justice official noted that Justice has pursued the goal of increasing agent strength, and the DEA domestic drug operations chief said that he expects the number of DEA domestic drug agents to increase significantly over the next 2 fiscal years, when positions already appropriated by Congress are filled. According to the chief, DEA expects to fill 216 new special agent positions appropriated in fiscal year 2003 by the end of fiscal year 2004. The agency plans to fill 365 additional positions appropriated in fiscal year 2004 during fiscal year 2005. Thus, in fiscal year 2005, the combined number of FBI and DEA drug enforcement agent resources should exceed the pre-September 11 workforce strength, and in fiscal year 2006, the total should continue to increase. In fiscal year 2005, DEA is requesting 111 additional agent positions for domestic enforcement. The chief said that he has worked with FBI and Department of Justice officials to determine where to deploy new special agent positions allocated since September 11 and that DEA has put additional resources in high-threat areas where the FBI had shifted resources out of drug enforcement. The chief said that DEA is pacing the hiring of new agents in an effort to manage its growth so that as new special agents come on board, DEA has the necessary infrastructure, including office space, cars, equipment, and training resources, to support them. In contrast, the chief of the FBI’s drug section said that FBI officials do not foresee a significant increase in the number of agents assigned to drug investigations. However, he was not aware of any plans to withdraw additional agent resources from the drug program. He also said he was hopeful that in future years, as the FBI gained experience and resources for its national security-related priorities, fewer temporary diversions of special agents from drug work to higher priorities would be necessary. In addition, a Justice Department official noted that in fiscal year 2004, FBI received additional agent positions funded under the Organized Crime Drug Enforcement Task Force (OCDETF) program, and that additional OCDETF field positions were requested for the FBI in the 2005 budget. 
Since FBI officials do not foresee a significant increase in the number of special agents deployed to its drug program, if the trends continue, DEA would have an even larger portion of the combined FBI and DEA domestic drug program agent resources in the future than it currently has. Although the number of the FBI’s newly opened drug matters decreased about 60 percent after September 11, the combined decrease in the number of FBI and DEA newly opened matters is much smaller—about 10 percent—because DEA has a much larger drug caseload than the FBI. As shown in figure 5, the FBI and DEA together opened 22,736 domestic drug matters in fiscal year 2001, compared with a combined total of 20,387 domestic drug matters in fiscal year 2003. Assuming that the FBI and DEA open new matters at about the same pace in the last two quarters of fiscal year 2004 as they did during the first two quarters, fiscal year 2004 levels of newly opened drug crime matters will be similar to those in fiscal year 2003. The DEA Chief of Domestic Drug Operations said that he thought the decrease in the number of newly opened matters was due in part to an increased Department of Justice emphasis on cases targeting major drug organizations in its Consolidated Priority Organization Targeting (CPOT) initiative rather than reduced federal resources for drug enforcement. He said that the policy has resulted in DEA opening fewer cases but that those cases have the potential to dismantle or disrupt the operations of major drug cartels. In commenting on a draft of this report, a Department of Justice official more broadly stated that the decrease in newly opened drug matters was due to the Justice strategy, which focuses resources on complex, nationwide investigations of entire drug trafficking networks. The networks involve major international sources of supply, including those on the Consolidated Priority Organization Target list. While the FBI’s referrals of drug matters to U.S. Attorneys for prosecution have decreased about 40 percent, from 2,994 matters to 1,840 matters between fiscal year 2001 and fiscal year 2003, DEA referrals increased over the same period by about 7 percent, from 9,907 matters in fiscal year 2001 to 10,596 matters in fiscal year 2003. Almost half of the total number of drug matters that were referred to U.S. Attorneys’ Offices came from federal agencies other than the FBI and DEA in fiscal year 2003. The number of referrals from all federal agencies and departments other than the FBI and DEA was almost unchanged over the period, with 9,793 referrals in fiscal year 2001 and 9,816 referrals in fiscal year 2003. As figure 6 shows, U.S. Attorneys’ Offices received 22,694 drug offense referrals in fiscal year 2001 and 22,252 drug offense referrals in fiscal year 2003, a decrease of about 2 percent from all federal agencies. The FBI and DEA referred 12,901 drug matters to U.S. Attorneys in fiscal year 2001 and 12,436 drug matters in fiscal year 2003, a decrease of about 4 percent. Law enforcement practitioners we interviewed had mixed views about the impact of the FBI’s shift in resources on drug enforcement efforts. While many interviewees representing each of the locations and criminal justice organizations we visited generally described the FBI as a valuable law enforcement partner, some of them said that they did not think the FBI’s shift in resources had a significant impact on drug enforcement efforts in their communities.
Other interviewees said that drug investigations have suffered as a result of the FBI’s shift in resources to new priority areas. For example, officials from 9 of the 14 law enforcement agencies we visited said that the FBI did not bring any specialized drug program expertise that in most cases could not be supplied by other agencies. However, 7 of the 14 interviewees said that there was a significant impact on overall drug enforcement efforts in the locations we visited as a result of the FBI’s shift in resources to new priorities. It is important to keep in mind that the interviewees, although working in locations that experienced some of the sharpest reductions in the FBI drug program resources, are not representative of all locations or even of all of those locations that experienced similar reductions in resources. The following are examples of some of the comments we received from law enforcement practitioners who did not think that the shift in the FBI’s resources had an impact on drug crime enforcement efforts in their communities. On the other hand, some law enforcement officials said that drug investigations have suffered as a result of the FBI shift in resources to new priority areas. The following are several of the other comments from law enforcement practitioners we interviewed who thought that the FBI’s shift in resources had affected efforts to combat crime.

The FBI’s shift out of drug enforcement is having an impact in this city. We are receiving fewer drug referrals from the FBI since September 11, 2001, and, consequently, we are receiving fewer drug referrals overall. The FBI is the best agency at understanding the relationship between drugs and violence. —U.S. Attorney’s Office Criminal Division Chief

Referrals have dropped off since the shift in priorities because there are fewer agents working in the FBI criminal divisions. The FBI is not working the long-term drug cases like they did in the past because the bureau cannot afford to keep cases open for a year or two. —First Assistant U.S. Attorney

The FBI’s shift out of drug work has layered more responsibilities on state and local police. The organizations are responding by juggling resources, requiring officers to work more hours, and attempting to work smarter by improving information systems, using technology, and communicating more effectively with one another. State and local police agencies are more efficient now than they have ever been. As a result, FBI involvement is perhaps not as critical as it may have been in the past. —International Association of Chiefs of Police representative.

Our analyses provide perspectives in the short term (less than 3 years) and are not necessarily indicative of long-term trends. With respect to the short-term perspective, a U.S. Attorneys’ Office Criminal Division chief noted that cases can take many years to develop, and the full impact of the FBI’s shift in priorities may not be apparent for several years. He said that cases are being referred to his office for prosecution now that began long before the September 11 attacks. We also determined that it was too early to assess possible changes in drug price, purity, use, and availability, as well as any drug-related crime trends that have occurred since September 11. Key statistical studies that track the price and purity of illegal drugs and reports on hospital emergency department drug episodes and drug abuse violations were not current enough to provide more than a year of trend data after September 11.
Data over several more years are needed to determine whether changes in drug use and availability have occurred, and even when data are available, it will be very difficult to determine whether changes are specifically attributable to the FBI’s shift in priorities or to other factors (such as improved drug prevention programs or new methods of drug importation). There are also other limitations to the data we analyzed. It is important to note that while we looked at numbers of agent resources, matters opened, and matters referred for prosecution, we could not fully assess the less tangible factors of the quality of agent resources and investigations and the complexity of investigations. Neither could we determine what drug investigations the FBI might have pursued had it had additional drug program agent resources. We did ask interviewees their opinions on whether the quality of drug agent resources and the quality and complexity of drug investigations had changed since September 11. Some FBI officials said that experienced agents were lost to the drug program when they were assigned to work in higher-priority areas and that these agents were unlikely to return to drug investigations. A top DEA official said that drug investigations are more complex now than they were prior to September 11. He said that the reason for the increased complexity is unrelated to counterterrorism efforts; instead it is the result of a Department of Justice strategy to target major drug organizations. We did not conclusively identify an effect on federal white-collar and violent crime enforcement resulting from the FBI’s shift in priorities after September 11. Our analysis was limited to the number of these matters referred to U.S. Attorneys’ Offices for prosecution from the FBI and all other federal agencies and to the impacts observed by law enforcement officials we interviewed. Overall, all federal agencies referred about 6 percent fewer white-collar crime matters to U.S. Attorneys—down from 12,792 matters in fiscal year 2001 to 12,057 matters in fiscal year 2003. However, violent crime referrals increased about 29 percent during this period—from 14,546 matters in fiscal year 2001 to 18,723 matters in fiscal year 2003. Headquarters and field law enforcement officials we interviewed had mixed views on whether the FBI’s shift of resources had an effect on white-collar and violent crime enforcement in their communities. Caveats to the results we reported on impacts of the FBI’s shift in priorities on drug enforcement apply to white-collar and violent crime enforcement, as well. The data we analyzed and interviews with law enforcement officials should be considered short-term indicators with limitations in their ability to determine the impact of the FBI priority shifts. All federal agencies referred about 6 percent fewer white-collar crime matters to U.S. Attorneys, down from 12,792 matters in fiscal year 2001 to 12,057 matters in fiscal year 2003. However, violent crime referrals increased about 29 percent during this period, from 14,546 matters in fiscal year 2001 to 18,723 matters in fiscal year 2003. Figures 7 and 8 show changes in the number of referrals of white-collar and violent crime matters to U.S. Attorneys from all federal enforcement agencies since September 11, 2001. Of all the federal agencies and departments, the FBI refers the greatest number of white-collar crime matters to U.S. Attorneys.
FBI referrals decreased about 23 percent, from 6,941 matters in fiscal year 2001 to 5,331 matters in fiscal year 2003. At the same time, referrals by all other agencies increased by about 15 percent, from 5,851 matters in fiscal year 2001 to 6,726 in fiscal year 2003. Other lead agencies and departments for referring white-collar crime cases included the Department of Health and Human Services and the Social Security Administration, with health care and federal program fraud and other white-collar crime referrals; the U.S. Postal Service, with referrals of tax and bank fraud, and other white-collar crime matters; the U.S. Secret Service; and the Internal Revenue Service, with securities and other fraud referrals. FBI violent crime referrals decreased about 10 percent, from 5,003 matters in fiscal year 2001 to 4,491 matters in fiscal year 2003. However, over the same period ATF’s violent crime referrals increased from 6,919 to 10,789. Several other agencies and departments, including all of the military services and the Departments of the Interior and Housing and Urban Development, also referred violent crime matters to U.S. Attorneys for prosecution. The Chief of the FBI’s Violent Crime Section noted that violent crime referrals by all federal agencies have increased because efforts are under way nationwide to prosecute gang violence. The Department of Justice is targeting cities nationwide where high murder and violence rates persist despite an overall reduction in violent crime rates to the lowest level in 30 years. Law enforcement officials we interviewed had mixed views on whether the FBI’s shift of resources had a negative impact on white-collar and violent crime enforcement in their communities. For example, police and federal prosecutors in two locations noted that the FBI had continued to provide necessary resources for critical white-collar and violent crime concerns, while prosecutors in another location expressed concern that white-collar crime enforcement was suffering because of the reduced FBI involvement. The following are comments we received from the local police supervisors and officials of several U.S. Attorneys’ Offices who did not think that the FBI’s shift in resources had an impact on white-collar and violent crime enforcement efforts in their communities. The following are two of the other comments from U.S. Attorneys’ Office and FBI managers and supervisors who thought that the FBI’s shift in resources had affected efforts to combat white-collar and violent crime. Caveats to the results we reported on impacts of the FBI’s shift in priorities on drug enforcement apply to indicators of possible impacts on white-collar and violent crime enforcement as well. The data we analyzed and interviews with law enforcement officials should be considered as short-term indicators with limitations in their ability to determine the full impact of the FBI priority shifts. The data provide perspectives in the short term of less than 3 years, and they do not consider important factors such as whether changes have occurred in the quality or complexity of white-collar and violent crime matters being referred from federal law enforcement agencies to U.S. Attorneys for prosecution. Also, our analysis does not consider the number and quality of cases that could have been referred for prosecution by the FBI had additional white-collar and violent crime program resources been available.
We are providing copies of this report to the Department of Justice and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Major contributors to this report are listed in appendix IV. If you or your staffs have any questions about this report, please contact me on (202) 512-8777 or by email at ekstrandl@gao.gov, or Charles Michael Johnson, Assistant Director, on (202) 512-7331 or johnsoncm@gao.gov. Key contributors to this report are listed in appendix IV. To examine the effect of the Federal Bureau of Investigation’s (FBI) post-September 11 priority shifts on federal efforts to combat domestic drug crime, we analyzed (1) the impact of resource shifts on the combined FBI and Drug Enforcement Administration (DEA) nonsupervisory field special agent resources devoted to drug enforcement; (2) changes in the number of newly opened FBI and DEA drug crime matters; (3) changes in the number of drug crime matters referred from all federal agencies to U.S. Attorneys’ Offices for prosecution; and (4) impacts, if any, observed by headquarters and field law enforcement officials we interviewed. Specifically, to determine changes in the combined level of FBI and DEA nonsupervisory field special agent resources devoted to domestic drug enforcement, we analyzed FBI time utilization data and DEA data on funded staff levels for fiscal year 2001, which ended September 30, 2001, through the second quarter of fiscal year 2004, ending on March 31, 2004. We also analyzed FBI and DEA budget and resource allocation information. We focused our analysis primarily on nonsupervisory field special agent positions because these positions are directly involved in investigations, while supervisors, managers, and headquarters agents often have noninvestigative responsibilities in addition to their investigative duties. To determine the number of newly opened FBI and DEA drug matters, we analyzed case management system data for the period from fiscal year 2001 through the second quarter of fiscal year 2004. We also reviewed the Department of Justice’s Domestic Drug Enforcement Strategy and discussed resource allocation issues and concerns with FBI and DEA officials. We did not attempt to analyze data on drug enforcement resource allocations and drug matters opened for federal agencies and departments other than the FBI and DEA that are involved in investigating drug-related crimes. To determine the number of drug referrals from the FBI, DEA, and other federal agencies to U.S. Attorneys, we analyzed Executive Office for U.S. Attorneys (EOUSA) case management data on the number of referrals received by referring agency and type of referral for fiscal years 2001 through 2003. We did not have access to criminal case files and thus did not provide any assessment of changes in the quality or complexity of traditional criminal investigations in pre- and post-September 11 periods. To provide perspectives of law enforcement officials on impacts, if any, they observed on traditional FBI criminal enforcement areas as a result of the FBI’s shift of resources to new priorities, we interviewed selected federal headquarters and field officials, police department supervisors, and representatives of the International Association of Chiefs of Police. At FBI headquarters we interviewed the chief of the drug section, and at DEA headquarters we interviewed the chief of the domestic drug program.
The field locations we visited, as shown in table 1, had among the greatest shifts of FBI resources from traditional criminal programs into new priorities. Using a semistructured interview and a data collection instrument, at each location we asked FBI and DEA field office managers and U.S. Attorney’s Office supervisory prosecutors and local police department supervisors responsible for drug enforcement about any impacts they had observed on their caseloads, workloads, and crime in their communities as a result of the FBI shift in priorities. The results of these interviews cannot be generalized to any broader community of law enforcement agencies or officials or to other geographic locations. To examine the effect of the FBI’s post-September 11 priority shifts on federal efforts to combat white-collar and violent crime, we considered (1) the number of matters referred to U.S. Attorneys’ Offices for prosecution from the FBI and all other federal agencies and (2) the effects, if any, observed by law enforcement officials we interviewed. To determine how the numbers of white-collar and violent crime matters referred to U.S. Attorneys’ Offices for prosecution from the FBI and all other federal agencies have changed, we analyzed case-management system data from EOUSA. To provide perspectives from selected law enforcement officials on any changes in federal white-collar and violent crime enforcement activities as a result of the FBI’s shift in priorities, we included in our structured interview questions about effects that federal and local law enforcement officials we visited might have observed on white-collar and violent crime caseloads, workloads, and law enforcement activities in their communities. We did not attempt to collect information on the number of newly opened white-collar and violent crime matters from all federal sources because there is no agency comparable to DEA, with single-mission dedication to white-collar crime or violent crime investigations. Many different federal agencies and offices, including criminal investigative agencies and inspectors general, investigate these matters, but our time frames did not allow us to obtain resource allocation data from them. The data should be viewed as short-term indicators with limitations. The full impact of the FBI’s shift in priorities may not be apparent for several years. While we looked at numbers of agent resources, matters opened, and matters referred for prosecution, we could not fully assess the less tangible factors of the quality of agent resources and investigations and the complexity of investigations. Neither could we determine what investigations the FBI might have pursued had it had additional agent resources. Because the reliability of FBI, DEA, and EOUSA information management systems data is significant to the findings of this review, we interviewed FBI, DEA, and U.S. Attorney personnel to determine what steps they take to assess the accuracy of data elements and what limitations, if any, they have identified with the data elements used for our review. As a result of our assessment, we determined that the required data elements are sufficiently reliable for the purposes of this review. We performed our work from December 2003 to July 2004 in accordance with generally accepted government auditing standards.

FBI Nonsupervisory Field Special Agent Positions and Work Years Charged to the Drug, White-Collar, and Violent Crime Programs (fiscal year 2001 to second quarter, fiscal year 2004).
David Alexander, Leo Barbour, William Bates, Geoffrey Hamilton, Benjamin Jordan, Deborah Knorr, Jessica Lundberg, Andrew O’Connell, and Kathryn Young made significant contributions to this report. | As a result of the terrorist attacks of September 11, 2001, the Federal Bureau of Investigation (FBI) has committed to a transformation to increase its focus on national security. The FBI has shifted agent resources to its top priorities of counterterrorism, counterintelligence, and cyber crime. Some of these agent resources were shifted away from drug, white-collar, and violent crime enforcement programs. The FBI's drug program has sustained, by far, the largest reduction in FBI agent workforce--about 550 positions, or more than 80 percent of the nonsupervisory field agents who were permanently reprogrammed. In addition, the FBI has had a continuing need to temporarily redirect agents from drug, white-collar, and violent crime enforcement to address counterterrorism-related workload demands. While GAO and other organizations have focused considerable attention on the progress of the FBI's transformation, this report addresses questions about the extent to which the shift in resources has affected federal efforts to combat drug, white-collar, and violent crime and whether other agencies, including the Drug Enforcement Administration (DEA) in the drug enforcement area, are filling gaps created by FBI resource shifts. The data GAO examined are inconclusive about the effect of the shifts in the FBI's priorities after September 11 on federal efforts to combat drug, white-collar, and violent crime. Indicators are mixed on the effect of the FBI shift on federal drug, white-collar, and violent crime enforcement. Further, GAO's analyses should be cautiously viewed as short-term indicators that are not necessarily indicative of long-term trends. Data GAO examined on federal drug enforcement efforts did not show a conclusive effect of the FBI's shift in agent resources to priority areas. GAO found that combined FBI and DEA nonsupervisory field agent resources decreased by about 10 percent since September 11 but that DEA is expecting significant increases in positions over the next 2 fiscal years. The combined number of newly opened FBI and DEA drug matters has declined by about 10 percent since 2001, from 22,736 matters in fiscal year 2001, which ended just after September 11, to 20,387 matters in fiscal year 2003. This decline may be attributed, at least in part, to an increased emphasis on cases targeting major drug organizations rather than to fewer investigative resources. In addition, referrals of drug matters to U.S. Attorneys from all federal sources decreased about 2 percent. Similarly, data do not show a conclusive impact on federal efforts to combat white-collar and violent crime resulting from the FBI's shift in priorities. For example, while the number of white-collar crime referrals from federal agencies to U.S. Attorneys declined by about 6 percent, from 12,792 in fiscal year 2001 to 12,057 in fiscal year 2003, violent crime referrals from all federal sources have increased by about 29 percent, from 14,546 in fiscal year 2001 to 18,723 in fiscal year 2003. Views of law enforcement practitioners GAO interviewed were mixed on the effect of the FBI's shift in resources on drug, white-collar, and violent crime enforcement efforts.
Although these views are not representative of all practitioners, some did not think the FBI's shift had a significant impact on these crime enforcement efforts in their communities, while others said that drug, white-collar, and violent crime investigations had suffered.
On October 25, 1995, Americans were reminded of the dangers that drivers/passengers often face when they travel over railroad crossings in the United States. On that day, in Fox River Grove, Illinois, seven high school students were killed when a commuter train hit a school bus. The potential for tragedies like the one at Fox River Grove is significant—the United States has over 168,000 public highway-railroad intersections. The types of warning for motorists at these crossings range from no visible devices to active devices, such as lights and gates. About 60 percent of all public crossings in the United States have only passive warning devices—typically, highway signs known as crossbucks. In 1994, this exposure resulted in motor vehicle accidents at crossings that killed 501 people and injured 1,764 others. Many of these deaths should have been avoided, since nearly one-half occurred at crossings where flashing lights and descended gates had warned motorists of the approaching danger. In August 1995, we issued a comprehensive report on safety at railroad crossings. We reported that the federal investment in improving railroad crossing safety had noticeably reduced the number of deaths and injuries. Since the Rail-Highway Crossing Program—also known as the section 130 program—was established in 1974, the federal government has distributed about $5.5 billion (in 1996 constant dollars) to the states for railroad crossing improvements. This two-decade investment, combined with a reduction in the total number of crossings since 1974, has significantly lowered the accident and fatality rates—by 61 percent and 34 percent, respectively. However, most of this progress occurred during the first decade, and since 1985, the number of deaths has fluctuated between 466 and 682 each year (see app. 1). Since 1977, the federal funding for railroad crossing improvements has also declined in real terms. Consequently, the question for future railroad crossing safety initiatives will be how best to target available resources to the most cost-effective approaches. Our report discussed several strategies for targeting limited resources to address railroad crossing safety problems. The first strategy is to review DOT’s current method of apportioning section 130 funds to the states. Our analysis of the 1995 section 130 apportionments found anomalies among the states in terms of how much funding they received in proportion to three key risk factors: accidents, fatalities, and total crossings. For example, California received 6.9 percent of the section 130 funds in 1995, but it had only 4.8 percent of the nation’s railroad crossings, 5.3 percent of the fatalities, and 3.9 percent of the accidents. Senators Lugar and Coats have proposed legislation to change the formula for allocating section 130 funds by linking the amounts of funding directly to the numbers of railroad crossings, fatalities, and accidents. Currently, section 130 funds are apportioned to each state as a 10-percent set-aside of its Surface Transportation Program funds. The second means of targeting railroad crossing safety resources is to focus the available dollars on the strategies that have proved most effective in preventing accidents. These strategies include closing more crossings, using innovative technologies at dangerous crossings, and emphasizing education and enforcement. Clearly, the most effective way to improve railroad crossing safety is to close more crossings. 
The Secretary of Transportation has restated FRA’s goal of closing 25 percent of the nation’s railroad crossings, since many are unnecessary or redundant. For example, in 1994, the American Association of State Highway and Transportation Officials found that the nation had two railroad crossings for every mile of track and that in heavily congested areas, the average approached 10 crossings for every mile. However, local opposition and localities’ unwillingness to provide a required 10-percent match in funds have made it difficult for the states to close as many crossings as they would like. When closing is not possible, the next alternative is to install traditional lights and gates. However, lights and gates provide only a warning, not positive protection at a crossing. Hence, new technologies such as four-quadrant gates with vehicle detectors, although costing about $1 million per crossing, may be justified when accidents persist at signalled crossings. The Congress has funded research to develop innovative technologies for improving railroad crossing safety. Although installing lights and gates can help to prevent accidents and fatalities, it will not preclude motorists from disregarding warning signals and driving around descended gates. Many states, particularly those with many railroad crossings, face a dilemma. While 35 percent of the railroad crossings in the United States have active warning devices, 50 percent of all crossing fatalities occurred at these locations. To modify drivers’ behavior, DOT and the states are developing education and enforcement strategies. For example, Ohio—a state with an active education and enforcement program—cut the number of accidents at crossings with active warning devices from 377 in 1978 to 93 in 1993—a 75-percent reduction. Ohio has used mock train crashes as educational tools and has aggressively issued tickets to motorists going around descended crossing gates. In addition, DOT has inaugurated a safety campaign entitled “Always Expect a Train,” while Operation Lifesaver, Inc., provides support and referral services for state safety programs. DOT’s educational initiatives are part of a larger plan to improve railroad crossing safety. In June 1994, DOT issued a Grade Crossing Action Plan, and in October 1995, it established a Grade Crossing Safety Task Force. The action plan set a national goal of reducing the number of accidents and fatalities by 50 percent from 1994 to 2004. As we noted in our report, whether DOT attains the plan’s goal will depend, in large part, on how well it coordinates the efforts of the states and railroads, whose contributions to implementing many of the proposals are critical. DOT does not have the authority to direct the states to implement many of the plan’s proposals, regardless of how important they are to achieving DOT’s goal. Therefore, DOT must rely on either persuading the states that implementation is in their best interests or providing them with incentives for implementation. In addition, the success of five of the plan’s proposals depends on whether DOT can obtain the required congressional approval to use existing funds in ways that are not allowable under current law. 
The five proposals would (1) change the method used to apportion section 130 funds to the states, (2) use Surface Transportation Program funds to pay local governments a bonus to close crossings, (3) eliminate the requirement for localities to match a portion of the costs associated with closing crossings, (4) establish a $15 million program to encourage the states to improve rail corridors, and (5) use Surface Transportation Program funds to increase federal funding for Operation Lifesaver. Finally, the action plan’s proposals will cost more money. Secretary Pena has announced a long-term goal of eliminating 2,250 crossings where the National Highway System intersects Principal Rail Lines. Both systems are vital to the nation’s interstate commerce, and closing these crossings is generally not feasible. The alternative is to construct a grade separation—an overpass or underpass. This initiative alone could cost between $4.5 billion and $11.3 billion—a major infrastructure investment. DOT established the Grade Crossing Safety Task Force in the aftermath of the Fox River Grove accident, intending to conduct a comprehensive national review of highway-railroad crossing design and construction measures. On March 1, 1996, the task force reported to the Secretary that “improved highway-rail grade crossing safety depends upon better cooperation, communication, and education among responsible parties if accidents and fatalities are to be reduced significantly.” The report provided 24 proposals for five problem areas it reviewed: (1) highway traffic signals that are supposed to be triggered by oncoming trains; (2) roadways where insufficient space is allotted for vehicles to stop between a road intersection and nearby railroad tracks; (3) junctions where railroad tracks are elevated above the surface of the roadway, exposing vehicles to the risk of getting hung on the tracks; (4) light rail transit crossings without standards for their design, warning devices, or traffic control measures; and (5) intersections where slowly moving vehicles, such as farm equipment, frequently cross the tracks. Under the Federal Railroad Safety Act of 1970, as amended, FRA is responsible for regulating all aspects of railroad safety. FRA’s safety mission includes 1) establishing federal rail safety rules and standards; 2) inspecting railroads’ track, signals, equipment, and operating practices; and 3) enforcing federal safety rules and standards. The railroads are primarily responsible for inspecting their own equipment and facilities to ensure compliance with federal safety regulations, while FRA monitors the railroads’ actions. We have issued many reports identifying weaknesses in FRA’s railroad safety inspection and enforcement programs. For example, in July 1990, we reported on FRA’s progress in meeting the requirements, set forth in the Federal Railroad Safety Authorization Act of 1980, that FRA submit to the Congress a system safety plan to carry out railroad safety laws. The act directed FRA to (1) develop an inspection methodology that considered carriers’ safety records, the location of population centers, and the volume and type of traffic using the track and (2) give priority to inspections of track and equipment used to transport passengers and hazardous materials. 
The House report accompanying the 1980 act stated that FRA should target safety inspections to high-risk track—track with a high incidence of accidents and injuries, located in populous urban areas, carrying passengers, or transporting hazardous materials. In our 1990 report, we found that the inspection plan that FRA had developed did not include data on passenger and hazardous materials routes—two important risk factors. In an earlier report, issued in April 1989, we noted problems with another risk factor—accidents and injuries. We found that the railroads had substantially underreported and inaccurately reported the number of accidents and injuries and their associated costs. As a result, FRA could not integrate inspection, accident, and injury data in its inspection plan to target high-risk locations. In our 1994 report on FRA’s track safety inspection program, we found that FRA had improved its track inspection program and that its strategy for correcting the weaknesses we had previously identified was sound. However, we pointed out that FRA still faced challenges stemming from these weaknesses. First, it had not obtained and incorporated into its inspection plan site-specific data on two critical risk factors—the volume of passenger and hazardous materials traffic. Second, it had not improved the reliability of another critical risk factor—the rail carriers’ reporting of accidents and injuries nationwide. FRA published a notice of proposed rulemaking in August 1994 on methods to improve rail carriers’ reporting. In February 1996, FRA reported that it intended to issue a final rule in June 1996. To overcome these problems, we recommended that FRA focus on improving and gathering reliable data to establish rail safety goals. We specifically recommended that FRA establish a pilot program in one FRA region to gather data on the volume of passenger and hazardous materials traffic and correct the deficiencies in its accident/injury database. We recommended a pilot program in one FRA region, rather than a nationwide program, because FRA had expressed concern that a nationwide program would be too expensive. The House and Senate Appropriations Conference Committee echoed our concerns in its fiscal year 1995 report and directed the agency to report to the Committees by March 1995 on how it intended to implement our recommendations. In its August 1995 response to the Committees, FRA indicated that the pilot program was not necessary, but it was taking actions to correct the deficiencies in the railroad accident/injury database. For example, FRA had allowed the railroads to update the database using magnetic media and audited the reporting procedures of all the large railroads. We also identified in our 1994 report an emerging traffic safety problem—the industry’s excessive labeling of track as exempt from federal safety standards. Since 1982, federal track safety standards have not applied to about 12,000 miles of track designated by the industry as “excepted;” travel on such track is limited to 10 miles per hour, no passenger service is allowed, and no train may carry more than five cars containing hazardous materials. We found in our 1994 report that the number of accidents on excepted track had increased from 22 in 1988 to 65 in 1992—a 195-percent increase. Similarly, the number of track defects cited in FRA inspections increased from 3,229 in 1988 to 6,057 in 1992. However, with few exceptions, FRA cannot compel railroads to correct these defects. 
According to FRA, the railroads have applied the excepted track provision far more extensively than envisioned. For example, railroads have transported hazardous materials through residential areas on excepted track or intentionally designated track as excepted to avoid having to comply with minimum safety regulations. In November 1992, FRA announced a review of the excepted track provision with the intent of making changes. FRA viewed the regulations as inadequate because its inspectors could not write violations for excepted track and railroads were not required to correct defects on excepted track. FRA stated that changes to the excepted track provision would occur as part of its rulemaking revising all track safety standards. In February 1996, FRA reported that the task of revising track safety regulations would be taken up by FRA’s Railroad Safety Advisory Committee. FRA noted that this committee would begin its work in April 1996 but did not specify a date for completing the final rulemaking. The Congress had originally directed FRA to complete its rulemaking revising track safety standards by September 1994. In September 1993, we issued a report examining whether Amtrak had effective procedures for inspecting, repairing, and maintaining its passenger cars to ensure their safe operation and whether FRA had provided adequate oversight to ensure the safety of passenger cars. We found that Amtrak had not consistently implemented its inspection and preventive maintenance programs and did not have clear criteria for determining when a passenger car should be removed from service for safety reasons. In addition, we found that Amtrak had disregarded some standards when parts were not available or there was insufficient time for repairs. For example, we observed that cars were routinely released for service without emergency equipment, such as fire extinguishers. As we recommended, Amtrak established a safety standard that identified a minimum threshold below which a passenger car may not be operated, and it implemented procedures to ensure that a car will not be operated unless it meets this safety standard. In reviewing FRA’s oversight of passenger car safety (for both Amtrak and commuter rail), we found that FRA had established few applicable regulations. As a result, its inspectors provided little oversight in this important safety area. For more than 20 years, the National Transportation Safety Board has recommended on numerous occasions that FRA expand its regulations for passenger cars, but FRA has not done so. As far back as 1984, FRA told the Congress that it planned to study the need for standards governing the condition of safety-critical passenger car components. Between 1990 and 1994, train accidents on passenger rail lines ranged between 127 and 179 accidents each year (see app. 2). In our 1993 report, we maintained that FRA’s approach to overseeing passenger car safety was not adequate to ensure the safety of the over 330 million passengers who ride commuter railroads annually. We recommended that the Secretary of Transportation direct the FRA Administrator to study the need for establishing minimum criteria for the condition of safety-critical components on passenger cars. We noted that the Secretary should direct the FRA Administrator to establish any regulations for passenger car components that the study shows to be advisable, taking into account any internal safety standards developed by Amtrak or others that pertain to passenger car components. 
However, FRA officials told us at the time that the agency could not initiate the study because of limited resources. Subsequently, the Swift Rail Development Act of 1994 required FRA to issue initial passenger safety standards within 3 years of the act's enactment and complete standards within 5 years. In 1995, FRA referred the issue to its Passenger Equipment Safety Working Group consisting of representatives from passenger railroads, operating employee organizations, mechanical employee organizations, and rail passengers. The working group held its first meeting in June 1995. An advance notice of proposed rulemaking is expected in early 1996, and final regulations are to be issued in November 1999. Given the recent rail accidents, FRA could consider developing standards for such safety-critical components as emergency windows and doors and safety belts as well as the overall crashworthiness of passenger cars. In conclusion, safety at highway-railroad crossings, the adequacy of track safety inspections and enforcement, and the safety of passenger cars operated by commuter railroads and Amtrak will remain important issues for Congress, FRA, the states, and the industry to address as the nation continues its efforts to prevent rail-related accidents and fatalities. Note 1: Analysis includes data from Amtrak, Long Island Rail Road, Metra (Chicago), Metro-North (New York), Metrolink (Los Angeles), New Jersey Transit, Northern Indiana, Port Authority Trans-Hudson (New York), Southeastern Pennsylvania Transportation Authority, and Tri-Rail (Florida). Note 2: Data for Amtrak include statistics from several commuter railroads, including Caltrain (California), Conn DOT, Maryland Area Rail Commuter (excluding those operated by CSX), Massachusetts Bay Transportation Authority, and Virginia Railway Express. Railroad Safety: FRA Needs to Correct Deficiencies in Reporting Injuries and Accidents (GAO/RCED-89-109, Apr. 5, 1989). Railroad Safety: DOT Should Better Manage Its Hazardous Materials Inspection Program (GAO/RCED-90-43, Nov. 17, 1989). Railroad Safety: More FRA Oversight Needed to Ensure Rail Safety in Region 2 (GAO/RCED-90-140, Apr. 27, 1990). Railroad Safety: New Approach Needed for Effective FRA Safety Inspection Program (GAO/RCED-90-194, July 31, 1990). Financial Management: Internal Control Weaknesses in FRA's Civil Penalty Program (GAO/RCED-91-47, Dec. 26, 1990). Railroad Safety: Weaknesses Exist in FRA's Enforcement Program (GAO/RCED-91-72, Mar. 22, 1991). Railroad Safety: Weaknesses in FRA's Safety Program (GAO/T-RCED-91-32, Apr. 11, 1991). Hazardous Materials: Chemical Spill in the Sacramento River (GAO/T-RCED-91-87, July 31, 1991). Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness (GAO/RCED-92-16, Nov. 5, 1991). Railroad Safety: Accident Trends and FRA Safety Programs (GAO/T-RCED-92-23, Jan. 13, 1992). Railroad Safety: Engineer Work Shift Length and Schedule Variability (GAO/RCED-92-133, Apr. 20, 1992). Amtrak Training: Improvements Needed for Employees Who Inspect and Maintain Rail Equipment (GAO/RCED-93-68, Dec. 8, 1992). Amtrak Safety: Amtrak Should Implement Minimum Safety Standards for Passenger Cars (GAO/RCED-93-196, Sep. 22, 1993). Railroad Safety: Continued Emphasis Needed for an Effective Track Safety Inspection Program (GAO/RCED-94-56, Apr. 22, 1994). Amtrak's Northeast Corridor: Information on the Status and Cost of Needed Improvements (GAO/RCED-95-151BR, Apr. 13, 1995).
Railroad Safety: Status of Efforts to Improve Railroad Crossing Safety (GAO/RCED-95-191, Aug. 3, 1995). | GAO provided information on the safety of highway-railroad crossings, commuter passenger rail, and the adequacy of track safety inspections. GAO found that: (1) the leading cause of death associated with the railroad industry involved railroad crossing accidents; (2) about half of rail-related deaths occur because of collisions between trains and vehicles at public railroad crossings; (3) in 1994, 501 people were killed and 1,764 injured in railroad crossing accidents; (4) to improve the safety of railroad crossings, the Department of Transportation (DOT) must better target funds to high-risk areas, close more railroad crossings, install new technologies, and develop educational programs to increase the public's awareness of railroad crossings; (5) DOT plans are costly and will require congressional approval; (6) the Federal Railroad Administration (FRA) is unable to adequately inspect and enforce track safety standards or direct transportation officials to the routes with the highest accident potential because its database contains inaccurate information; and (7) Congress has directed FRA to establish sufficient passenger car safety standards by 1999.
American Indian tribes are among the most economically distressed groups in the United States. According to data from the 2000 U.S. Census, American Indian tribes’ median per capita income of $9,200 in 1999 was less than half the $21,600 per capita income for the entire U. S. population. In addition, the percentage of American Indians with household incomes at or below the official poverty level averaged 30 percent across tribes—more than double the 12 percent for the U.S. population as a whole. According to tribal officials and government agencies, conditions on and around tribal lands generally make successful economic development more difficult. These officials indicated that American Indian communities often are lacking in adequate basic infrastructure, such as water and sewage systems. These communities also frequently lack sufficient technology infrastructure, such as telecommunications lines that are commonly found in other American communities. Without such infrastructure, tribal communities often find it difficult to compete successfully in the economic mainstream. A 1999 EDA study that assessed the state of infrastructure in American Indian communities found that these communities also had other disadvantages that made successful business development more difficult. This study found that the high cost and small markets associated with investment in Native communities continued to deter widespread private sector involvement. Another factor that creates more difficult business conditions in some tribal areas has been downturns in regionally significant industries. For example, tribes in the Northwest and Alaska have been hurt by the decline in the fishing and timber industries in their areas. To help address the needs of Indian tribes, various federal agencies provide assistance, including economic development. BIA is charged with the responsibility of implementing federal Indian policy. BIA assists tribes in various ways, including providing for social services, developing and maintaining infrastructure, and providing education services. BIA also attempts to help tribes develop economically by providing resources to administer tribal revolving loan programs and guaranteed loan programs to improve access to capital in tribal communities and providing assistance in obtaining financing from private sources to promote business development initiatives on or near Indian reservations. In addition to the support provided by BIA, other agencies with significant programs for tribes include the Department of Health and Human Services, which provides funding for the Head Start Program and the Indian Health Service; the Department of Housing and Urban Development, which provides support for community development and housing-related projects; and the Department of Agriculture, which provides support for services pertaining to food distribution, nutrition programs, and rural economic development. The Department of Commerce’s EDA is an agency that provides assistance to tribes specifically for economic development. EDA’s mission is to create wealth and minimize poverty in economically distressed rural and urban communities that experience high unemployment, low income, or other severe distress. EDA fulfills its mission with grant programs, including six programs explained in table 1. EDA has six regional offices that administer its grant programs across multistate areas. 
Each regional office accepts preapplication investment proposals from prospective grantees, including American Indian tribes and Alaska Natives. Based on established regulations, EDA regional officials encourage only those investment proposals that will significantly benefit areas experiencing or threatened with substantial economic distress to continue with the application process. Before receiving a grant, an entity must submit a preapplication proposal to an EDA Economic Development Representative responsible for that area. After preliminary reviews by various EDA regional office staff, each preapplication proposal is considered by the region’s Investment Review Committee, which consists of the Regional Director, Regional Counsel, and Division Chiefs, to ensure that entity is eligible to receive funds and that the project is likely to provide benefits meeting EDA’s criteria. The Investment Review Committee will then recommend whether the entity should be invited to submit an application. EDA headquarters reviews the recommendation action for quality assurance. According to Commerce, after receiving quality control clearance and depending on the type of grant program, the Regional Director approves the decision to invite the entity to submit a formal application. After this application is received and found to be complete, the grant funds will be awarded. During the 1990s, the goals EDA generally sought to meet through its grants were to fund projects that would create jobs and produce income for distressed communities. However, since 2002 EDA has placed more emphasis on projects that create higher-skill, higher- wage jobs and that are market based and likely to attract private sector investment. Activities that tribes are authorized to undertake as a result of the Indian Self-Determination and Education Assistance Act, as amended, could help them develop economically. This act authorizes Indian tribes to take over the administration of programs that had been previously administered on their behalf by the Departments of the Interior or Health and Human Services. In passing the act, Congress recognized that the government’s administration of Indian programs prevented tribes from establishing their own policies and making their own decisions about program services. The act allowed tribes to contract for a range of Indian programs that are managed by the Interior Department’s BIA and Health and Human Services’ Indian Health Service on their behalf. According to the act, tribal contractors must receive funding equivalent to what each of the agencies would have provided if they had operated the programs. The act, as amended, also provides that tribal contractors are to receive funding for the reasonable costs of activities that they must perform to manage a program’s contract—known as contract support costs. Once having contracted a program, a tribe assumes responsibility for all aspects of its management, such as hiring program personnel, conducting program activities, delivering program services, and establishing and maintaining administrative and accounting systems. Typical programs that are contracted by tribes include such BIA programs as law enforcement, social services, road maintenance, and forestry as well as such Indian Health Service programs as hospitals and health clinics; dental care; and mental health services. Congress has amended the act several times since 1975. 
A series of amendments from 1984 through 1994 streamlined contracting requirements, provided funds for contract support, and allowed more participation by tribal governments in federal rulemaking. In 1988, a new title was added to the 1975 act authorizing the creation of the Self-Governance Demonstration Project. This new title, known as Title III, generally enables participating tribes to receive funding for multiple federal programs in one lump sum under a self-governance compact. Tribes operating under a self-governance compact have the flexibility to administer funds for multiple programs as they see fit, rather than abiding by the circumstances of single-program contracts. The 1988 amendments also provided for reasonable contract support costs to comply with the terms of the contract and to support prudent management. The Tribal Self-Governance Act Amendments of 1994 directed the Secretary of the Interior to negotiate contracts annually with participating tribes to enable the tribes to plan, conduct, consolidate, and administer functions and activities that were administered by the Secretary. Through the act and the subsequent amendments, Congress envisioned that Indian tribes and Indian people are best able to determine the most effective and efficient provision of government programs, services, and economic development for Indian people. The funding that EDA provided to tribes between 1993 and 2002 represented a small portion of the economic assistance that EDA provided during this 10-year period. The extent to which tribes received EDA grants varied across states, and the grants were used for various purposes. From 1993 to 2002, EDA provided funding for 63 enterprise projects intended to generate revenues, but these projects have had mixed success in producing economic development. EDA has also provided a small amount of funding for business development activities, including several revolving loan funds, which were used to fund tribal enterprises or training. In addition, 23 tribes received EDA grants for infrastructure projects that tribal officials reported as having resulted in subsequent economic development activities for their tribes. During the 10-year period, 99 tribes and tribal organizations received EDA grants for planning activities, including feasibility studies, and almost all of the tribes that received these grants either received other funding from EDA or obtained economic development aid from other government agencies. The funding that EDA provided to tribes represents a small portion of overall EDA grants. We obtained data from EDA that included all grants it made to American Indian tribes during the years 1993 to 2002. Our analysis of these data indicated that 143 Indian tribes and tribal organizations received a total of $112 million in EDA grants during this 10-year period. Comparing this with the total amount of grants that EDA awarded, EDA grants to tribes represented 3 percent of the $3.4 billion that the agency had awarded overall between 1993 and 2002. Figure 1 shows the relative proportion of funding that tribes received each year from EDA, which ranged between 2.1 percent and 5.3 percent of total EDA grant appropriations during this period.
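Because figure 1 is not reproduced here, the short sketch below shows the underlying share calculation. Only the 10-year totals ($112 million of $3.4 billion) come from the text; the yearly dollar amounts are hypothetical placeholders, not EDA's actual appropriation figures.

```python
# Share of EDA grant dollars awarded to tribes, overall and by year.
# The 10-year totals come from the text; yearly figures are hypothetical.
total_to_tribes = 112_000_000
total_awarded = 3_400_000_000
print(f"10-year share: {total_to_tribes / total_awarded:.1%}")  # about 3%

yearly = {  # year: (grants to tribes, total EDA grants), hypothetical values
    1993: (9_000_000, 340_000_000),
    2001: (18_000_000, 340_000_000),
}
for year, (tribes, total) in yearly.items():
    print(year, f"{tribes / total:.1%}")  # shares within the 2.1-5.3 percent range
```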
Using information from the 2000 Census, we calculated that approximately 3.5 percent of the persons in the United States with income below the official poverty level are American Indians or Alaska Natives. Therefore, the proportion of EDA funds going to tribes appears to be similar to the proportion of the U.S. population living in poverty that these tribes represent. Based on our analysis of EDA data, 125 (or 22 percent) of the 562 federally recognized tribes in the lower 48 states and Alaska received EDA grants between 1993 and 2002. EDA also provided grants to 18 tribal organizations or Alaska Native entities. According to EDA officials, other tribes did not receive any EDA grants for various reasons. For example, they said that the demand for funding exceeds the available grant funding. Also, one EDA official said that some tribes are unable to propose a project that appears likely to generate sufficient economic development. Of the $112 million in total grants to tribes that EDA awarded between 1993 and 2002, $86 million went to 113 tribes in the lower 48 states. This included grants to 100 federally recognized tribes and 13 tribal organizations. The remaining $26 million was awarded to 30 Native entities in Alaska, including grants to 25 federally recognized Native entities and 5 Alaska Native organizations representing more than one entity. Our analysis indicated that the amount of EDA funding to tribes varied across states. For example, Alaska accounted for almost 23 percent of EDA grants to all tribal entities during the years 1993 to 2002, as shown in figure 2. In 2001 alone, Native entities in Alaska received over half of all EDA grants to tribes, much of which were awarded under EDA’s disaster relief appropriation for projects to address the slump in Alaska’s fishing industry. In addition to more tribes in some states receiving grants, our analysis showed that EDA-awarded grants to tribes also varied on a per capita basis across states. As figure 3 shows, tribes in seven states, including Colorado, Florida, Idaho, Maine, Massachusetts, North Carolina, and Oregon, received more than $600 per individual in grants from EDA between 1993 and 2002. In contrast, tribes in at least eight states received no EDA grant funding during this period. The grants EDA made to tribes were for various purposes. In the data we analyzed, EDA categorized the grants it awarded according to the various funding programs it administers, such as for planning, public works, or economic adjustment. However, upon review of these data, we found that EDA used funds from these different categories to provide grants for similar types of projects. For example, an economic adjustment grant or a public works grant could be used for either the planning of a project or the construction of a project. 
Therefore, for our analysis, we grouped the various grants into the following categories according to the type of project or activity funded: Enterprise projects: grants used to develop projects designed to generate income for the tribe, such as a cannery, a resort, or a sawmill; Infrastructure projects: grants used for the design and construction of public works infrastructure (e.g., roads, highways, and sewers) that would serve as the foundation for general economic development activities; Business development projects: grants used to fund loan funds, training, and other business development projects, including those for business incubators, revolving loan funds (RLFs), training and capacity building, and other assistance that enhances the tribes' economic development activities; and Planning/feasibility grants: grants used for general planning purposes, such as paying for staff salaries or the broad administration of the tribes' planning departments, as well as for developing plans, analyses of projects' environmental impact, and feasibility studies for specific economic development projects. Based on the results of our analysis of EDA data, we found that the largest portion of the dollars EDA awarded to tribes was for enterprise projects. As figure 4 shows, about half of the $112 million that EDA awarded to tribes between 1993 and 2002 was for enterprise projects. Grants for planning and feasibility studies represented the next largest portion of the grants. In addition to accounting for 27 percent of the grant dollars EDA awarded from 1993 to 2002, planning and feasibility studies were the most frequent type of grant that tribes received. As shown in figure 5, more tribes received planning and feasibility grants than any other type. The grants that EDA provided to tribes have had mixed success in creating revenue-generating enterprises. Of the $112 million that EDA provided to tribes from 1993 to 2002, as shown in figure 6, $54 million, or nearly half, went to fund 63 tribal enterprise projects, including almost $20 million for projects in Alaska. As shown in figure 6, most of the enterprise projects EDA funded for tribes were industrial enterprises, such as wood products plants, or commercial projects, such as retail businesses and shopping centers. Most of the EDA-funded enterprise projects in Alaska involved community or cultural centers, which provided facilities for community and tourist activities. Some of the EDA grants also funded natural resource enterprises involving fish, wildlife, or horticulture restoration. Figure 7 describes an example of one enterprise project that EDA funded. This grant helped a tribe fund the development of a horticultural enterprise that grows vegetation to improve fishing areas in local rivers. Although the project was producing some benefits, at the time we contacted the tribe operating it, the tribe was using funds from other sources to subsidize its operations, though officials hoped it would eventually become profitable. The enterprise projects EDA funded for tribes have had mixed success in helping tribes create revenue-generating enterprises. We gathered information on 59 of the 63 projects funded by EDA between 1993 and 2002—12 by site visits and 47 by telephone interviews with tribal officials. Tribal officials we contacted in late 2003 and early 2004 reported that 31 projects had been completed and had operating results; 3 were completed but had just opened, so no operating results were yet available; and 25 had not yet been completed, including 20 projects funded in 2001 and 2002.
As shown in figure 8, of the 31 completed projects with results, tribal officials reported about half were either profitable or were earning enough to cover their operating costs. However, the remaining projects were either requiring subsidies or had ceased operations. Tribal officials predicted that 4 of the 7 projects currently being subsidized could become sustainable given more time or further expansion. Five of the 7 projects that failed were industrial enterprises, including a sawmill, a bottled water plant, and a plant to manufacture fiberglass household furnishings. According to tribal officials, the reasons for failure included market changes or downturns, lack of an ongoing source of funding to keep the enterprise afloat, management problems, and environmental problems. Four of the 7 failed enterprise projects were funded between 1993 and 1995. However, in recent years, some tribes reported that they have been able to keep fledgling enterprises afloat by subsidizing them with revenues from gaming or other tribal enterprises, and the failure rate for EDA-funded enterprise projects has decreased. Since 1996, EDA funded 24 tribal projects that had been completed and, of these, 14 were either profitable or covering their costs, 3 had failed, and 7 were still being subsidized (see appendix II, figure 30 for more information on the outcome of projects by year of funding). Figure 9 provides an example of a project that, although it has not failed, is used only once a year and must have its operating costs subsidized by the tribes that operate it. Our analysis of the enterprise projects that EDA funded indicated that most of the projects that tribes developed with EDA funding had not attracted funding from private entities. For most of the enterprise projects we reviewed, EDA funds covered between 30 percent and 80 percent of the total project costs. As shown in figure 10, tribal officials reported direct private sector investment in 17 percent of the projects, though in some cases the tribal share of project funding included funds borrowed by the tribes from private financial institutions. EDA officials recognize the difficulty tribes face in attracting private investment on Indian lands and sometimes make allowances for the amount of matching funds they require or suggest to tribes that they locate projects outside of reservation land. The grants that EDA provided to tribes for enterprise projects also appeared to create limited numbers of jobs. Tribal officials told us that many of the EDA-funded enterprise projects had resulted in the creation of jobs for tribal members, although the number of jobs created generally was less than 10 per project. As shown in the figure 11, 20 (59 percent) of the enterprise projects resulted in 10 or fewer jobs. Although most projects did not create a large amount of jobs, some projects that EDA funded were more successful in employing larger numbers of people. Of the 34 completed projects we reviewed, 4 projects resulted in the creation of 50 or more jobs. These included the following projects: One Northwest tribe received a $1.6 million EDA grant in 2002 to help fund the opening of a plywood processing plant that uses wood from the tribe’s own forests. The total project cost was $10 million, with additional funds coming from other federal and state grants, the tribe, and a bank loan. 
Tribal officials reported that the enterprise has created 265 jobs in a generally depressed rural area and is generating enough revenue to cover costs, including debt servicing. One Southwest tribe we visited received a $2.5 million EDA grant in 2000 to help fund a shopping center. Total project cost was nearly $4.5 million, including an earlier $1 million investment by the tribe to install basic infrastructure for the site. The project was built to accommodate seven retail businesses. At the time of our visit, the center had five tenants—a grocery store, pizza restaurant, laundromat, hair salon, and video store—and two vacancies. According to tribal officials, the project was still being subsidized by the tribe but was expected to be profitable in 3 to 5 years. According to a tribal report, the center had generated 70 jobs, provided retail services to local consumers, stopped a portion of the leakage of tribal dollars to off-reservation towns, and provided opportunities for some tribal members to go into business. A Montana tribe received three EDA grants totaling $1 million between 1996 and 2002 for expansion of a tribal electronics enterprise. Total cost of the expansion projects was $2 million. According to tribal officials, the projects generated a total of 65 new jobs, including participants in a welfare-to-work program. The first two projects were profitable, but the third project was not yet turning a profit as of early 2004. Although the projects EDA funded had only mixed success in generating revenue and large numbers of jobs, tribal officials told us that the EDA-funded enterprise projects had produced other benefits. In some cases, tribal officials said that the projects EDA funded resulted in the creation of jobs and revenue at other entities. For example, one tribe used a $350,000 EDA grant in 1997 to help fund construction of a fish hatchery. This enterprise did not make a profit, but 15 jobs were created at the hatchery. The project had indirect economic benefits for the tribe because it helped to support the local fishing and tourism industry, which employed 15 to 20 tribal members as fishing guides and supported 8 seasonal jobs at a campground. The hatchery also generated increased business for local restaurants and motels and enabled tribal subsistence fishermen to catch fish for their own consumption. (See app. II, fig. 29, for a list of the completed enterprise projects for which we obtained information.) In light of the mixed success that EDA-funded enterprise projects experienced, tribes may find obtaining EDA funding in the future more difficult because of changes in the agency's criteria for awarding grants. Since fiscal year 2002, EDA's criteria for approving grant applications have required its staff to seek to fund projects that create jobs requiring greater skills and paying higher wages. The criteria also emphasize projects more likely to attract private sector investment. EDA officials informed us that, in meeting these criteria, tribes have to compete with other entities, such as state and local governments and nonprofit organizations, for EDA grants. As a result, tribes in rural areas, in particular, find it difficult to propose projects that are likely to attract private sector investment or result in jobs that pay high wages.
For example, EDA officials we spoke to in one EDA region noted that communities closer to urban areas were more likely than the rurally located tribes in their region to be able to propose projects, such as industrial parks, that could attract high-technology firms. From 1993 through 2002, EDA also awarded grants to tribes to be used for loan funds, business development, and training. Grants for these purposes totaled $4.9 million, comprising 4 percent of the total $112 million that EDA provided to tribes during that 10-year period. About $2.2 million of the EDA grants were to support RLFs. These RLFs are pools of money loaned out for revenue-generating enterprises. Repayments of loan principal and interest replenish the RLF, creating a revolving source of capital to finance additional loans and further develop the local economy. Of the $2.2 million EDA awarded for RLFs between 1993 and 2002, $950,000 was to provide the initial capital—seed money—to get three new RLFs started, and about $1.3 million was used to support business development and training programs associated with two existing RLFs (see app. II for details). Tribal officials reported that the RLFs EDA has supported have successfully funded both tribal enterprises and small businesses started by individual tribal members. For example, a tribe in Northern California has administered one of these RLFs since 1977, when EDA originally provided initial capital of $1.5 million to finance loans relating to the tribe's forest industries. Since 1994, EDA has provided $285,000 to fund a business training program that assists applicants seeking funding from this RLF with instruction on preparing business plans, contract agreements, and credit applications and on the use of computers and other office equipment. According to documents provided by fund officials, over the years, this RLF has made 356 loans to businesses, resulting in 658 new jobs, and attracted $7.8 million in private sector investment. Among the projects that tribal officials told us had received funding from this RLF were a shopping center, a motel, a restaurant, a gas station, and a gravel enterprise. Tribal officials told us that most of the projects this RLF has funded have provided jobs or other benefits, although not all are operating profitably. To keep the more marginal enterprises operating, tribal officials reported using profits from the tribe's own successful enterprises to subsidize the financing costs of the other enterprises. By keeping these enterprises in operation until their loans are paid off, the community benefits from the additional jobs and services, and the enterprises receive additional time to become sustainable on their own. According to documents provided by fund officials, total loan defaults as a percentage of the amount loaned out since the fund's inception were 5.4 percent, and through repayments of principal and interest the fund's capital pool has more than doubled to over $3.2 million. Although we did not attempt to independently verify the accuracy of these figures, we visited several of the businesses the tribe indicated had received funding. For example, the gravel enterprise, in its first year of production since receiving an $850,000 loan from the RLF to purchase rock-crushing machinery, was in full production, and enterprise officials expected to turn a profit within 2 years and hoped to add an asphalt plant in the future.
At the time of our visit, we saw that this facility was operating actively with considerable truck traffic into and out of the facility. The amount of loan activity by the three newly-funded RLFs has been limited by start-up challenges and the amount of money in these funds. According to RLF officials we interviewed, there are many challenges to establishing a successful RLF, including finding additional funds to match EDA’s seed money and cover operating costs until sufficient interest income begins to be received. Other challenges include finding and hiring a competent, experienced loan manager; training loan applicants in such areas as drawing up business plans; and establishing relationships and gaining the confidence of financial institutions to leverage the loans. One of the RLFs took 3 years from the time its EDA grant was approved until its first loan was made. At the time of our survey, the three RLFs that EDA had funded since 1998 reported each had made between 7 and 14 loans. Because new loans cannot be made until older loans are repaid, considerable time is required for RLFs to grow and become more active. Figure 12 shows an example of a resort cabin business that was funded by a loan from an EDA-supported RLF. EDA also funded a grant that was used for the design of the project. The tribe operating the project reported that the project is generating enough revenue to cover its costs despite a short tourist season, but they hoped that an expansion of the project could make it profitable. In addition to grants to support RLFs, EDA also provided tribes with funding for training or business development, but according to tribal officials, the success of these programs has been hampered by lack of operating funds. Between 1993 and 2002, EDA provided Indian tribes with $2.1 million in grants to start 6 training programs. The largest of these was a $1.2 million grant in 2000 to renovate a building for a vocational training center in Alaska that, according to local officials, has trained 830 students. A California tribe that, according to a tribal official, lost 150 jobs due to timber industry closures, used a $66,000 grant in 1998 to establish a training program that has resulted in 2 entrepreneur classes, 100 individual business counseling sessions, and 5 start-up businesses. All 6 of these programs sought operating funds from other sources, such as state and federal agencies. However, 4 of the 6 programs reported difficulties getting on-going operating funds—two closed down due to lack of funds, one transferred its facilities to a university program, and the fourth reported that its program was in jeopardy. In addition to training programs, EDA also provided about $500,000 to six other Indian tribes and organizations for business development activities between 1993 and 2002. In most cases the grants were for one-time training workshops or conferences on business development related topics. In two cases, the grants were used to recruit several businesses for business/industrial parks. One tribe, which received a $75,000 grant in 2000, reported finding one current and two future tenants for their park and used part of the grant to conduct a seminar on how tribal businesses can apply for government contracts under the Small Business Administration’s 8(a) minority contracting program. About 20 percent of the funding that EDA provided to Indian tribes between 1993-2002 was for projects to improve infrastructure for tribal lands. 
According to government and tribal officials, many rural Indian communities lack the infrastructure needed to support industrial and commercial development, such as roads, water and waste treatment pipelines, and processing facilities. A 1999 EDA study cited lack of funding as the overwhelming reason why tribes were not making the infrastructure investments needed to facilitate economic development. The tribal officials we spoke with said that obtaining sufficient funding for infrastructure development was particularly difficult because such projects do not always offer an immediate return on the investment. However, in some cases tribes with revenues from gaming and other tribal enterprises were able to use these sources, in addition to funding from EDA and other federal grants, to finance improvements to their infrastructure. Our analysis indicated that EDA provided $22.1 million in grants to 23 tribes for 26 infrastructure projects between 1993 and 2002. In many cases, these funds were supplemented by grants from other federal agencies. Most of these projects involved construction or expansion of water and waste treatment systems, electrical lines, and roads. Other grants that EDA awarded were used to improve dock and harbor facilities, to shore up riverbanks for flood control, and to install telecommunications equipment. For example, one coastal tribe we visited used $2.6 million in EDA funds to construct a breakwater and marina to support and protect the tribal fishing boats and to bolster the tribe's seasonal boating-related tourism industry (see figure 13). Another tribe we visited received a $1 million EDA grant in 1997 to construct the water and sewer pipes, roads, and electrical lines needed for a new industrial park. Tribal officials from the tribes that received EDA infrastructure grants reported that the funded projects facilitated either current or anticipated future business development. We gathered information on 25 of the 26 EDA infrastructure grants, visiting 4 during our site visits and conducting telephone interviews for the remainder. Nineteen of the 25 projects had been completed and, according to tribal officials, all have led to economic development for their tribes. For example, one tribe received a $1.1 million EDA grant in 1995 to upgrade and extend its water and sewer systems, which enabled the development of a resort, hotel, and casino complex with more than $25 million in annual revenues. Tribal officials reported that the complex has created more than 550 jobs, which helped reduce the tribal unemployment rate from 37 percent to 11 percent. According to these officials, the success of this project is spurring further economic development, including a planned industrial park. Figure 14 shows the various developments that were facilitated by the infrastructure grants that EDA provided to tribes. The benefits of some of the infrastructure projects EDA had funded had yet to be realized. Of the 25 infrastructure projects that we reviewed, construction on 6 had not yet been completed, 4 had only recently opened, and tribal officials told us that it was too soon to realize most of the anticipated development benefits. About one-fourth of the total dollars that EDA awarded to tribes were provided to fund planning and feasibility study efforts, which appeared to help these tribes identify their needs and obtain other funding for their economic development efforts.
According to the Department of Commerce's fiscal year performance report, EDA considers funding distressed communities' planning efforts critical to effective and sustainable economic development. Based on our analysis of EDA grants, 99 tribes and organizations received $30 million in EDA grants to conduct planning activities or to fund the preparation of feasibility studies from 1993 to 2002. The grants awarded for planning went to 72 tribes and 7 tribal organizations in the lower 48 states and 6 Alaska Native villages or organizations. These grants, which typically ranged from $30,000 to $65,000, were generally used by tribes to pay part or all of the salary of an individual tasked with developing economic development plans for the tribe. More than half of the tribes receiving planning grants received them annually throughout the 10-year period, and many have received these grants continuously since the 1970s. Over 90 percent of the tribal officials in our survey indicated that the planning grants were crucial or very important in achieving success in their tribes' economic development. However, some officials reported that the effectiveness of their planning grants was limited because of a lack of funds to implement the projects they envisioned or a lack of support or consensus among the tribal leadership as to what projects to pursue. EDA also helped fund feasibility studies for 38 tribes. These grants were awarded to 34 tribes in the lower 48 states and 4 in Alaska. According to tribal officials, performing a feasibility study before embarking on a potential project can help a tribe determine whether the project would benefit it. Of the 17 feasibility studies that we obtained information on from our telephone interviews and site visits, 3 of the projects studied were successfully implemented, 4 were in the planning stage, 3 were not implemented due to a lack of funds, 3 were not implemented due to a change in direction by the tribal council, and 4 were determined not to be feasible. As shown in figure 15, most tribes that receive planning or feasibility study grants also receive project funding from EDA or other federal agencies. The EDA planning grants appeared to help tribes successfully implement EDA enterprise projects. According to the information we obtained on 25 completed enterprise projects in the lower 48 states that EDA funded from 1993 to 2002, 9 of the 14 that had also received EDA planning grants were either profitable or covering their costs, compared with 4 of 11 projects done by tribes that had not obtained EDA planning grants. In addition, 4 of the 7 tribal enterprise projects that had failed during this period were implemented by tribes that had not received EDA planning grants. As authorized under the Indian Self-Determination and Education Assistance Act, as amended, nearly all tribes enter into contracting or self-governance arrangements to operate their own tribal programs and services. Based on our analysis of the relationship between contracting and changes in tribes' economic profiles, we found that self-governance tribes and those that had contracted to operate a high proportion of their programs and services generally experienced greater growth in their employment levels but had not generally shown greater gains in income levels.
Our analysis also suggested that tribes that received a high proportion of their income from federal contracts and grants generally experienced lower income growth than tribes that had been able to find other sources of revenue. Despite these results, tribal representatives saw advantages to running their own programs, including the experience such arrangements provide in administering their own affairs and the increased flexibility provided for tailoring programs to meet local needs. However, tribal officials we spoke to said that one disadvantage of contracting is that the amounts provided under such arrangements can lead to funding shortfalls that divert money away from other tribal activities. Furthermore, tribal representatives and others identified other factors, such as the tribes' location, availability of resources, ability to generate gaming revenues, access to capital, and quality of tribal governance, as significant influences on the ability of tribes to develop their tribal economies successfully. Because we were not able to account for the extent to which these factors also affected tribes' economic development, our contracting analysis examined only the relationship between such activities and changes in economic profile and could not assess causation. The Indian Self-Determination Act allows tribes to enter into various arrangements with BIA or the Indian Health Service to assume the operation of many of the programs and services previously provided by the agencies. From the list of federally recognized tribes and from 2000 Census data, we identified 219 tribes in the lower 48 states that had 100 or more Native Americans living in the tribal area. According to BIA information, 43 of the 219 were tribes that had entered into self-governance arrangements with BIA. As a result, these tribes operated most of their own tribal functions and services under a funding compact agreement with BIA. By analyzing Single Audit Act data that show funding provided by federal agencies, we determined that nearly all of the remaining 176 tribes operated many of their tribal functions and services under contracts or other agreements with BIA. Our analysis of the relationship between contracting and economic profile changes for Indian tribes showed that tribes that contracted more generally experienced greater employment growth than did tribes that contracted less. To identify the extent to which tribes were contracting, we grouped the 219 tribes in the lower 48 states with populations greater than 100 into three categories. The first category included the 43 tribes that had entered into self-governance compacts with BIA. Such tribes generally have assumed the operation of most of the services used by tribal members. For the remaining 176 tribes that were non-self-governance, we analyzed how much funding these tribes received from BIA contracts and grants from 1998 to 2000, in total and on a per capita basis. Based on these analyses, we categorized the 121 tribes with annual per capita BIA contract amounts exceeding $580 and total annual BIA contracting amounts greater than $300,000 as high-contracting tribes, and we categorized the remaining 55 tribes, whose per capita or total contracting amounts were less than these thresholds, as low-contracting tribes. To analyze the relationship between contracting and changes in tribal economic profiles, we compared how various indicators of economic well-being from Census Bureau data had changed for these three groups of tribes.
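A minimal sketch of the grouping rule described above follows. The $580 per capita and $300,000 total thresholds are the ones stated in the text; the field names and sample values are hypothetical, and the self-governance designation takes precedence over the contracting thresholds.

```python
# Assign a tribe to one of the three analysis groups described above:
# self-governance, high-contracting, or low-contracting.
PER_CAPITA_THRESHOLD = 580   # annual per capita BIA contract dollars
TOTAL_THRESHOLD = 300_000    # total annual BIA contract dollars

def contracting_group(tribe):
    # Self-governance compact tribes form their own category.
    if tribe["self_governance"]:
        return "self-governance"
    per_capita = tribe["annual_bia_contracts"] / tribe["native_population"]
    if per_capita > PER_CAPITA_THRESHOLD and tribe["annual_bia_contracts"] > TOTAL_THRESHOLD:
        return "high-contracting"
    return "low-contracting"

sample = {"self_governance": False, "annual_bia_contracts": 650_000, "native_population": 900}
print(contracting_group(sample))  # high-contracting (650,000 / 900 is about 722, above 580)
```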
Within each group, the changes in their economic indicators varied greatly, with some tribes experiencing significant improvement and others experiencing declines between 1990 and 2000 (see app. III, table 8 for complete data). As shown in figure 16 below, our analysis showed that high-contracting and self-governance tribes experienced higher growth on average in their employment levels than did tribes that contracted less. However, the high-contracting and self-governance tribes we analyzed did not, on average, experience greater growth in the income of the Native Americans living on their lands. As the figure also shows, high-contracting and self-governance tribes’ per capita incomes did not grow faster than those of tribes contracting less. Additionally, these tribes did not experience greater improvement in the proportion of Native Americans living on their tribal lands with incomes above the poverty level. As a result of the wide variability in our data, our statistical tests indicated that the differences in income growth and proportion above the poverty level shown below were not statistically significant. Although their incomes did not grow faster on average than those of low-contracting tribes, the high-contracting and self-governance tribes in our analysis were less likely to experience declines in their income-related measures than the low-contracting tribes. As figure 17 shows, a greater proportion of the high-contracting and self-governance tribes experienced positive growth on both employment and income indicators over the 10-year period from 1990 to 2000. For many tribes, federal contracts and grants are the major funding source for tribal jobs and income. For example, of the 53 tribes we contacted by telephone or during our site visits, 68 percent reported that tribal government, which is funded largely from federal sources, was the main source of jobs and income for tribal members. To examine further the relationship between level of contracting and economic indicators, we compared the total federal contracts and grants received by each tribe with the total income of Native Americans living in the tribal area. For this analysis, we identified the total average amount of federal grants and contracts the tribes in our analysis received annually in 1998, 1999, and 2000 from all federal agencies using the Single Audit Act database. We then found the total income of the tribe by multiplying the per capita income of the tribe by the total number of Native Americans living in the tribal area. Dividing the total federal funding for each tribe by this tribal income amount resulted in a grants-to-income ratio. We then classified each tribe in our analysis into four categories based on the level of this ratio. By analyzing the results of these ratios across the 199 tribes in the lower 48 states for which data were available, we found that the tribes with a very high grants-to-income ratio experienced the least amount of improvement in their income growth on average (see app. III, table 9, for details). As figure 18 below shows, tribes with a moderate or low grants-to-income ratio on average had more than double the growth in per capita income compared with tribes with a very high grants-to-income ratio. Differences in employment level were not statistically significant. Tribes with a moderate grants-to-income ratio—a balance of contracts/grants and other sources of income—showed the highest growth on economic indicators.
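To make the grants-to-income calculation concrete, the following short Python sketch shows the ratio described above: average annual federal grants and contracts divided by total tribal income (per capita income multiplied by the Native American population in the tribal area). It is an illustrative sketch only, not the analysis code used for this report; the example dollar figures and the numeric category cutoffs are hypothetical placeholders, since the report's actual cutoffs appear in table 3 and are not reproduced here.

def grants_to_income_ratio(avg_annual_federal_funding, per_capita_income, native_population):
    # Total tribal income is per capita income times the Native American population.
    total_tribal_income = per_capita_income * native_population
    return avg_annual_federal_funding / total_tribal_income

def gti_category(ratio, cutoffs=(0.05, 0.15, 0.40)):
    # Assign one of four categories; these cutoff values are assumptions for illustration.
    low_cut, moderate_cut, high_cut = cutoffs
    if ratio < low_cut:
        return "low"
    if ratio < moderate_cut:
        return "moderate"
    if ratio < high_cut:
        return "high"
    return "very high"

# Hypothetical example: a tribe receiving $2 million a year in federal grants and
# contracts, with 1,500 Native Americans in the tribal area earning $9,000 per capita.
ratio = grants_to_income_ratio(2_000_000, 9_000, 1_500)
print(round(ratio, 2), gti_category(ratio))   # about 0.15 -> "moderate" under these assumed cutoffs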
Looking at the high-contracting and self-governance tribes, we found that over half had moderate or low grants-to-income ratios, indicating there were considerable sources of jobs and income for tribal members beyond those funded by federal grants and contracts. However, 15 percent (25 of 162) of the high-contracting and self-governance tribes had a very high grants-to-income ratio, and the economic growth for these tribes was relatively low. From 1990 to 2000, the moderate to low ratio group had a median per capita income growth more than double that of the very high ratio group. Also, the percentage above poverty for the moderate to low ratio group increased by 18 percent, compared with 6 percent for the very high ratio group (see fig. 19). Many tribal officials we interviewed recognized that obtaining external sources of income was critical to their overall economic development. They reported that federal contracts and grants alone are insufficient to meet a tribe’s needs and raise tribal members out of poverty. Tribes reporting economic development success credited much of that success to the development of other sources of jobs and income. Although the extent to which contracting tribes appeared to have experienced improvements in their economic profiles was mixed, tribal officials indicated that contracting with federal agencies provided other advantages but also some disadvantages. Officials at eight of the tribes we spoke with indicated their contracting activities provided them other benefits beyond improvement in their economic profile. Some tribal representatives and Indian Health Service officials told us that the experience of running a program helps develop specific skills, which can be carried over to other aspects of tribal activities. For example, an official at one tribe said that, as a result of operating their own services through contracting, their members have developed skills to produce accurate financial statements, helping them prove fiscal responsibility and attract additional grants. In addition, a 1991 academic report we reviewed on tribal activities found that tribes that entered into contracts to manage their own forestry activities generally achieved greater production and revenue than had been generated when such activities were under BIA management. One academic who studies Indian issues told us that contracting also allows tribes to gain experience in leadership, management, accountability, and organization. He said that the resulting enhanced leadership and management skills help tribes to receive audits with unqualified opinions, which can allow these tribes to attract additional sources of funding. The advantages tribes gain from contracting can vary by the type of arrangement undertaken. Under self-governance arrangements, tribes have greater control and flexibility in the use of their funds and fewer reporting requirements. This flexibility allows tribes to design programs that are tailored to their needs and set their own priorities. For example, an official at one tribe that switched from multiple contracts to a self-governance arrangement said that, as a result, the tribe received greater funding to use as it saw best, given its priorities. According to this official, the tribe was able to substantially increase its higher education program to provide more assistance to tribal members who wanted to go to college and also increase its funding for natural resource services.
In regard to regular contracting arrangements, one tribe told us that the skill sets learned through this process are a stepping stone to undertaking self-governance, which is the next phase of self-determination. However, a primary disadvantage of contracting with federal agencies has been shortfalls in contract funding. Several tribal officials told us that the amount of direct program funding they receive when they contract to administer their own programs is not sufficient to provide an adequate level of service to tribal members. In addition, several tribal officials told us that their contract programs do not receive full funding to cover the indirect, support, or start-up costs that tribes incur as part of managing these contracts. These contract support cost shortfalls arise when funding appropriations for these contracts are less than the amounts tribes require to pay for such costs. As shown in figure 20, BIA and the Indian Health Service estimate that total administrative funding shortfalls arising from the contracts these agencies funded during fiscal years 1993 through 2002 ranged from a low of about $25 million to as much as $130 million annually. The funding shortfalls associated with contracting arrangements can hamper tribes’ ability to develop economically. Our 1999 report on the shortfalls in Indian contract support costs found that tribes have had to cover the shortfalls with tribal resources, thereby foregoing the opportunity to use those resources to promote economic development. In addition, these shortfalls divert money away from other important tribal activities. For example, tribes may not receive enough money to enhance the management of their programs by establishing educational systems for leaders, instituting constitutional reform, and developing strategies for economic development. Several tribes mentioned that they have had to take steps in response, including using tribal funds earmarked for economic development to subsidize contract programs, returning the management of programs to the federal government, and undertaking supplemental programs of their own to fill in the unmet service gaps. According to three tribal officials, the threat of, or actual, funding shortfalls discouraged their tribes from entering into or continuing contracting arrangements with the federal government. In addition to the extent to which tribes are contracting to perform their own services, other factors can significantly influence the degree to which Indian tribes’ efforts to develop economically are successful. According to the tribal and federal officials we interviewed and the various studies of Indian economic development issues that we reviewed, the location of a tribe’s reservations or lands can greatly affect its economic development success. For example, tribal officials whose reservation was located near an urban area told us that this gives them greater access to existing infrastructure, including water and power. In addition, the close proximity of the urban population provides them with a greater potential market for their tribal enterprises. Another tribe whose lands were located near an urban area and a heavily traveled highway has benefited from already established water and sewer systems and power lines for its development projects. Because of its highly visible location, this particular tribe has successfully developed a hotel, arts and cultural center, golf course, grocery store, and gaming facility. Tribes located in more remote, rural areas lack such advantages.
Tribal officials told us they may have to first develop infrastructure before they can invest in development projects. According to tribal officials, this can be complicated by the need to conduct more extensive environmental or archeological surveys before land can be developed. For example, officials for one tribe in an isolated area told us that they had to complete land and environmental surveys and have water and sewer systems, electric power, and roads built as a prerequisite to development. In addition, Native officials we talked to in Alaska noted that the isolation of their villages greatly complicates their development efforts. Another factor related to location that can assist a tribe economically is whether or not its tribal lands can be developed for tourism. For example, some tribes in the Pacific Northwest were able to build campgrounds and marinas that attracted visitors interested in recreation. A tribe we visited in the Southwest had scenic natural rock formations that attracted visitors to its tribal lands. However, one village in Alaska that would like to build a visitor center to attract tourists is located in an area that is difficult to reach and faces less certain prospects for developing such an industry. Another factor that can provide tribes with an advantage in economic development is whether or not they have access to or ownership of exploitable natural resources. We found that tribes with access to timber or fisheries often were able to develop these resources as significant sources of income for their tribal members. For example, one of the tribes we talked to with forest lands on its reservation had opened a successful plant producing plywood and dry veneer, which created jobs for tribal members. In contrast, some tribes are located in remote, desolate areas with little vegetation or natural resources they can exploit. Another factor that can affect tribal economic success is having sound legal systems and commercial regulations. Because Indian tribes are considered sovereign nations within the United States, they must develop their own judicial systems and laws for governing operations and business conduct within their tribal lands. Having an effective judicial system is frequently seen as a prerequisite for attracting private investment on tribal lands because it provides investors with confidence that disputes will be resolved fairly. For example, one expert study examined 67 tribes and found that a strong, independent judicial system reduces unemployment by 5 percent. Another study states that tribal success can also be facilitated by sound uniform commercial codes. According to tribal officials, an additional challenge is having sufficient numbers of tribal members with the relevant law and graduate degrees to help develop and administer these codes. According to tribal officials and studies, another factor cited as important for economic development was stable and effective tribal government. For example, some tribal officials we spoke with said that high turnover among the members of their governing councils often resulted in abrupt shifts in economic priorities that sometimes delayed their ability to seek funding or implement previously planned projects. According to EDA officials, some tribes’ governing councils are completely replaced very frequently, and this greatly reduces their effectiveness in achieving economic progress.
According to tribal officials, one effective approach was to stagger tribal council members’ terms, which increases the continuity of the tribal government and its policies. Tribes’ ability to develop gaming facilities can also be a significant factor affecting their economic development. Tribes that own gaming facilities near concentrated population centers have been able to use gaming revenues to develop other projects that have aided their tribes and produced income. For example, one particular tribe used its gaming revenues to develop a water and sewer system for development projects and to help it withstand reduced federal and state funding for its activities. Another tribe told us it uses gaming revenue to supplement education and health programs. As shown in figure 21, tribes with greater amounts of revenue from gaming generally experienced greater growth in their total populations, employment rates, per capita income, and percentage with incomes above the poverty level. Efforts to improve the financial well-being of American Indians and Alaska Natives face many challenges. These challenges can include the isolated and rural locations of tribal lands and lack of infrastructure, which can limit their attractiveness to private sector investment. Tribal lands may also lack exploitable resources, such as oil or timber, or natural features that could serve as a draw for tourism. Although EDA has provided limited funding to assist some tribes, we found that these grants had resulted in mixed success in helping develop the economies and improve the quality of life of these groups. In cases in which EDA grants were more successful in producing economic development, the tribes sometimes had advantages, such as resources or proximity to areas with populations likely to take advantage of gaming or other development. In other cases, EDA provided funding to tribes without such advantages, and the projects at least produced some jobs for tribal members. Overall, we found that the relationship of Indian economic development to EDA grants was mixed, and these findings could help inform decisions about how and where to focus future efforts. However, EDA’s grants to tribes represent only 3 percent of the total amount of funding that it awarded between 1993 and 2002. As a result, we were not able to evaluate either the overall effectiveness of EDA’s program or the adequacy of how it administers its grants, including how the agency applies its criteria in determining what activities to fund. Beyond government aid, Indian tribes are also taking steps to increase their role in their own governance and community activities. Through contracting arrangements and self-governance, nearly all tribes are assuming the management of programs and services that federal agencies previously provided to their communities. Although we found that tribes with the highest levels of these contracting activities generally saw greater improvements in employment levels, we did not find a relationship between level of contracting and the incomes of tribal members. However, we did learn that tribes conducting such contracting find that it provides other benefits to their communities, including providing them with experience in administering their own affairs.
In addition, the other factors that make improving economic development for tribes challenging, such as availability of resources or attractiveness to private sector investment, may have proven to be greater determinants of tribes’ overall economic well-being, regardless of any benefits resulting from their contracting activities. We requested and obtained comments from the Department of Commerce, which provided EDA’s comments, and the Department of the Interior, which provided BIA’s comments; these agencies’ written comments are reproduced in appendixes IV and V, respectively. The letter from the Department of the Interior’s Assistant Secretary for Policy, Management, and Budget stated that BIA generally agreed with our report’s conclusions. The letter notes that BIA supports increased self-determination contracting and compacting as a means of improving tribal economic development efforts but acknowledges, as our report does, that other factors can significantly influence the ability to develop tribal economies successfully. In the letter from the Secretary of Commerce, EDA questioned our characterization that EDA grants have had mixed success. EDA acknowledged that, based on the EDA investments we reviewed and evaluated, its enterprise development investments had mixed success, and it agreed that a large portion of EDA funds went to enterprise development projects. EDA stated that the success of other types of EDA investments should be considered in order to make a broad statement about the economic development generated by EDA grants as a whole. The letter stated that the other grant funding that EDA provided, including grants for infrastructure, business development, and planning, has produced benefits. After considering our findings for all EDA grants to tribes between 1993 and 2002, we believe that our conclusion that EDA grants have had mixed success is accurate. The grants for enterprise projects represent the largest portion—almost half—of the funding EDA provided to tribes during this period, and these grants, as our report shows, have had mixed success in producing economic development. The other half of the total funding EDA provided to tribes during this 10-year period included grants for business development loan funds and training, infrastructure projects, and planning. Regarding business development activities, our review of the grants that funded RLFs indicated that some were reportedly very successful, while others had yet to produce much development. Similarly, the training projects had produced some benefits but were also hampered by lack of operating funding. Although our report presents information from tribal officials that indicates that many infrastructure grants have reportedly produced economic development, we found that not all projects had yet done so. In addition, although the tribes receiving EDA planning grants reported them to be critical to their success, the benefits we reported as resulting from these planning grants were that most tribes that received them also received other EDA grants, including for enterprise projects whose mixed success we discussed, or funding from other federal agencies for economic development purposes. In addition, not all EDA planning grants led to development projects, because of lack of funding or other issues. Similarly, our review of the EDA-funded feasibility studies indicated that, at the time of our review, 3 of the 17 projects studied had been implemented successfully.
Commerce’s letter also provided some technical comments, in response to which we made changes to our draft. The letter also presented other comments that provide additional detail about EDA grants and their administration. Our responses to these comments are presented in appendix IV. We are sending copies of this report to the Ranking Minority Member of the Senate Committee on Indian Affairs; the Secretary, Department of Commerce; the Secretary, Department of the Interior; and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Mr. Cody Goebel or me at (202) 512-8678 or goebelc@gao.gov or shearw@gao.gov. GAO staff who made major contributions to this report are shown in appendix VI. To review all grant funds made available to Indian tribes and tribal organizations by the Department of Commerce’s Economic Development Administration (EDA), we analyzed data from EDA on all grants awarded to tribes during the years 1993 to 2002. For each grant, we obtained from EDA’s data the following information: the state where the grant recipient is located, the fiscal year the grant was awarded, and a general project description. The data EDA provided categorized its grants according to the various funding programs the agency administers, such as its planning, public works, or economic adjustment programs. However, upon review of these data, we found that EDA used funds from these various programs to provide grants for similar types of projects. For example, EDA was sometimes providing grants to fund the planning of a project or the construction of the project using its economic adjustment grant program or its public works grant program. Therefore, for the purposes of our analysis, we grouped the various grants into the following categories according to the type of project or activity funded: Enterprise projects: grants used to develop projects designed to generate income for the tribe, such as a cannery, a resort, or a sawmill; Infrastructure projects: grants used for the design and construction of public works infrastructure, such as roads, highways, and sewers, that would serve as the foundation for general economic development activities; Business development projects: grants used to fund loan funds, training, and other business development projects, including those for business incubators, revolving loan funds (RLFs), training and capacity building, and other assistance that enhances the tribes’ economic development activities; and Planning/feasibility grants: grants used for general planning purposes, such as paying for staff salaries or the broad administration of the tribes’ planning departments, as well as for developing plans, analyses of projects’ environmental impact, and feasibility studies for specific economic development projects. Our analysis of EDA grants was limited to grants provided to Indian tribes. Therefore, we were not able to evaluate how EDA generally applies its stated criteria to grant applications. In addition, the application process includes an evaluation of preapplication proposals by EDA’s regional investment review committees, which recommend whether or not an application should be invited. We did not analyze EDA’s preapplication process.
To determine what economic development activities have resulted from these EDA grants, we surveyed all 95 tribes that received EDA grants for enterprise projects, infrastructure projects, and loan fund, business development, and training activities. We made 15 site visits in Alaska, Arizona, California, New Mexico, and Washington to learn more about these tribes’ economic development projects and observe the results of the EDA grants they received during the years 1993 to 2002. We chose tribes for our site visits based on various factors, including the types of economic development projects the tribes had, the tribes’ location, and the projects’ stage at the time of our visit. We surveyed the remaining 80 tribes and organizations by phone, interviewing tribal officials who were cognizant of the tribes’ economic development projects and activities. Our survey results reflect the information provided by and the opinions of tribal officials who participated in our survey. Outside of obtaining documents from some tribes and visiting some projects, we did not independently verify the tribal officials’ responses to our questions. We also interviewed relevant officials from EDA, the Bureau of Indian Affairs, and the Department of Health and Human Services to get their perspective on federal assistance to tribes. To determine whether there exists a relationship between the degree to which an Indian tribe operates federal programs and services under contracting or self-governance and that tribe’s economic profile, we used data from the Department of the Interior’s Bureau of Indian Affairs (BIA), the U.S. Census Bureau, and the Single Audit Act database to group all tribes we analyzed into three separate groups. First, we used BIA data to identify those tribes that had entered into self-governance compacts. We then grouped non-self-governance tribes into two groups based on the extent to which they were contracting with BIA. We grouped tribes into these two groups based on the extent to which they were contracting on both a per capita basis and on a total dollar amount basis. To derive the per capita BIA grants and contracts amount, we obtained data from the Single Audit Act database on the amount of grants and contracts that were received by tribes from BIA during the years 1998 to 2000. For those tribes that did not have available Single Audit Act information, we calculated this per capita measure based on BIA’s 1998 shortfall budget data. In an examination of the listing of tribes ranked by their per capita contract amount, the tribes above a threshold of $580 appeared to include those large tribes with the largest overall contract amounts. We found that some smaller tribes had per capita contracting amounts that exceeded $580 but total contract amounts that were not significant compared with those of other tribes. Therefore, we placed into the high-contracting group only those tribes whose total contract funding exceeded $300,000, which appeared to be a reasonable level to indicate significant contracting activity. Table 2 summarizes our criteria for classifying tribes. We then compared the change in the economic profiles for each of these three groups of tribes. To measure each tribe’s economic profile, we collected data from the U.S. Census from 1990 and 2000. For this analysis, we included only those tribes in the lower 48 states that are federally recognized, had available 2000 Census data, and had a tribal population of 100 people or more based on population data from the 2000 Census.
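To make these grouping rules easier to follow, the short Python sketch below applies the $580 per capita and $300,000 total annual BIA contracting thresholds described above, after setting aside self-governance tribes. It is an illustrative sketch only, not the analysis code used for this report, and the example tribes it classifies are hypothetical.

PER_CAPITA_THRESHOLD = 580      # annual BIA contract and grant dollars per capita
TOTAL_THRESHOLD = 300_000       # total annual BIA contract and grant dollars

def classify_tribe(is_self_governance, total_bia_contracts, population):
    # Self-governance compact tribes form their own group regardless of contract amounts.
    if is_self_governance:
        return "self-governance"
    per_capita = total_bia_contracts / population
    # A tribe must exceed both thresholds to be counted as high-contracting.
    if per_capita > PER_CAPITA_THRESHOLD and total_bia_contracts > TOTAL_THRESHOLD:
        return "high-contracting"
    return "low-contracting"

# Hypothetical examples
print(classify_tribe(True, 0, 850))              # self-governance
print(classify_tribe(False, 1_200_000, 1_400))   # about $857 per capita -> high-contracting
print(classify_tribe(False, 250_000, 300))       # total below $300,000 -> low-contracting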
For each tribe, we obtained economic profile data on Native Americans living on tribal land, including the percentage employed, the per capita income, and the percentage in households with incomes above the poverty level. Our economic profile data categorized by self-governance, high-contracting, and low-contracting tribes is descriptive in nature and does not represent an assessment of causation of economic factors based on governing status or high- or low-contracting category. To supplement our analysis of contracting and economic profile changes, we also analyzed the relationship between the extent to which tribes received federal funding and changes in economic profile indicators. For this analysis, we identified the total average amount of federal grants and contracts the tribes in our analysis received annually in 1998, 1999, and 2000 from all federal agencies. We then found the total income by multiplying the per capita income of the tribe by the total number of Native Americans living on the reservation. Dividing the total federal funding for each tribe by its tribal income resulted in a grants-to-income ratio (GTI). We then classified each tribe in our analysis into four categories based on the level of its GTI, as shown in table 3. We then analyzed the extent to which changes in tribal economic profile indicators varied for the tribes in these four GTI categories. Our economic profile data categorized by grants-to-income ratio is descriptive in nature and does not represent an assessment of causation of economic factors based on the tribe’s GTI ratio. To ensure that EDA’s data on the grants awarded to tribes and the data we used in our analysis of the tribes’ economic profiles were sufficiently reliable for our analyses, we conducted detailed reliability assessments of the four datasets that we used. In assessing the reliability of EDA’s grants data, the Single Audit Act data, U.S. Census Bureau data, and the BIA population data, we reviewed relevant documentation, interviewed knowledgeable officials, and conducted frequency analysis of critical data fields, as appropriate. We restricted these assessments to the specific variables that were pertinent to our analyses. We found that all of the datasets were sufficiently reliable for use in our analyses but have included known limitations in our report when appropriate. In assessing the reliability of EDA’s grants data, we interviewed EDA officials who were knowledgeable about the data system and reviewed relevant documents, such as EDA’s data manual and documents on internal controls. On the basis of the information we gathered, we concluded that EDA’s grants data were reliable for our purposes for this analysis. Our reliability assessment of the Single Audit Act data included two steps. First, to assess the general reliability of the Single Audit Act data we used in our analysis, we reviewed relevant documents (e.g., online information on the database and a report by the Department of Commerce’s Office of the Inspector General) and corresponded with a knowledgeable official from the U.S. Census Bureau about the Single Audit Act data. On the basis of these document reviews and correspondence, we concluded that the data we used in our analysis were reliable for our purposes for this analysis. Second, to assess the completeness and accuracy of the Single Audit Act data we used in our analysis, we conducted frequency analysis of relevant fields.
On the basis of the results of our frequency tests of relevant data elements and our review of pertinent documents, we concluded that the Single Audit Act data we used in our analysis were reliable for our purposes for this analysis. In assessing the reliability of relevant 1990 and 2000 decennial U.S. Census Bureau data, we reviewed information available online from the U.S. Census Bureau Web site on its data quality assurance processes and interviewed relevant officials from Census. On the basis of the results of our document review and discussions with Census officials, we concluded that the relevant Census data we used were reliable for our purposes for this analysis. In assessing the reliability of the BIA’s population data, we interviewed knowledgeable officials from BIA and tribal representatives and reviewed relevant documentation. Based on the results of these discussions with relevant officials and review of pertinent documentation, we concluded that the BIA’s population data were reliable for our purposes for this analysis. We also reviewed EDA policies and regulations and talked to EDA Regional Directors and field staff to determine if EDA complies with its legislative criteria for monitoring grants. We interviewed tribal officials and economic development experts and reviewed studies by the Harvard Project on American Indian Economic Development and the National Congress of American Indians to determine what other factors impact tribes’ economic development efforts. We obtained data from the Economic Development Administration (EDA) that included all grants it made to American Indian tribes during the years 1993 to 2002. Our analysis of this data indicated that 143 Indian tribes and tribal organizations received a total of $112 million in EDA grants during this 10-year period. Of the $112 million in total grants to tribes that EDA awarded between 1993 and 2002, $86 million went to 113 Indian tribes and tribal organizations in the lower 48 states. The extent to which EDA funded tribes varied across geographic regions. Operating nationally, EDA has organized its staff into six regional offices that cover the various states. These offices are in Atlanta, Austin, Chicago, Denver, Philadelphia, and Seattle. For the purposes of our analysis, we divided EDA’s Seattle region into three subareas: (1) Alaska; (2) the Northwest covering Idaho, Oregon, and Washington; and (3) the Southwest covering Arizona, California, and Nevada. By analyzing EDA’s funding across these regions as shown in figure 22, we found that about 60 percent of EDA grants to Indian tribes went to tribes in the Seattle region, with tribes in the Northwest receiving 21 percent of the grant monies awarded by EDA during the 10-year period. In addition to funding provided to Native entities in Alaska, the extent to which tribes in other states received EDA funding also varied. For example, as shown in figure 23, tribes in Arizona, Washington, and Oregon received about 35 percent of all EDA grants to tribes in the lower 48 states during the years 1993 to 2002. Our analysis found that 30 of the 42 federally recognized tribes in Idaho, Oregon, and Washington received EDA grants, as shown in figure 24. In contrast, only 3 of the 37 tribes in Oklahoma received such grants. 
According to EDA officials, they funded few tribes in Oklahoma because entities in other states in EDA’s Austin Region, which also includes Texas, Louisiana, Arkansas, and New Mexico, were deemed more economically distressed and in greater need of EDA assistance. Using the EDA grants data, we also found that grant amounts varied widely across states on a per capita basis. For example, tribes in the Pacific Northwest, which had a combined per capita EDA grant amount of $593, had the highest per capita amount, while tribes in Oklahoma and Utah had the lowest, with per capita EDA grant amounts of $2 and $4, respectively. Although some states received large dollar amounts of funding, the amounts were not always large given the large Indian populations in those states. For example, although Arizona received over $10 million during this 10-year period, this amounted to only $46 per capita because of its large Indian population. Table 4 and figure 26 show how these amounts varied across states. As shown in figure 25, the total amount of EDA grants awarded also varied considerably by region. Similarly, as figure 26 shows, EDA grants also varied considerably by region on a per capita basis. The extent to which tribes received EDA planning grants also varied greatly by EDA region. For example, EDA’s Chicago Region did not provide any of the 29 tribes located in that region with individual planning grants, although it did provide planning grants to three intertribal organizations in that region that represented several individual tribes joining together to receive a grant. By contrast, 79 percent of the tribes in the Pacific Northwest with a population over 100, as well as two intertribal organizations, received planning grants (see fig. 27). Demand for planning grants can sometimes exceed the amount of available funding in some regions. For example, officials in EDA’s Seattle Region told us that they have a waiting list of 36 tribes that would like to obtain EDA planning grants but that insufficient funds exist to award these grants. EDA grants appeared to be awarded equally to tribes with differing levels of income. Using Census data for 2000, we ranked the tribes in the lower 48 states by per capita income. By comparing the tribes ranked by income with the amounts tribes received from EDA, we found that tribes in the top 25 percent of per capita income had received 28 percent of the grants EDA awarded to tribes in the lower 48 states between 1993 and 2002. Similarly, the tribes in the bottom 25 percent of per capita income had received 30 percent of the total amount EDA awarded to tribes during this period. EDA has not been the largest source of funding for economic development grants for tribes. To analyze how the total EDA grants to tribes compared with other economic development-related grants received by tribes from other federal agencies, we obtained data from the Single Audit Act database, which is maintained by the Bureau of the Census and contains information on the amounts of federal funding received by states, local governments, and nonprofit organizations, including Indian tribes. With the data available for 1998 to 2001, we found that, on average, 7 percent of all economic development-related grants received annually by tribes during these years were from EDA.
The remaining 93 percent of the economic development-related grants tribes received came from other federal agencies, including the Department of Housing and Urban Development, which provided block grants to tribes to improve the housing stock, provide community facilities, make infrastructure improvements, and expand job opportunities by supporting the economic development of Native American communities; the Department of the Interior, which provided economic development funding to tribes for protecting and restoring rangelands and forests and for operating irrigation projects; the Department of Health and Human Services, which provided loans and grants for implementing social and economic development strategies that support locally determined projects, including developing the tribes’ comprehensive tourism and business plans and providing training in job, computer, and small business skills to tribal members; and the Department of Agriculture, which provided funds for rural development. Figure 28 shows, on average, the extent to which various federal agencies funded economic development assistance to Indian tribes based on amounts provided between 1998 and 2001. Table 5 shows that several other federal agencies have typically given a greater amount of economic development-related grants to tribes than has EDA. Figure 29 provides details on the results of completed EDA-funded enterprise projects, including the status of the projects as of early 2004, the EDA grant amount, the number of jobs created, and other benefits that have accrued to the tribe as a result of undertaking the project. Figure 30 shows the status of EDA-funded enterprise projects broken down by year funded. Officials from two of the tribes that had projects fail in the earlier years said they had learned from their mistakes and were now engaged in successful enterprise development buttressed by revenues from gaming and other tribal enterprises. As noted earlier, many of the tribal enterprise projects that EDA funded were in Alaska, and most have yet to be completed. From 1998 through 2001, EDA provided $14 million to cover approximately 40 percent of the cost of constructing 15 Alaska Native cultural/community centers. The goal of these projects was to promote tourism and/or community development. The economic impact of these projects has yet to be determined because 11 of the 15 centers are still under development, and two of the completed projects have not been in operation long enough to establish results. However, Native officials provided revenue and job projections that indicate the cultural/community center projects would not create many jobs or generate much revenue for Alaska Natives. An EDA official told us, however, that economic development for these communities is challenging for several reasons, including these areas’ remoteness, harsh climate, limited infrastructure, high fuel and shipping prices, and short construction seasons. Table 6 gives details on the Indian revolving loan funds (RLFs) supported by EDA during the 1993-2002 period. In some instances, EDA gave funds to support business-training programs for loan fund applicants. In other instances, EDA provided seed money to help start new RLFs. Table 7 provides information on the results of 19 completed EDA-funded infrastructure projects, including the year funded, the project description, the EDA grant amount, and the benefits accrued.
In recent years, EDA has reduced the amount of staff and resources it uses to conduct monitoring of grant recipients, including projects developed by Indian tribes. EDA regulations require regional offices and field staff to monitor grant activities by reviewing reports and conducting site visits within 3 years of the application. According to EDA development strategy guidelines, grant recipients annually submit their development strategies to ensure that their plans or strategies for developing the area economically are complete and up to date. EDA headquarters officials told us they expect field staff to review reports quarterly and to visit grant sites annually to review the progress of EDA-funded construction projects, including enterprise or infrastructure projects. According to the regional officials, the purpose of these visits is to verify that grantees are actually using the funds for the purpose stated in the approved grant application and in their economic development strategy. According to EDA funding documents, the number of EDA staff acting as economic development representatives in individual states declined by about 26 percent, from 47 to 35, between fiscal years 1993 and 2002. According to EDA staff, this has reduced their ability to monitor funded projects and provide technical assistance to grant recipients. Also, one regional official told us that cutbacks in travel funds have required some economic development representatives to forgo visiting some projects and to rely instead on reviewing reports submitted by private sector construction engineers. For example, staff in one of the EDA regional offices told us that one of their field staff members is responsible for two very large states with grants located in such remote areas that site visits are seldom made because of the limited travel funds. The staffing and travel fund reductions have also reduced the amount of technical assistance that EDA provides to tribes. According to regional EDA officials, their economic development representatives frequently provide one-on-one consultations with grantees either by telephone or during site visits. These consultations give tribal officials the opportunity to address concerns or issues with the grant application, construction, or infrastructure projects. However, with fewer field staff and less travel funding, EDA staff are able to provide such assistance less frequently. Tribal officials we interviewed indicated that they needed more assistance from EDA. For example, one tribal official told us that they needed help completing grant applications, while others said that they would like to have more frequent visits by the Economic Development Representatives and to have them work directly with the tribes. An official at another tribe said that it experienced difficulties obtaining the funding necessary to complete its projects. According to a study on Indian economic development, the lack of technical assistance can negatively affect the success of EDA-funded projects. For example, one Alaskan tribe told us that they had to seek additional funds to keep their project from failing because they lacked direct interaction with an Economic Development Representative to answer questions. According to regional officials, in addition to the direct consultations, EDA also formerly provided technical assistance through conferences and seminars.
In addition to a national conference, EDA would hold regional seminars, which EDA officials saw as beneficial because people in the local area could more easily attend and receive information specific to their particular region or tribe. However, as a result of the resource cutbacks, EDA officials told us that the agency now only holds the one annual national conference and no longer provides funding for any regional events. The Indian Self-Determination Act, as amended, allows tribes to enter into various arrangements with federal government agencies to assume the operation of many of the programs and services previously provided by the agencies. From the list of federally recognized tribes and from 2000 U.S. Census Bureau (Census) data, we identified 219 tribes in the lower 48 states that had 100 or more Native Americans living in the tribal area. According to Department of the Interior’s Bureau of Indian Affairs (BIA) information, 43 of the 219 tribes had entered into self-governance arrangements with BIA. As a result, these tribes operated most of their own tribal functions and services under a funding compact agreement with BIA. By analyzing Single Audit Act data that shows funding provided by federal agencies, we determined that nearly all of the remaining 176 tribes operated many of their tribal functions and services under contracts and other agreements with BIA. We grouped the 219 tribes in the lower 48 states with populations greater than 100 into three categories. The first category included the 43 tribes that had entered into self-governance arrangements with BIA. Such tribes generally have assumed the operation of most of the services used by tribal members. For the remaining 176 non-self-governance tribes, we analyzed how much funding these tribes received from BIA contracts and grants from 1998 to 2000 in total and on a per capita basis. Based on these analyses, we determined that the 121 tribes with annual per capita BIA contract amounts exceeding $580 and total annual BIA contracting amounts greater than $300,000 appeared to be high-contracting tribes, and we, therefore, categorized the remaining 55 tribes whose per capita or total contracting amounts were less than these thresholds as low-contracting tribes. To analyze the relationship between contracting and changes in tribal economic profiles, we compared how various indicators of economic well-being from Census data had changed for these three groups of tribes. Table 8 shows the changes in economic indicators for three categories of tribes used in our analysis—the self-governance tribes, the high- contracting tribes, and the low-contracting tribes. The data shows that there was great variability within each category, with the top 10 percent of tribes showing high growth, while the bottom 10 percent had negative growth. On average, the high-contracting and self-governance tribes showed greater growth in employment levels, but differences in the other indicators were not statistically significant. We also analyzed how the amount of federal grants and contracts related to tribes’ total tribal income and how changes in economic profiles varied according to this relationship. For this analysis, we identified the total average amount of federal grants and contracts the tribes in our analysis received annually in 1998, 1999, and 2000 from all federal agencies using the Single Audit Act database. 
We then found the total income of the Native Americans living on the tribe’s lands, which was calculated by multiplying the per capita income of the tribe by the total number of Native Americans living on the reservation, with an adjustment for Native Americans living in the reservation’s service area. Dividing the total federal funding for each tribe by its tribal income resulted in a grants-to-income ratio. We then classified each tribe in our analysis into four categories based on the level of this ratio. As table 9 shows, tribes with a moderate or low grants-to-income ratio showed significantly higher gains in per capita income and percent above poverty than did tribes with a very high grants-to-income ratio. We also analyzed the relationship between variations in tribes’ grants-to-income ratios and the extent to which they were contracting or were self-governance tribes. Figure 31 shows that about half the high-contracting and self-governance tribes had moderate or low grants-to-income ratios, while about 9 percent of the self-governance tribes and 18 percent of the high-contracting tribes had a very high grants-to-income ratio. The following are GAO’s comments on the Department of Commerce’s letter dated August 9, 2004. 1. Our scope for analyzing EDA grants was confined to the 95 Indian tribes we surveyed. Our survey methodology included interviewing tribal officials who were cognizant of the tribes’ economic development projects and activities. Our survey results reflect the information provided by and the opinions of tribal officials who participated in our survey. We also interviewed relevant officials from EDA. We believe our methodology was sufficient to support our overall findings. 2. We made revisions based on this comment. 3. Our report notes that tribal officials and some EDA staff expressed the view that tribes, particularly those located in rural areas, would have a harder time obtaining funding under the investment criteria that EDA implemented in 2002. These criteria favor projects that result in higher-wage, higher-skill jobs and private investment. However, Commerce’s letter states that no area or region will be disadvantaged and that its long history of support of Indian tribes will continue. 4. We made revisions based on this comment. In addition to the individuals named above, Carl Barden, Mark de la Rosa, DuEwa Kamara, Jeffery Malcolm, Bettye Massenburg, Don Porteous, LaSonya Roberts, Walter Vance, and Carrie Wilks made key contributions to this report. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site.
To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”

American Indians and Alaska Natives generally face worse economic conditions than the rest of the U.S. population. The Economic Development Administration (EDA) within the Department of Commerce provides grants to distressed communities, including to American Indian tribes and Alaska Native entities, to generate employment and stimulate economic growth. Because data on how these EDA grants helped tribes was not publicly available, GAO analyzed all EDA grants made to Indian tribes from 1993-2002 and determined what economic development resulted. Tribes also enter into self-governance and other contracting arrangements with two federal agencies--the Bureau of Indian Affairs (BIA) and the Indian Health Service--to assume the management of individual services, including law enforcement, education, social services, and road maintenance. GAO also analyzed the relationship between changes in tribes' economic profile and the extent to which they had self-governance or contracting arrangements to perform their own services. BIA and EDA provided comments on a draft of this report. BIA generally agreed with GAO's conclusions. EDA took issue with GAO's characterization of the relative success of EDA grant programs. Indian tribes have used EDA grants to create businesses, build roads and other infrastructure, and create economic development plans, but these grants have had mixed success in generating jobs, income, and private sector investment. From 1993 to 2002, 143 Indian tribes and tribal organizations received $112 million in EDA grants, but this represented a small portion of EDA's awards to all organizations. Of the total amount awarded to Indian tribes or Alaska Native entities, $54 million was used to fund 63 enterprise projects designed to create income and jobs. Of the 59 projects GAO collected data on, 25 had not yet begun operating, and 3 others had just been completed and no results were available. Of the 31 operational projects, tribal officials reported that about half were profitable or were covering their costs, and the remainder were being subsidized or had failed. Most had resulted in the creation of 10 or fewer jobs, and few had attracted private sector investment. EDA also provided $22 million in grants to tribes for infrastructure projects, such as roads and sewer systems, $30 million in grants to assist tribes with economic planning, and $5 million for loan funds and business development. Almost all of the 219 federally recognized tribes with available data had entered into either contracts or self-governance compacts to operate their own tribal programs and services. Based on GAO's analysis of U.S. Census Bureau data, tribes that had self-governance arrangements or were engaging in higher levels of contracting showed greater gains on average in employment levels from 1990 to 2000 compared with tribes that were contracting less. However, the change in per capita income or the percentage of tribal individuals with incomes above poverty levels over this period was not statistically different for self-governance or high-contracting tribes compared with low-contracting tribes.
Nationally, teen birth rates have declined steadily in the last several years. From 1991 to 1997, the number of teens who had engaged in sexual activity also decreased, and among sexually active teens, the rate of condom use increased. However, the teen birth rate in the United States remains high, at about 54 births per 1,000 girls aged 15 to 19. Teen birth rates vary greatly by state, ranging in 1996 from 30 per 1,000 girls aged 15 to 19 in Vermont to 76 per 1,000 in Mississippi. Research shows that four risk factors consistently predict teen pregnancy: poverty, early school failure, early behavior problems, and family problems and dysfunction. Risk factors for teen pregnancy are common to other problem youth behaviors, such as delinquency and substance abuse. Research has also identified several factors that can help protect against teen pregnancy, including positive relationships with parents and positive connections to a school community. Recent reviews of program evaluation results concluded that certain approaches are more promising than others, but too few programs have been rigorously evaluated to assess their effect on teen pregnancy. Numerous federal, state, and local agencies as well as private citizens and organizations have had a role in TPP activities. For decades, the federal government has supported efforts to prevent teen pregnancy. As part of HHS’ Healthy People 2000 initiative, each state sets goals to reduce teen pregnancy. To help meet these goals, the federal government provides funding to states and local communities for teen pregnancy prevention through a variety of grants and programs administered primarily by HHS. HHS also supports research and data collection and surveillance on the magnitude, trends, and causes of teen pregnancies and births. The 1996 welfare reform legislation also includes provisions aimed at reducing teen pregnancy. For example, the new law provides funding for abstinence-only education—sex education programs that emphasize abstinence from all sexual activity until marriage and exclude instruction on contraception—and allows states to use their TANF block grants for other TPP activities. In addition, the legislation requires states to set goals for decreasing out-of-wedlock births and will financially reward states with bonuses for the largest decreases in all out-of-wedlock births. The legislation also requires teen parents receiving assistance to stay in school and live at home or in another approved setting. States must also indicate how they intend to address the problem of statutory rape, and the government is required to study the link between teen pregnancy and statutory rape. Finally, the new law requires HHS to develop a national strategy to prevent out-of-wedlock teen pregnancy. States in our review have designed strategies for reducing teen pregnancy and have implemented and overseen programs that support their strategies. Generally, state health departments lead state TPP efforts. However, because of the crosscutting nature of teen pregnancy prevention, coordination is necessary with other state agencies whose programs and activities can affect efforts to prevent teen pregnancy, such as departments of social services, justice, and education. Governors’ offices, special commissions, and task forces can also play a central role in designing and implementing strategies and programs at the state level. States generally administer statewide programs, but most of the responsibility for implementing programs is delegated to local communities.
States also encourage building coalitions among community groups and organizations involved in teen pregnancy prevention. State strategies must operate within the context of statutes, local policies, and other activities in the state. At the local level, public institutions, such as schools and health departments, as well as community-based and other organizations, often implement TPP programs or otherwise influence how TPP programs are implemented. Finally, some private organizations at the national, state, or local level may support public efforts or, in some cases, run independent initiatives. In their efforts to address the problem of teen pregnancy, the states that we visited developed prevention strategies with multiple components that included a variety of programs and services. But in all cases, a key objective of these states' strategies was to target high-risk groups, such as teens living in impoverished conditions. Within the context of their broad strategies, states generally gave localities the flexibility to administer programs to meet local needs and preferences. States identified the federal government as a major contributor of funds that support their TPP strategies. Since the early 1980s, the TPP strategies in the eight states that we visited have evolved from focusing on services for teen parents to an array of programs with increased emphasis on prevention, while still providing programs and services for pregnant and parenting teens. The TPP strategies of all the states we visited contained six basic components: sex education, family planning services, teen subsequent pregnancy prevention programs, male involvement, comprehensive youth development, and public awareness. (See table 1.) Although each state generally included all of these components in its TPP strategy, the emphasis placed on each component and the types of services and programs included varied. Two of these components—male involvement and youth development—are beginning to play prominent roles in states' TPP strategies. Traditionally, pregnancy prevention efforts almost exclusively targeted young women. More recently, strategies have begun to focus on young men's role in decisions to have sex and to use contraception. In 1995, 68 percent of males surveyed by the National Survey of Adolescent Males reported having had intercourse by age 18. The survey also observed that one of the biggest shifts in teen reproductive behavior is the improvement in teenage males' use of contraception. These shifts suggest that male teens can be encouraged to delay sex or, if they are already sexually active, to use contraception. All the states we visited included a male involvement component in their TPP strategies in an effort to change male behavior and produce more promising results. For example, California's male involvement program—a 3-year, $8 million grant program established in 1995—funds 23 projects across the state to motivate teen males to be sexually responsible through peer education, mentoring, youth conferences, and other activities. California also supports prevention and parenting programs for incarcerated young men and has stepped up enforcement of statutory rape laws to increase the prosecution and conviction of adult men who have unlawful sex with minors. In addition, one of the state's public awareness campaigns specifically targets males.
Georgia's male involvement effort aims to establish community-based programs that focus on male responsibility for pregnancy prevention, responsible fatherhood, and motivation for academic achievement and economic self-sufficiency. In 1996, Georgia used $265,500 from the Medicaid Indigent Care Trust Fund (ICTF) to sponsor 17 projects across the state. Grant recipients included health departments, community centers, and various chapters of Alpha Phi Alpha Fraternity, Inc. In 1997, Georgia used $200,000 from ICTF to award 23 grants that focused specifically on pregnancy prevention programs from a male perspective. All of the states we visited also included youth development—another nontraditional component—in their TPP strategies. Although many of these programs do not focus specifically on teen pregnancy prevention, states and some experts believe they can reduce teen pregnancy by improving teens' belief in their future and improving their education and career opportunities. Youth development activities often include mentoring, after-school homework assistance and tutoring, peer leadership, self-esteem building, social and recreational activities, and sex education. For example, Illinois' Teen REACH (Responsibility, Education, Achievement, Caring, and Hope)—an $8.4-million annual after-school program—aims to decrease teen pregnancies, arrests, and alcohol and drug use and to increase school attendance and completion as well as participation in work or work-related activities. The program targets girls and boys aged 10 to 17 at 41 sites across the state and will link participants to other state and community-based programs and services. Table 2 summarizes the activities and services in the various components that the eight states used to implement their TPP strategies. A key objective of all the states' strategies is to target their TPP efforts to groups or communities at higher risk of teen pregnancy. California, for example, targeted TPP efforts to communities and neighborhoods with high rates of teen births, high poverty and unemployment rates, and low education levels. The states' strategies also focused on meeting the needs of three different groups of teens: those who were not yet sexually active, those who were sexually active, and those who were already pregnant or were parenting. Louisiana is targeting 12 zip codes in the New Orleans area with the highest teen birth rates in the city. Oregon offers special life-skills training for teens whose parents receive public assistance because these teens are at increased risk of becoming teen parents. California, Illinois, and Vermont developed programs aimed at youth in foster care or with foster parents because research has shown that these youth are at a greater risk for unsafe sexual behavior and teen pregnancy. Other state strategies target high-risk groups such as incarcerated males and siblings of teen mothers. The states we visited were using different types of data to target TPP efforts to high-risk communities and youth. For example, all of the states in our review use teen birth data, frequently broken down by zip code, to identify and target high-risk areas. Illinois uses data from a sexually transmitted disease reporting project sponsored by HHS' Centers for Disease Control and Prevention (CDC) to help target TPP initiatives. In addition, the states that participated in the federal Youth Risk Behavior Survey (YRBS) use these data in developing their strategies and programs.
For example, to improve access to and use of contraception, Oregon uses YRBS data to target sexually active teens who report not using contraception. While TPP strategies were applicable statewide, the states we visited typically gave communities flexibility in selecting and implementing programs to meet local needs and preferences. States generally offered localities a choice among certain state-approved programs or programs that used promising approaches. Communities selected programs that they found most consistent with local policy and values. According to state officials, this resulted in a mix of programs, approaches, and services that varied among communities within a state. Some communities, for example, chose programs that encourage abstinence, while others chose a more comprehensive approach that includes abstinence-based sex education as well as access to family planning services, including contraceptive services. Still other communities emphasized youth development programs that focus not on teen pregnancy but on general skill building aimed at improving youth life options. Family planning and sex education programs, in particular, varied considerably among communities because of local preferences and policies, especially in schools. Communities' approaches to providing sex education and access to family planning services, particularly in school-based settings, reflected local preferences and values. Even though each state we visited encouraged or mandated sex education in the schools, local policies dictated the content of such programs in school settings. In some cases, states offered these programs in settings other than schools; in others, state strategies encouraged a school-based approach. For example, Maine and Vermont provide funding for health educators who work with schools to provide technical assistance, develop curricula, and train teachers in sex education. But officials in these states said that not all schools offer sex education and that, in those that do, the curricula vary. Oregon's strategy encourages the use of a specific abstinence education program for sixth- and seventh-graders and encourages comprehensive sex education in grades 5 through 12. Oregon officials report that 45 percent of the state's sixth- and seventh-graders received the prescribed abstinence curriculum but said that only a few schools provide comprehensive sex education in the higher grades. Louisiana's strategy encourages sex education in schools but only within a targeted area with high birth rates. Illinois' strategy encourages sex education in community or home settings and funds community-based sex education programs. Maryland's strategy includes a media campaign and outreach program that encourages parents to be the primary sex educators of their children and also promotes comprehensive health education in the schools. Two states, Oregon and Maine, are beginning to implement systems that are intended to encourage schools to teach sex education. Although these programs are not part of the states' TPP strategies, all eight states received federal funding from CDC to support school HIV prevention education programs. The purposes of these programs are similar to those of some TPP programs—to increase the percentage of high school students who do not engage in intercourse and to increase the percentage of sexually active teens who correctly and consistently use condoms.
Officials in some of the states we visited cited HIV prevention education as one reason for the decline in teen pregnancy in their states. Illinois, Maine, and Oregon encouraged access to family planning in school-based health centers. However, local policies and statutes control the types of school-based family planning services—primarily contraceptives and information on abortion—that may be made available in these centers. Some communities permitted school health centers to dispense contraception—including condoms, birth control pills, and implantable and injectable birth control—while other communities allowed school health centers only to refer students to other facilities for these services. In some states, such as Georgia, laws restrict referrals and the provision of family planning information in schools. Louisiana state laws prohibit school-based health centers from providing any family planning services but allow schools to refer students elsewhere for these services. Even though California and Maryland did not include school-based health centers in their strategies, these states had some school-based health centers that provided referrals and access to family planning where permitted by local communities. To improve teen access to family planning services, some states' strategies included access to family planning in other settings. For example, California's strategy includes over 2,200 state-funded community, hospital, university, and private practice providers that serve low-income males and females, with 56 of these clinics offering enhanced counseling for teens. Georgia provides similar services along with other youth services and activities in 27 community-based youth centers. Also, strategies in Georgia, Illinois, Maine, Maryland, Oregon, and Vermont included collaboration with the federal Title X Family Planning Program to overcome barriers to teen access by opening teen-only clinics, keeping clinics open at hours convenient for teens, and conducting outreach to inform teens about available services. Although the Title X Family Planning Program serves teens in the remaining states, these states do not include Title X programs in their TPP strategies. Some states also included Medicaid expansions to improve access to family planning. Other states' Medicaid managed care programs also allow enrollees to obtain family planning services from other health care providers. Federal, state, and local governments and private entities fund state TPP activities. In the six states where data were available, the federal government provided a large share of the funds states use and distribute to local communities for teen pregnancy prevention. (See table 3.) In these six states, the federal share of total TPP funding ranged from 74 percent in Georgia to 12 percent in California. The primary mechanisms by which states receive federal funds for TPP efforts include block grants, entitlement programs, and categorical programs. Because federal funds provided through many of these programs are not designated specifically for teen pregnancy prevention, states have some flexibility in deciding what activities to support with federal funding and how much to devote to TPP efforts. The federal government also provides grants directly to local communities to fund TPP initiatives. Officials in the states we visited said that they do not keep track of funds communities receive directly from the federal and local governments or from private contributions.
Federal welfare reform legislation contained several provisions related to teen pregnancy prevention, but the law did not require major changes to the TPP strategies of the states we reviewed. Before federal welfare reform, the eight states were already requiring teen mothers to live at home and stay in school in order to continue receiving welfare benefits—key welfare reform provisions. However, at the time of our review, state officials had mixed reactions to other welfare reform provisions intended to reduce teen pregnancy. Only one of the eight states currently plans to apply for an out-of-wedlock bonus, and all states were concerned about the prescriptive requirements surrounding federal grants for abstinence education, although each applied for and received funding. The eight states in our review had already begun requiring teen parents receiving welfare to live at home or in supervised living arrangements and stay in school or job training to receive assistance—requirements that were subsequently included in federal welfare reform. Officials in some states said they believe that these provisions may deter teen parents from having more children until they finish school and become self-sufficient and may discourage other teens from having a first child. In addition, all the states' TPP strategies included a teen subsequent pregnancy prevention component that emphasized school completion and prevention of another pregnancy, and some states included activities to inform teens of the welfare requirements. For example, for more than 10 years, California's Adolescent Family Life and Cal-Learn programs have encouraged pregnant and parenting teens to complete school and have provided these teens with case management and health and social services. Officials and teenagers in two of the states we visited said that they believe the states' requirements related to school and living arrangements played a part in preventing some teens from getting pregnant. Federal welfare reform legislation provides a financial incentive for reducing the ratio of out-of-wedlock births to all births within the state. According to the proposed regulations for the "Bonus to Reward Decrease in Illegitimacy" provision, states can receive awards totaling up to $100 million annually for 4 fiscal years starting in fiscal year 1999 for reducing the ratio of out-of-wedlock births without increasing the abortion rate. Each eligible state can receive up to $25 million a year. As proposed, the bonus would be based on a calculation of birth and abortion rates for a state's population as a whole; bonuses would not be based on reductions for specific populations, such as teenagers. The five states that demonstrate the largest proportionate decrease in their out-of-wedlock birth ratios between the most recent 2 years and the prior 2-year period will be potentially eligible for a bonus award. Among the eight states we reviewed, state officials had mixed views about their chances of competing successfully for the bonus. Some say they will likely not be competitive because they are focusing their prevention efforts on teens rather than adult women, who account for most out-of-wedlock births; other states say they may not be eligible because they lack the abortion data needed to compete. For example, California does not have an abortion reporting system for the data required under the proposed rules and, therefore, is unsure of its ability to compete.
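To clarify how the proposed bonus ranking would work, the following is a minimal sketch, in Python, of the proportionate-decrease calculation described above; the birth counts and function names are purely hypothetical illustrations of ours, not HHS data or rules language.

# Illustrative only: hypothetical birth counts, not data from any state.
def out_of_wedlock_ratio(out_of_wedlock_births, total_births):
    """Share of all births in a 2-year period that occurred out of wedlock."""
    return out_of_wedlock_births / total_births

def proportionate_decrease(prior_ratio, recent_ratio):
    """Relative decline in the ratio between the prior and most recent 2-year periods."""
    return (prior_ratio - recent_ratio) / prior_ratio

# Hypothetical 2-year totals for one state.
prior_period = out_of_wedlock_ratio(out_of_wedlock_births=30_000, total_births=100_000)   # 0.300
recent_period = out_of_wedlock_ratio(out_of_wedlock_births=28_500, total_births=100_000)  # 0.285

print(f"Proportionate decrease: {proportionate_decrease(prior_period, recent_period):.1%}")  # 5.0%
# Under the proposed rules, the five states with the largest such decreases
# (provided their abortion rates did not increase) would potentially be eligible for a bonus.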
Illinois and Maryland had concerns about their abortion data being overstated because of current limitations in capturing information on marital status and residency. Oregon's state law prohibits marriage under the age of 17 and, because the bonus encourages marriage, state officials do not believe the state will be competitive. Georgia, Maine, and Vermont will continue to focus their prevention efforts on teens, but since most out-of-wedlock births in these states occur among women 20 or older, these states believe they will not be competitive. Conversely, Louisiana—with its high teen birth rate—is very interested in getting any financial assistance available to support its TPP efforts and, thus, plans to compete for the bonus. Welfare reform also included a provision to enhance efforts to provide sexual abstinence education and authorized $50 million annually for 5 years in grants to states that choose to develop programs for this purpose. States must provide 3 state dollars in matching funds for every 4 federal dollars spent. States, local governments, and private sources often provide such funds in the form of cash or in-kind contributions, such as building space, equipment, or services. The funding can be used for abstinence-only education or mentoring, counseling, and adult supervision programs to promote abstinence until marriage and cannot be combined with programs that provide information on both abstinence and contraception. States had some concerns about the restrictive nature of the abstinence programs. One concern was that implementing education programs that stressed only abstinence would interfere with states' efforts to develop and continue comprehensive programming. Maine, for example, encourages comprehensive sex education in the schools, and officials felt that abstinence-only programs were not consistent with the state's attempts to provide education that addresses both abstinence and contraception. Some states were also concerned that the research on abstinence-only education was limited. Moreover, they noted that the available data suggested that such programs have little or no effect on the initiation of sex, while research on programs that provide information on both abstinence and contraception shows that these programs do have some effect. Officials in seven of the eight states were also concerned about how to provide the required matching funds without affecting the comprehensive programs they already had in place. Despite these concerns, all the states we visited applied for and received the federal funding to either initiate new programs or expand existing abstinence efforts. Fiscal year 1998 federal grants to the states for various abstinence-only initiatives ranged from $69,855 for Vermont to $5,764,199 for California. (See table 4.) As of June 1998, six of the eight states we visited had begun to implement their abstinence-only initiatives. In California, the state legislature did not approve the Governor's proposal to implement the abstinence program, thereby preventing the use of federal funds. California has until September 1999 to approve a program and use the federal funds. Although Louisiana had received HHS approval on the basis of its initial application, the state withdrew the proposal in light of state pressure to implement a stronger abstinence program. HHS is currently reviewing the state's revised plan.
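The abstinence education matching requirement noted above (3 state dollars for every 4 federal dollars) can be illustrated with a short calculation; the federal allotment in this Python sketch is a hypothetical figure of ours, not an actual grant amount.

# Illustrative only: the federal allotment below is hypothetical, not an actual grant amount.
STATE_DOLLARS_PER_FEDERAL = 3 / 4  # 3 state dollars for every 4 federal dollars

def required_state_match(federal_allotment):
    """State cash or in-kind contribution needed to draw down a federal allotment."""
    return federal_allotment * STATE_DOLLARS_PER_FEDERAL

federal = 1_000_000                    # hypothetical federal abstinence education allotment
state = required_state_match(federal)  # 750,000
total = federal + state                # 1,750,000

print(f"State match: ${state:,.0f}")
print(f"State share of combined funding: {state / total:.0%}")  # about 43 percent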
All of the states we visited had a variety of efforts under way to assess state TPP programs, including monitoring birth rates and conducting program evaluations. However, few of the evaluations measure program effect on the number of teens who become pregnant or on outcomes closely related to teen pregnancy, such as sexual and contraceptive behavior or high school achievement. Most of the states' evaluations are measuring other outcomes, such as changes in knowledge, attitude, and behavioral intentions—outcomes that have been shown to be only moderate or weak predictors of teen pregnancy—or are monitoring program processes to determine whether certain aspects of programs were operating as intended, such as whether procedures and protocols were being followed. Some states are using performance measurement systems intended to assess their progress toward achieving TPP goals and improve accountability, but these alone will provide little information on program effectiveness. At the time of our review, all eight states were tracking the number of teen births and conducting evaluations of program operations, known as process evaluations. These data and evaluations enable states to know, for example, the number of program participants and whether programs were following procedures; however, they do not provide information on whether the program has had an effect on particular outcomes. While all states had begun evaluations that measure program effect on outcomes, most of the outcomes evaluated were of the type that research shows to be moderate or weak predictors of teen pregnancy. (See table 5.) Four states—California, Georgia, Illinois, and Maryland—had evaluations under way for some of their programs that would measure program effect on outcomes that research results have shown to be closely related to teen pregnancy, such as changes in sexual or contraceptive behavior or school achievement. However, most of the states' outcome evaluations tended to measure program effects on knowledge, attitudes, and behavioral intention. Although evaluations of these indicators are useful, they do not necessarily show the long-term effects of the program or, more importantly, the effect the program has on teen pregnancy. The process evaluations being conducted in the eight states typically measured the number of clients served, types of services received, client responses to certain activities, and procedures and protocols followed. States use this information to monitor, evaluate, and modify program operations. In Maine and Vermont, for example, teens who used family planning clinics were surveyed to evaluate their satisfaction with the hours and locations of clinics, the types of services provided, and the overall appearance of the facility. The results were used to improve the delivery of teen-oriented services. States also used birth rates to track overall progress. Vermont officials told us that rather than conducting evaluations on each component in its strategy, the state's oversight efforts focus on teen birth and pregnancy rates and responses to the state's YRBS. These officials further believe that the availability of many TPP programs is responsible for the state's low teen birth rate. Four states—California, Georgia, Illinois, and Maryland—are evaluating key programs in their TPP strategies that will likely give state officials some insight into the impact these programs are having on outcome measures closely related to teen pregnancy.
At least three of these evaluations will use more rigorous designs and include comparison groups and follow-up. Georgia has awarded a contract for a 4-year evaluation that will determine the effect of its key program—Teen Plus—on contraceptive use as well as on teen pregnancies and births. The results of this evaluation will give state policymakers insight into whether the presence of the clinical services offered at the centers improved teen-pregnancy-related outcomes. California's Community Challenge Grant Program is evaluating program effect on delay of sexual activity, contraceptive use, and school and job achievement and comparing results of its program participants with a group of nonprogram participants after 1 year. Illinois plans to evaluate the effect of its after-school program by assessing high school drop-out rates, graduation rates, and births to teens under age 18 and comparing these results with those for similar communities that did not participate in the program. Maryland plans to track participants in its after-school programs over 5 years to assess program effect on teen pregnancy. Two states we visited have used the results of previous outcome evaluations to modify their strategies. For example, when evaluation results of Illinois' teen subsequent pregnancy prevention program showed an increased rate of school completion and a lower rate of subsequent pregnancy among participants, the state expanded the program to other communities. When an outcome evaluation of a California education program that focused on postponing sexual activity of 12- to 14-year-olds showed some gain in knowledge but no delay in sexual initiation, improvement in birth control use, or reduction in teen pregnancy, the state discontinued the program and implemented a more comprehensive TPP program. Officials in most states we visited expressed interest in knowing the effect of their programs on teen pregnancy. However, state officials said that available funding and resources limited their ability to conduct rigorous and long-term outcome evaluations, which research indicates may be necessary to measure program effectiveness. Also, some program staff are reluctant to spend program dollars on evaluations. Four states we visited—Illinois, Maine, Maryland, and Oregon—were implementing performance measurement systems. Performance measurement—the ongoing monitoring and reporting of program accomplishments, particularly toward preestablished goals—is intended to improve program accountability and performance by requiring programs to establish and meet agreed-upon performance goals. In assessing their progress, states can use process, output, or outcome measures, or some combination of these. To measure progress toward its goal of reducing teen pregnancy, Oregon plans to compare program performance measures—including the number of students remaining abstinent, the percentage of sexually active teens using contraception, and the percentage of teen mothers with no subsequent births—with established goals. Oregon has adopted an official statewide benchmark for the pregnancy rate among girls aged 10 through 17: The state has set a goal of reducing this rate to 15 by the year 2000 and to 10 by the year 2010. Maine requires all state health service contracts to be performance based and has established specific goals and objectives against which teen pregnancy programs are to be measured. The state plans to use assessment results in budgeting decisions.
Maryland's Partnership for Children and Families performance management system will measure teen birth rates, among other indicators. Illinois, which is in the early stages of developing its program performance measurement system, plans to use performance measurement in all program and service contracts, including teen pregnancy prevention. The federal government funds numerous TPP programs and supports research, data collection, and surveillance on indicators related to teen pregnancy. Although a number of federal agencies provide funding, HHS has the primary federal role in supporting programs to reduce teen pregnancy. In all, 27 different HHS programs are available to states and local communities to support teen pregnancy prevention. Some of the funds are solely for teen pregnancy prevention, but others, such as the Maternal and Child Health Block Grant, allow states to fund various activities that improve the health of women, infants, and children. Although HHS could not isolate all of the funding specifically for TPP efforts, it was able to identify at least $164 million in fiscal year 1997. HHS also supports research, data collection, and surveillance related to teen pregnancy prevention and, in some cases, evaluates TPP-related programs and demonstration projects at the state and local levels. HHS has evaluated very few of its programs to determine whether and how these programs affect teen pregnancies, births, or closely related behavioral outcomes. HHS recently began program evaluation efforts for two of its TPP programs—the multisite Community Coalition Partnership Program and the new Abstinence Education Program—that will measure the programs' effects on behavior outcomes closely related to teen pregnancy. Also, in its strategic plan required by the Government Performance and Results Act of 1993, HHS established performance measures against which the performance of HHS-funded activities will be assessed. Nine federal agencies administer programs that could be used to support TPP efforts: HHS; the Departments of Agriculture, Defense, Education, Housing and Urban Development, Justice, and Labor; the Corporation for National Service; and the Office of National Drug Control Policy. (See app. II for a list of these agencies' programs related to teen pregnancy prevention.) HHS has the primary federal leadership role in teen pregnancy prevention. In fiscal year 1997, the agency provided at least $164 million in federal support to reduce teen pregnancy. About $126 million of this total was from Medicaid and the Title X Family Planning Program. Another $28 million was for two of the three federal programs whose primary goal is teen pregnancy prevention—the Adolescent Family Life (AFL) Program and the Community Coalition Partnership Program for the Prevention of Teen Pregnancy (CCPPPTP). The remaining $10 million was from the Preventive Health and Health Services Block Grant and several broad youth programs that were able to isolate specific funds for teen pregnancy prevention. Beginning in fiscal year 1998, HHS provided states with $50 million in funding for the new Abstinence Education Program (AEP). AFL and CCPPPTP funds go directly to local communities and may not be included in a state's strategy, whereas AEP funding goes directly to states. Many other TPP initiatives are funded through block grants, but HHS could not isolate the amount of additional funding.
Because of the nature of block grant programs, funds are not specifically allocated to teen pregnancy at the federal level, and states have some flexibility in deciding how to use them. The states we visited said they relied on programs such as the Maternal and Child Health Block Grant, the Social Services Block Grant, and TANF to support their TPP strategies. Other funding streams support programs that address other issues but may include teen pregnancy prevention as one of their objectives. For example, the Community Services Block Grant funds programs that address poverty in communities, but the programs can include teen-pregnancy-related initiatives, such as family planning, substance abuse prevention, and job counseling. Table 6 shows fiscal year 1997 funding available through HHS that could be used to support teen pregnancy prevention. To complement the activities summarized in table 6, HHS is developing a TPP strategy at the federal level. In 1997, HHS released the National Strategy to Prevent Teen Pregnancy, a departmentwide effort to prevent out-of-wedlock teen pregnancy and to support and encourage teens to remain abstinent. As part of the strategy, HHS has reported that it is strengthening its efforts to improve data collection, research and evaluation, and the dissemination of information. In addition, HHS said it will strengthen its support for promising research-based approaches that are tailored to the unique needs of individual communities. In addition to its funding for programs, HHS supports data collection, surveillance, and research related to teen pregnancy prevention through broader public health activities and research on issues such as adolescent health. Within HHS, CDC has the primary role of monitoring teen pregnancy and births by collecting data on pregnancies, live births, fertility, contraception, and teen sexual behavior and by collaborating with state vital statistics offices to develop data on the incidence and trends of teen pregnancies and births. CDC also monitors sexual risk behaviors among high school students at national and state levels and monitors TPP policies and programs implemented by the nation's state education agencies, school districts, and schools. The National Institutes of Health (NIH) supports research on the causes of and risks associated with teen pregnancy. (See table 7.) HHS has conducted very few evaluations to determine whether and how programs that it supports actually affect teen pregnancies, births, or the behavioral outcomes closely related to teen pregnancy. Because block grants—a source of funding used by the eight states to support their TPP strategies—give states flexibility in using funds, specific program evaluations are not typically required. Other programs that can support TPP activities do not evaluate their effect on teen pregnancy because teen pregnancy prevention is not their primary goal. HHS does require evaluations of three HHS programs whose primary goal is teen pregnancy prevention. Two of these program evaluations will measure program effects on teen sexual behavior, use of contraceptives, and teen births. AFL, one of the three TPP programs, provides local and state grantees with funding for abstinence programs. The enabling legislation requires annual evaluations, which are to be funded by not less than 1 percent and not more than 5 percent of program funds.
According to HHS officials, evaluations of AFL programs have shown positive short-term results in increased knowledge and changed attitudes but have not examined program effects on teen pregnancy. CDC's Community Partnership Program requires all grantees to evaluate program processes and allocates about 20 percent of program funds to evaluations. All 13 of the program's grantee communities will collect similar data, including behavioral data that are closely related to teen pregnancy, so that comparisons across sites can be made. Six of the 13 communities are participating in enhanced evaluations that will include a special focus on certain program components. CDC is providing supplementary funding and technical assistance to the communities participating in the enhanced evaluations. Although states participating in AEP are not required to evaluate their abstinence-only programs, the Balanced Budget Act of 1997 authorized HHS to use up to $6 million in fiscal years 1998 and 1999 to evaluate AEP. In May 1998, HHS issued a request for proposals to evaluate the effectiveness of selected AEP programs. The evaluation's goal is to determine the effects of the abstinence education programs in achieving key outcomes, including reduced rates of sexual activity, teen pregnancies and births, and sexually transmitted diseases. In August 1998, HHS awarded the contract to Mathematica Policy Research, Inc. In addition to these evaluations, HHS is currently evaluating or has recently completed evaluating two multisite teen parent programs that measure TPP outcomes, including teen subsequent pregnancies and births, sexual activity, contraceptive practices, and other measures related to educational attainment, employment, welfare dependency, and child well-being. According to HHS officials, HHS plans to direct additional funds toward evaluation of the specific TPP programs the agency funds. As part of its national strategy, HHS announced in May 1998 the availability of $300,000 to enhance ongoing state, local, or private evaluations. HHS officials said they recognize that even more program evaluations need to be done. According to some experts, higher-quality evaluation is also needed. These evaluations should measure program effects on the behavioral goals of the program and risk factors associated with teen pregnancy; they should also follow program participants to learn about long-term effects. HHS officials also suggested that evaluation dollars be used selectively on promising programs and not be spread too thinly. As required under the Results Act, HHS recently began implementing performance goals and measures for all of its programs, including those intended to prevent teen pregnancy. In 1997, the Maternal and Child Health Bureau worked with states and other stakeholders to pilot test the new Results Act requirements on the Maternal and Child Health Services Block Grant Program. For this program, state grantees must set numeric goals for each performance measure and are required to report progress in achieving these goals. The Bureau and its eight pilot states—including Maine, a state in our review—collaborated to pretest the new reporting requirements, such as those related to reducing the birth rate among teens aged 15 to 17—one of the 18 national core performance measures. According to an HHS official, the pilot resulted in the automated reporting of more uniform data and a much more streamlined process, making it easier for Bureau officials to assess program performance against goals.
The official stated that the piloted process has the potential to improve state accountability for progress toward state goals. Officials in Maine said that the experience they gained from participating in the pilot prompted them to reexamine priorities and focus on the current needs of the state's Maternal and Child Health Services Block Grant population. In developing its 1998 plan, Maine added a state-initiated performance measure of lowering the number of unintended births among women under age 24. Maine officials also reported that the new application and reporting process helped them make resource decisions that were more consistent with agreed-upon state and federal priorities. The federal government provides millions of dollars to support TPP efforts. Although the states in our review relied on research findings in developing certain aspects of their strategies, too few programs are systematically evaluated to guide TPP program efforts. Some programs within the state strategies are being evaluated, but most evaluations do not measure the known risk factors or outcomes linked to teen pregnancy, such as school achievement, delay of sexual initiation, and contraceptive and sexual behavior. Furthermore, most do not allow for sufficient follow-up to determine long-term program effects. Evaluation efforts at the federal level have also been limited. However, HHS is beginning two major evaluations of TPP programs that will look at their long-term impact on outcomes known to be related to teen pregnancy. The results of evaluations that focus on outcomes related to teen pregnancy should help states, the federal government, and others choose the programs or approaches most likely to be effective in preventing teen pregnancy. Four of the states we visited and the federal government are establishing performance measurement systems to allow for assessments of program performance toward achieving established TPP goals and to help improve accountability. Although performance measurement alone will not provide the information necessary to understand the link between the programs and their effects on reducing teen pregnancy, the Results Act encourages a complementary role for performance measurement results and program evaluation findings. Performance measurement combined with program evaluations of outcome measures that are predictors of teen pregnancy is more likely to yield results that can be used to improve the overall effectiveness of states' TPP efforts. We obtained comments on a draft of this report from HHS; the eight states we visited; the Director of the Center for Reproductive Health Policy Research, University of California; and the Director of the National Campaign to Prevent Teen Pregnancy. The reviewers generally agreed with the findings and conclusions in the report. HHS suggested that the report's description of the Department's commitment to evaluating TPP programs could be expanded to include other efforts that evaluate how teen parent programs affect teen births and behavioral outcomes related to teen pregnancy. We added the information HHS provided. Each reviewer provided additional information and clarification and suggested technical changes, which we incorporated where appropriate. We plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of HHS, officials of the states included in our review, appropriate congressional committees, and other interested parties.
We will also make copies available to others upon request. Please contact me at (202) 512-7119 if you or your staff have any questions about this report. Other major contributors to this report were James O. McClyde, Assistant Director; Martha Elbaum; and Karyn Papineau. In response to congressional concern about teen pregnancy, we were asked to identify the strategies states have been implementing to prevent teen pregnancy and how states fund these strategies, determine whether federal welfare reform had an effect on these strategies, identify these states' efforts to evaluate their pregnancy prevention efforts, and describe the federal government's role in supporting state efforts to prevent teen pregnancy. To accomplish these objectives, we first contacted HHS and experts from the National Campaign to Prevent Teen Pregnancy, the Urban Institute, the National Governors' Association, the Annie E. Casey Foundation, and the Henry J. Kaiser Family Foundation to learn about states that had strategies or were embarking on interesting approaches. Complementing this information, we used HHS' teen birth rate data by state from 1991 to 1994, the most current data available at the time, to determine which states had high, low, and moderate birth rates. Subsequent to our review of state-level data in April 1997, the National Center for Health Statistics published state-level teen birth rates for 1995 and 1996. The variations among states in 1995 and 1996 were not markedly different from those reported for 1994. Using this information and the 1994 data, we selected eight states for review: California, Georgia, Illinois, Louisiana, Maine, Maryland, Oregon, and Vermont. All had their TPP strategies in place or had initiatives or reorganization under way. The teen birth rates in these states were high, low, or stable. (See table I.1.) These states provided a cross section of approaches to teen pregnancy prevention, but the results of our work cannot be generalized nationally—particularly since we chose states that had strategies under way. Table I.1 shows changes in birth rates per 1,000 teens aged 15 to 19 in the eight selected states for 1991, 1994, and 1996. To learn about each state's TPP strategy, we interviewed state officials within the lead agencies responsible for TPP efforts, along with officials from other state agencies that had a supporting role in the strategy, as shown in table I.2. To describe state strategies and programs and the effect welfare reform may have had on these efforts, we obtained and analyzed program documents and data in each of the case study states and obtained descriptions of applicable laws. We also interviewed local program officials from county governments, local health departments, and community organizations responsible for implementing TPP programs. In the states where Title X Family Planning Program funding does not go directly to the state, we interviewed officials of the nonprofit corporations that administer the program. In the states where major private TPP programs were operating independently of the state strategy, we interviewed relevant officials to determine their involvement with the states. To determine how states evaluate their strategies and programs, we reviewed and analyzed completed evaluations and discussed with officials their plans to conduct additional evaluations. We also reviewed the literature on the current status of evaluating TPP programs and conducted interviews with program evaluators.
To determine how much states spend on teen pregnancy prevention, we asked each state to provide financial information for its fiscal year 1997 programs. We asked the states to provide us the dollar amount and sources of federal and state funding for programs to prevent teen pregnancy. Some states were able to identify the amount of money from various federal sources, but some states were unable to break out TPP spending from the various block grants used to fund the effort. Federal requirements do not mandate that funding for TPP efforts be separated from broader categories, such as the Maternal and Child Health Block Grant, and block grants offer states discretion in the use of funds. We did not verify the funding information the states provided. To obtain information on the federal role in supporting state efforts to reduce teen pregnancy, we met with HHS officials, who identified all agencies within HHS that administer TPP programs along with other federal agencies that fund TPP efforts. Through HHS, we asked each HHS agency and the other federal agencies to provide us information on the programs they administer that can affect teen pregnancy. We also asked them to provide information on the programs' total funding and the amount of that funding devoted directly to teen pregnancy prevention. Many of the programs could not isolate funding for teen pregnancy prevention because it was not an explicit focus of their programs. We did not verify the funding data provided. We performed our work between April 1997 and November 1998 in accordance with generally accepted government auditing standards. The Cooperative State Research, Education, and Extension Service links education resources and Department of Agriculture programs and works with land grant universities and other educational institutions. A systemwide initiative on children, youth, and families at risk has highlighted programs and research related to teen pregnancy prevention. In addition, the service reaches 5.6 million youth through 4-H programs managed by state land grant partners. Programs vary from state to state; state land grant institutions typically do not have a budget line item for teen pregnancy prevention. Supports youth programs that offer no specific efforts to prevent teen pregnancy. Most Department of Defense youth program staff can refer youth to appropriate education or health programs, and many youth programs provide curricula geared to informing teens about pregnancy prevention services offered by military medical treatment facilities. Some educational activities at U.S. installations have prevention education for teens and preteens. Programs are not authorized to allocate money for TPP activities, but some of the money distributed to states in the form of grants may be used for that purpose. No specific programs for teen pregnancy prevention; however, the Department does have some grant programs that local grantees may use for broad purposes, such as youth development programs with more specific teen pregnancy prevention goals. Administers programs focused on at-risk youth and designed to reduce juvenile delinquency, which may have a tangential impact on teen pregnancy. Youth programs that target poor areas and at-risk youth and seek to ameliorate youth problems by providing services and education, training, and work opportunities. Programs may include education, counseling, and services related to teen pregnancy prevention.
Volunteers through the Corporation's volunteer program work with communities on various activities, some of which may be TPP activities or youth development programs. The Office of National Drug Control Policy does not provide direct programming on teen pregnancy prevention; it coordinates the substance abuse prevention efforts of other federal agencies, with a focus on youth. | Pursuant to a congressional request, GAO provided information on: (1) state strategies to reduce teen pregnancy and how states fund these efforts; (2) how welfare reform affected states' strategies; (3) the extent to which programs that are part of states' prevention strategies are evaluated; and (4) what teen pregnancy prevention activities the federal government supports. GAO noted that: (1) the eight states in GAO's review have, over time, developed teenage pregnancy prevention (TPP) strategies involving numerous programs that fall into six areas; (2) in general, these states targeted high-risk populations and communities and tailored programs to three different groups of teens; (3) while strategies were applicable statewide, states typically relied on local communities to select and implement specific programs from an array of alternatives; (4) states generally gave localities the flexibility to choose the type and mix of programs they wanted to put in place; (5) some communities chose not to implement programs that the state strategy encouraged; (6) all of the states GAO visited relied on federal funding to support their strategies, and in many of the states, federal funding exceeded state funding for TPP; (7) the 1996 federal welfare reform legislation had a limited effect overall on these states' TPP strategies, in part, because the states in GAO's review already required that teen parents live at home and stay in school to receive assistance--two key provisions now mandatory under federal welfare reform; (8) only two of the eight states plan to compete for the bonus provided by the law to states that show the greatest success in reducing out-of-wedlock births; (9) the other states are unlikely to compete because they lack the data needed to show reductions or because their prevention efforts focus on teens who account for a relatively small proportion of out-of-wedlock births; (10) although the eight states initially had concerns about the prescriptive nature and administrative requirements of the new law's grant program for sexual abstinence education, the eight states applied for the grants, received funding, and plan to either initiate new abstinence education programs or expand programs that they had already included as part of
their strategies to prevent teen pregnancy; (11) although all eight states are tracking changes in teen births, few are evaluating the effect of their TPP programs on teen pregnancy; (12) only four states are attempting to link some of their TPP efforts to changes in teen pregnancies, births, or other closely related outcomes; (13) for fiscal year 1997, the Department of Health and Human Services identified at least $164 million for TPP programs or services; and (14) however, funding specifically for TPP activities could not be isolated at the federal level, primarily because of the flexibility on spending decisions given to states. |
Despite some progress in addressing staffing shortfalls since 2006, State's diplomatic readiness remains at risk for two reasons: persistent staffing vacancies and experience gaps at key hardship posts that are often at the forefront of U.S. policy interests. First, as of September 2008, State had a 17 percent average vacancy rate at the posts of greatest hardship (which are posts where staff receive the highest possible hardship pay). Posts in this category include such places as Peshawar, Pakistan, and Shenyang, China. This 17 percent vacancy rate was nearly double the average vacancy rate of 9 percent at posts with no hardship differentials. Second, many key hardship posts face experience gaps due to a higher rate of staff filling positions above their own grades (see table 1). As of September 2008, about 34 percent of mid-level generalist positions at posts of greatest hardship were filled by officers in such above-grade assignments—15 percentage points higher than the rate for comparable positions at posts with no or low differentials. At posts we visited during our review, we observed numerous officers working in positions above their rank. For example, in Abuja, Nigeria, more than 4 in every 10 positions were staffed by officers in assignments above grade, including several employees working in positions two grades above their own. Further, to fill positions in Iraq and Afghanistan, State has frequently assigned officers to positions above their grade. As of September 2008, over 40 percent of officers in Iraq and Afghanistan were serving in above-grade assignments. Several factors contribute to gaps at hardship posts. First, State continues to have fewer officers than positions, a shortage compounded by the personnel demands of Iraq and Afghanistan, which have resulted in staff cutting their tours short to serve in these countries. As of April 2009, State had about 1,650 vacant Foreign Service positions in total. Second, State faces a persistent mid-level staffing deficit that is exacerbated by continued low bidding on hardship posts. Third, although State's assignment system has prioritized the staffing of hardship posts, it does not explicitly address the continuing experience gap at such posts, many of which are strategically important, yet are often staffed with less experienced officers. Staffing and experience gaps can diminish diplomatic readiness in several ways, according to State officials. For example, gaps can lead to decreased reporting coverage and loss of institutional knowledge. In addition, gaps can lead to increased supervisory requirements for senior staff, detracting from other critical diplomatic responsibilities. During our review, we found a number of examples of the effect of these staffing gaps on diplomatic readiness, including the following. The economic officer position in Lagos, which focuses solely on energy, oil, and natural gas, was not filled in the 2009 cycle. The incumbent explained that, following his departure, his reporting responsibilities would be split between officers in Abuja and Lagos. He said this division of responsibilities would diminish the position's focus on the oil industry and potentially lead to the loss of important contacts within both the government ministries and the oil industry.
An official told us that a political/military officer position in Russia was vacant because of the departure of the incumbent for a tour in Afghanistan, and the position's portfolio of responsibilities was divided among other officers in the embassy. According to the official, this vacancy slowed negotiation of an agreement with Russia regarding military transit to Afghanistan. The consular chief in Shenyang, China, told us he spends too much time helping entry-level officers adjudicate visas and, therefore, less time managing the section. The ambassador to Nigeria told us that spending time helping officers working above grade is a burden and interferes with policy planning and implementation. A 2008 OIG inspection of N'Djamena, Chad, reported that the entire front office was involved in mentoring entry-level officers, which was an unfair burden on the ambassador and deputy chief of mission, given the challenging nature of the post. State uses a range of incentives to staff hardship posts at a cost of millions of dollars a year, but their effectiveness remains unclear due to a lack of evaluation. Incentives to serve in hardship posts range from monetary benefits to changes in service and bidding requirements, such as reduced tour lengths at posts where dangerous conditions prevent some family members from accompanying officers. In a 2006 report on staffing gaps, GAO recommended that State evaluate the effectiveness of its incentive programs for hardship post assignments. In response, State added a question about hardship incentives to a recent employee survey. However, the survey does not fully meet GAO's recommendation for several reasons, including that State did not include several incentives in the survey and did not establish specific indicators of progress against which to measure the survey responses over time. State also did not comply with a 2005 legal requirement to assess and report to Congress on the effectiveness of increasing hardship and danger pay from 25 percent to 35 percent in filling "hard to fill" positions. The lack of an assessment of the effectiveness of the danger and hardship pay increases in filling positions at these posts, coupled with the continuing staffing challenges in these locations, makes it difficult to determine whether these resources are properly targeted. Recent legislation increasing Foreign Service officers' basic pay will increase the cost of existing incentives, thereby making it all the more important that State evaluate its incentives for hardship post assignments to ensure resources are effectively targeted and not wasted. Although State plans to address staffing gaps by hiring more officers, the department acknowledges it will take years for these new employees to gain the experience they need to be effective mid-level officers. In the meantime, this experience gap will persist, since State's staffing system does not explicitly prioritize the assignment of at-grade officers to hardship posts. Moreover, despite State's continued difficulty attracting qualified staff to hardship posts, the department has not systematically evaluated the effectiveness of its incentives for hardship service. Without a full evaluation of State's hardship incentives, the department cannot obtain valuable insights that could help guide resource decisions to ensure it is most efficiently and effectively addressing gaps at these important posts.
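The staffing indicators discussed above (vacancy rates and the share of mid-level positions filled by officers serving above grade) are simple ratios of position counts. The Python sketch below shows how such rates compare across hardship categories; the position counts are hypothetical figures of ours, chosen only to mirror the percentages cited in this report, not State's actual workforce data.

# Illustrative only: position counts are hypothetical, not State's actual data.
def vacancy_rate(vacant_positions, total_positions):
    """Share of authorized positions that are unfilled."""
    return vacant_positions / total_positions

def above_grade_rate(above_grade_officers, filled_midlevel_positions):
    """Share of filled mid-level positions staffed by officers serving above their grade."""
    return above_grade_officers / filled_midlevel_positions

# Hypothetical counts chosen to mirror the rates cited in the report
# (17 vs. 9 percent vacancies; 34 vs. 19 percent above-grade assignments).
greatest_hardship = {"total": 400, "vacant": 68, "filled_midlevel": 200, "above_grade": 68}
no_differential = {"total": 400, "vacant": 36, "filled_midlevel": 200, "above_grade": 38}

for label, post in [("greatest hardship", greatest_hardship), ("no differential", no_differential)]:
    print(f"{label:17s} vacancy {vacancy_rate(post['vacant'], post['total']):.0%}, "
          f"above grade {above_grade_rate(post['above_grade'], post['filled_midlevel']):.0%}")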
As of October 31, 2008, 31 percent of officers in all worldwide language-designated positions did not meet both the foreign language speaking and reading proficiency requirements for their positions, up slightly from 29 percent in 2005. In particular, State continues to face foreign language shortfalls in areas of strategic interest—such as the Near East and South and Central Asia, where about 40 percent of officers in language-designated positions did not meet requirements. Gaps were notably high in Afghanistan, where 33 of 45 officers in language-designated positions (73 percent) did not meet the requirement, and in Iraq, with 8 of 14 officers (57 percent) lacking sufficient language skills. State has defined its need for staff proficient in some languages as "supercritical" or "critical," based on criteria such as the difficulty of the language and the number of language-designated positions in that language, particularly at hard-to-staff posts. Shortfalls in supercritical needs languages, such as Arabic and Chinese, remain at 39 percent, despite efforts to recruit individuals with proficiency in these languages (see figure 1). In addition, more than half of the 739 Foreign Service specialists—staff who perform security, technical, and other support functions—in language-designated positions do not meet the requirements. For example, 53 percent of regional security officers do not speak and read at the level required by their positions. When a post fills a position with an officer who does not meet the requirements, it must request a language waiver for the position. In 2008, the department granted 282 such waivers, covering about 8 percent of all language-designated positions. Past reports by GAO, State's Office of the Inspector General, the Department of Defense, and various think tanks have concluded that foreign language shortfalls could be negatively affecting U.S. national security, diplomacy, law enforcement, and intelligence-gathering efforts. Foreign Service officers we spoke to provided a number of examples of the effects of not having the required language skills, including the following. Consular officers at a post we visited said that because of a lack of language skills, they make adjudication decisions based on what they "hope" they heard in visa interviews. A security officer in Cairo said that without language skills, officers do not have any "juice"—that is, the ability to influence people they are trying to elicit information from. According to another regional security officer, the lack of foreign language skills may hinder intelligence gathering because local informants are reluctant to speak through locally hired interpreters. One ambassador we spoke to said that without language proficiency—which helps officers gain insight into a country—the officers are not invited to certain events and cannot reach out to broader, deeper audiences. A public affairs officer at another post said that the local media does not always translate embassy statements accurately, complicating efforts to communicate with audiences in the host country. For example, he said the local press translated a statement by the ambassador in a more pejorative sense than was intended, which damaged the ambassador's reputation and took several weeks to correct.
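The gap figures above are simple ratios of the rounded counts cited in this testimony, and the waiver statistic implies an approximate total number of language-designated positions. The short Python sketch below just reruns that arithmetic; the derived total of roughly 3,500 positions is an inference from the rounded figures, not a number State reported.

```python
# Recompute the reported language-gap ratios and the total number of
# language-designated positions implied by the waiver statistic.
# All inputs are the rounded figures cited in the testimony.

shortfalls = {
    "Afghanistan": (33, 45),  # (officers not meeting the requirement, positions)
    "Iraq": (8, 14),
}

for post, (short, total) in shortfalls.items():
    print(f"{post}: {short} of {total} officers short = {100 * short / total:.0f}%")

# 282 waivers covered about 8 percent of all language-designated positions,
# implying roughly 282 / 0.08, or about 3,500, such positions worldwide.
waivers = 282
waiver_share = 0.08
print(f"Implied language-designated positions: about {waivers / waiver_share:,.0f}")
```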
State’s current approach for meeting its foreign language proficiency requirements involves an annual review process to determine language- designated positions, training, recruitment, and incentives; however, the department faces several challenges to these efforts, particularly staffing shortages. State’s annual language designation process results in a list of positions requiring language skills. However, the views expressed by the headquarters and overseas officials we met with suggest State’s designated language proficiency requirements do not necessarily reflect the actual language needs of the posts. For example, because of budgetary and staffing issues, some overseas posts tend to request only the positions they think they will receive rather than the positions they actually need. Moreover, officers at the posts we visited questioned the validity of the relatively low proficiency level required for certain positions, citing the need for a higher proficiency level. For example, an economics officer at one of the posts we visited, who met the posts’ required proficiency level, said her level of proficiency did not provide her with language skills needed to discuss technical issues, and the officers in the public affairs section of the same post said that proficiency level was not sufficient to effectively explain U.S. positions in the local media. State primarily uses language training to meet its foreign language requirements, and does so mostly at the Foreign Service Institute in Arlington, Virginia, but also at field schools and post language training overseas. In 2008, the department reported a training success rate of 86 percent. In addition, the department recruits personnel with foreign language skills through special incentives offered under its critical needs language program and pays bonuses to encourage staff to study and maintain a level of proficiency in certain languages. The department has hired 445 officers under this program since 2004. However, various challenges limit the effectiveness of these efforts. According to State, two main challenges are overall staffing shortages, which limit the number of staff available for language training, and the recent increase in language-designated positions. The staffing shortages are exacerbated by officers curtailing their tours at posts, such as to staff the missions in Iraq and Afghanistan, which has led to a decrease in the number of officers in the language training pipeline. For example, officials in the Bureau of East Asian and Pacific Affairs told us of an officer who received nearly a year of language training in Vietnamese, yet cancelled her tour in Vietnam to serve in Iraq. These departures often force their successors to arrive at post early without having completed language training. As part of its effort to address these staffing shortfalls, in fiscal year 2009, State requested and received funding for 300 new positions to build a training capacity, intended to reduce gaps at posts while staff are in language training. State officials said that if the department’s fiscal year 2010 request for 200 additional positions is approved, the department’s language gaps will begin to close in 2011; however, State has not indicated when its foreign language staffing requirements will be completely met. 
Another challenge is the widely held perception among Foreign Service officers that State's promotion system does not consider time spent in language training when evaluating officers for promotion, which may discourage officers from investing the time required to achieve proficiency in certain languages. Although State Human Resources officials dispute this perception, the department has not conducted a statistically valid assessment of the impact of language training on promotions. State's current approach to meeting its foreign language proficiency requirements has not closed the department's persistent language proficiency gaps and reflects, in part, a lack of a comprehensive strategic direction. Common elements of comprehensive workforce planning—described by GAO as part of a large body of work on human capital management—include setting strategic direction that includes measurable performance goals and objectives as well as funding priorities, determining critical skills and competencies that will be needed in the future, developing an action plan to address gaps, and monitoring and evaluating the success of the department's progress toward meeting goals. In the past, State officials have asserted that because language is such an integral part of the department's operations, a separate planning effort for foreign language skills was not needed. More recently, State officials have said the department's plan for meeting its foreign language requirements is spread throughout a number of documents that address these requirements, including the department's Five-Year Workforce Plan. However, these documents are not linked to each other and do not contain measurable goals, objectives, resource requirements, and milestones for reducing the foreign language gaps. We believe that a more comprehensive strategic approach would help State to more effectively guide and assess progress in meeting its foreign language requirements. In our recently issued reports we made several recommendations to help State address its staffing gaps and language proficiency shortfalls. To ensure that hardship posts are staffed commensurate with their stated level of strategic importance and that resources are properly targeted, GAO recommends that the Secretary of State (1) take steps to minimize the experience gap at hardship posts by making the assignment of experienced officers to such posts an explicit priority consideration, and (2) develop and implement a plan to evaluate incentives for hardship post assignments. To address State's long-standing foreign language proficiency shortfalls, we recommend that the Secretary of State develop a comprehensive strategic plan with measurable goals, objectives, milestones, and feedback mechanisms that links all of State's efforts to meet its foreign language requirements. State generally agreed with our findings, conclusions, and recommendations and described several initiatives that address elements of the recommendations. In addition, State recently convened an inter-bureau language working group, which will focus on developing an action plan to address GAO's recommendations. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time. For questions regarding this testimony, please contact Jess T. Ford at (202) 512-4268 or fordj@gao.gov.
Individuals making key contributions to this statement include Godwin Agbara and Anthony Moran, Assistant Directors; Robert Ball; Joseph Carney; Aniruddha Dasgupta; Martin de Alteriis; Brian Hackney; Gloria Hernandez-Saunders; Richard Gifford Howland; Grace Lui; and La Verne Tharpes.
Department of State: Comprehensive Plan Needed to Address Persistent Foreign Language Shortfalls. GAO-09-955. Washington, D.C.: September 17, 2009.
Department of State: Additional Steps Needed to Address Continuing Staffing and Experience Gaps at Hardship Posts. GAO-09-874. Washington, D.C.: September 17, 2009.
State Department: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-07-1154T. Washington, D.C.: August 1, 2007.
U.S. Public Diplomacy: Strategic Planning Efforts Have Improved, but Agencies Face Significant Implementation Challenges. GAO-07-795T. Washington, D.C.: April 26, 2007.
Department of State: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-06-894. Washington, D.C.: August 4, 2006.
Overseas Staffing: Rightsizing Approaches Slowly Taking Hold but More Action Needed to Coordinate and Carry Out Efforts. GAO-06-737. Washington, D.C.: June 30, 2006.
U.S. Public Diplomacy: State Department Efforts to Engage Muslim Audiences Lack Certain Communication Elements and Face Significant Challenges. GAO-06-535. Washington, D.C.: May 3, 2006.
Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005.
State Department: Targets for Hiring, Filling Vacancies Overseas Being Met, but Gaps Remain in Hard-to-Learn Languages. GAO-04-139. Washington, D.C.: November 19, 2003.
Foreign Affairs: Effective Stewardship of Resources Essential to Efficient Operations at State Department, USAID. GAO-03-1009T. Washington, D.C.: September 4, 2003.
State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002.
| This testimony discusses U.S. diplomatic readiness, and in particular the staffing and foreign language challenges facing the Foreign Service. The Department of State (State) faces an ongoing challenge of ensuring it has the right people, with the right skills, in the right places overseas to carry out the department's priorities. In particular, State has long had difficulty staffing its hardship posts overseas, which are places like Beirut and Lagos, where conditions are difficult and sometimes dangerous due to harsh environmental and extreme living conditions that often entail pervasive crime or war, but are nonetheless integral to foreign policy priorities and need a full complement of qualified staff. State has also faced persistent shortages of staff with critical language skills, despite the importance of foreign language proficiency in advancing U.S. foreign policy and economic interests overseas. In recent years GAO has issued a number of reports on human capital issues that have hampered State's ability to carry out the President's foreign policy objectives. This testimony discusses (1) State's progress in addressing staffing gaps at hardship posts, and (2) State's efforts to meet its foreign language requirements. Despite a number of steps taken over a number of years, the State Department continues to face persistent staffing and experience gaps at hardship posts, as well as notable shortfalls in foreign language capabilities. A common element of these problems has been a longstanding staffing and experience deficit, which has both contributed to the gaps at hardship posts and fueled the language shortfall by limiting the number of staff available for language training. State has undertaken several initiatives to address these shortages, including multiple staffing increases intended to fill the gaps. However, the department has not undertaken these initiatives in a comprehensive and strategic manner. As a result, it is unclear when the staffing and skill gaps that put diplomatic readiness at risk will close.
As I mentioned earlier, as has been the case for the previous 6 fiscal years, the federal government continues to have a significant number of material weaknesses related to financial systems, fundamental recordkeeping and financial reporting, and incomplete documentation. Several of these material weaknesses (referred to hereafter as material deficiencies) resulted in conditions that continued to prevent us from forming and expressing an opinion on the U.S. government’s consolidated financial statements for the fiscal years ended September 30, 2003 and 2002. There may also be additional issues that could affect the consolidated financial statements that have not been identified. Major challenges include the federal government’s inability to properly account for and report property, plant, and equipment and inventories and related property, primarily at the Department of Defense (DOD); reasonably estimate or adequately support amounts reported for certain liabilities, such as environmental and disposal liabilities and related costs at DOD, and ensure complete and proper reporting for commitments and contingencies; support major portions of the total net cost of government operations, most notably related to DOD, and ensure that all disbursements are properly recorded; fully account for and reconcile intragovernmental activity and balances; demonstrate how net outlay amounts reported in the consolidated financial statements were related to net outlay amounts reported in the underlying federal agencies’ financial statements; and effectively prepare the federal government’s financial statements, including ensuring that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with GAAP. In addition to these material deficiencies, we identified four other material weaknesses in internal control related to loans receivable and loan guarantee liabilities, improper payments, information security, and tax collection activities. The material weaknesses identified by our work are discussed in more detail in appendix II, and their primary effects are described in appendix III. The ability to produce the data needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers and the Congress has been a long-standing challenge at most federal agencies. The results of the fiscal year 2003 assessments performed by agency inspectors general or their contract auditors under FFMIA show that these problems continue to plague the financial management systems used by most of the CFO Act agencies. While the problems are much more severe at some agencies than at others, their nature and severity indicate that overall, management at most CFO Act agencies lacks the full range of information needed for accountability, performance reporting, and decision making. These problems include nonintegrated financial systems, lack of accurate and timely recording of data, inadequate reconciliation procedures, and noncompliance with accounting standards and the U.S. Government Standard General Ledger (SGL). Agencies’ inability to meet the federal financial management systems requirements continues to be the major barrier to achieving compliance with FFMIA. 
Under FFMIA, CFO Act agency auditors are required to report, as part of the agencies’ financial statement audits, whether agencies’ financial management systems substantially comply with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the SGL at the transaction level. As shown in figure 1, auditors most frequently reported instances of noncompliance with federal financial management systems requirements. These instances of noncompliance involved not only core financial systems, but also administrative and programmatic systems. For fiscal year 2003, auditors for 17 of the 23 CFO Act agencies reported that the agencies’ financial management systems did not comply substantially with one or more of FFMIA’s three requirements. For the remaining 6 CFO Act agencies, auditors provided negative assurance, meaning that nothing came to their attention indicating that the agencies’ financial management systems did not substantially meet FFMIA requirements. The auditors for these 6 agencies did not definitively state whether the agencies’ systems substantially complied with FFMIA requirements, as is required under the statute. DHS is not subject to the requirements of the CFO Act and, consequently, is not required to comply with FFMIA. Accordingly, DHS’s auditors did not report on DHS’s compliance with FFMIA. However, the auditors identified and reported deficiencies that related to the aforementioned three requirements of FFMIA. Federal agencies have recognized the seriousness of their financial systems weaknesses and have efforts under way to implement or upgrade their financial systems to alleviate long-standing problems, but some of these efforts face significant challenges. For example, as we testified in May 2004, we have identified several issues related to NASA’s financial management systems modernization effort: (1) NASA did not involve key stakeholders in the design and implementation of the agency’s new financial management system’s core financial module; (2) NASA did not follow key best practices for acquiring and implementing this system; and (3) the new system lacks key external reporting capabilities for property and budgetary data. In addition, as I will discuss later in this testimony, DOD faces major challenges in its efforts to develop a business enterprise architecture. We recognize that it will take time, investment, and sustained emphasis to improve agencies’ underlying financial management systems. As I mentioned earlier, for the past 7 fiscal years, the federal government has been required to prepare, and have audited, consolidated financial statements. Successfully meeting this requirement is tightly linked to the requirements for the CFO Act agencies to also have audited financial statements. This has stimulated extensive cooperative efforts and considerable attention by agency chief financial officers, inspectors general, Treasury and OMB officials, and GAO. With the benefit of the past 7 years’ experience by the federal government in having the required financial statements subjected to audit, more intensified attention will be needed on the most serious obstacles to achieving an opinion on the U.S. government’s consolidated financial statements. 
Three major impediments to an opinion on the consolidated financial statements are (1) serious financial management problems at DOD, (2) the federal government's inability to fully account for and reconcile transactions between federal government entities, and (3) the federal government's ineffective process for preparing the consolidated financial statements. Essential to achieving an opinion on the consolidated financial statements is resolution of the serious financial management problems at DOD, which we have designated as high risk since 1995. In accordance with section 1008 of the National Defense Authorization Act for Fiscal Year 2002, DOD reported that for fiscal year 2003, it was not able to provide adequate evidence supporting material amounts in its financial statements. DOD stated that it is unable to comply with applicable financial reporting requirements for (1) property, plant, and equipment (PP&E); (2) inventory and operating materials and supplies; (3) environmental liabilities; (4) intragovernmental eliminations and related accounting adjustments; (5) disbursement activity; and (6) cost accounting by responsibility segment. Although DOD represented that the military retirement health care liability data had improved for fiscal year 2003, the cost of direct health care provided by DOD-managed military treatment facilities accounted for a significant portion of DOD's total recorded health care liability and was based on estimates for which adequate support was not available. DOD continues to confront pervasive decades-old financial management and business problems related to its systems, processes (including internal controls), and people (human capital). These problems preclude the department from producing accurate, reliable, and timely information to make sound decisions and to accurately report on its billions of dollars of assets. DOD's long-standing business management systems problems adversely affect the economy, effectiveness, and efficiency of its operations and have resulted in a lack of adequate accountability across all major business areas. To date, none of the military services or major DOD components has passed the test of an independent financial audit because of pervasive weaknesses in financial management systems, operations, and controls. Additionally, the department's stovepiped, duplicative, and nonintegrated systems contribute to its vulnerability to fraud, waste, and abuse. In this regard, we have recently testified on problems related to military pay and unused airline tickets. Vulnerability to fraud, waste, and abuse continues despite substantial systems investment. For fiscal year 2004, DOD requested approximately $19 billion to operate, maintain, and modernize its reported 2,274 business systems. The duplicative and stovepiped nature of DOD's systems environment is illustrated by the numerous systems it has in the same functional areas. For example, DOD reported that it has 565 systems to support logistics functions. These systems are not integrated and thus have multiple points of data entry, which can result in significant data integrity problems. Further, DOD continues to lack effective management oversight and control over business systems modernization investments. The actual funding continues to be distributed among the military services and defense agencies, thereby enabling the numerous DOD components to continue to develop stovepiped, parochial solutions to the department's long-standing financial management and business operation challenges.
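The systems-investment figures above lend themselves to a rough back-of-envelope check. The Python sketch below derives an average annual request per reported business system and the share of systems devoted to logistics; these averages are illustrative inferences from the rounded figures in the testimony, not amounts DOD itself reported.

```python
# Back-of-envelope averages from the rounded figures cited in the testimony.
# The derived numbers are illustrative, not DOD-reported amounts.

requested_fy2004 = 19_000_000_000   # ~$19 billion requested for business systems
reported_systems = 2_274            # business systems DOD reported
logistics_systems = 565             # systems reported in the logistics area

avg_per_system = requested_fy2004 / reported_systems
logistics_share = 100 * logistics_systems / reported_systems

print(f"Average request per business system: ${avg_per_system / 1e6:.1f} million")  # ~$8.4 million
print(f"Share of systems supporting logistics: {logistics_share:.0f}%")             # ~25%
```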
Lacking a departmentwide focus and effective management oversight and control of business systems investment, DOD continues to invest billions of dollars in systems that fail to provide integrated corporate solutions to its long-standing business operations problems. Over the past 14 years, DOD has initiated several broad-based reform efforts intended to fundamentally reform its business operations and improve the reliability of information used in the decision-making process. While these initiatives produced some incremental improvements, they did not result in the fundamental reform necessary to resolve the department's long-standing management challenges. Secretary Rumsfeld has made business transformation a priority. For example, through its Business Management Modernization Program, DOD is continuing its efforts to develop and implement a business enterprise architecture and establish effective management and control over its business system modernization investments. However, we recently reported that after about 3 years of effort and over $203 million in obligations, we have not seen significant change in the content of DOD's architecture or in DOD's approach to investing billions of dollars annually in existing and new systems. Few actions have been taken to address the recommendations we made in our previous reports, which were aimed at improving DOD's plans for developing the next version of the architecture and implementing the institutional means for selecting and controlling both planned and ongoing business systems investments. To date, DOD has not addressed 22 of our 24 recommendations. Currently, DOD has various initiatives under way to support its efforts to obtain an unqualified audit opinion on its fiscal year 2007 financial statements. Because there are not yet detailed plans guiding these activities, however, it is unclear whether and how they support each other and whether they support this goal. Therefore, the feasibility of meeting this goal is as yet unknown. The seriousness of DOD's business management weaknesses underscores the importance of no longer condoning "status quo" business operations at DOD. Cultural resistance to change, military service parochialism, and stovepiped operations have all contributed significantly to the failure of previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization and that many of these practices were developed piecemeal and evolved to accommodate different organizations, each with its own policies and procedures. To improve the likelihood that the department's current business transformation efforts will be successful, we have previously suggested that a chief management official position be created. Previous failed attempts to improve DOD's business operations illustrate the need for sustained involvement of DOD leadership in helping to assure that DOD's financial and overall business process transformation efforts remain a priority. While the Secretary and other key DOD leaders have demonstrated their commitment to the current business transformation efforts, the long-term nature of these efforts requires the development of an executive position capable of providing strong and sustained executive leadership over a number of years and various administrations.
This position would provide the sustained attention essential for addressing key stewardship responsibilities such as strategic planning, performance and financial management, and business systems modernization in an integrated manner. This position could be filled by an individual, appointed by the President and confirmed by the Senate, for a set term of 7 years with the potential for reappointment. Such an individual should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across the department, and potentially across administrations, and to serve as an integrator for the needed business transformation efforts. Further, in a recent report we also suggested that, to improve management oversight, accountability, and control of the department's business systems funding, Congress may wish to consider providing the funds to operate, maintain, and modernize DOD's business systems to the functional areas, known as domains, rather than to the military services and the defense agencies. Currently, each military service and defense agency receives its own funding and is largely autonomous in deciding how to spend these funds, thereby hindering the development of broad-based, integrated corporate system solutions to common DOD-wide problems. We believe it is critical that funds for DOD business systems be appropriated to the domain owners in order to provide for accountability and the ability to prevent the continued parochial approach to systems investment that exists today. The domains would establish a hierarchy of investment review boards with DOD-wide representation, including the military services and defense agencies. These boards would be responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for the domain portfolio, including ensuring that investments were consistent with DOD's business enterprise architecture. DOD still has a long way to go, and top leadership must continue to stress the importance of achieving lasting improvement that truly transforms the department's business systems and operations. Only through major transformation, which will take time and sustained leadership from top management, will DOD be able to meet the mandate of the CFO Act and achieve the President's Management Agenda goal of improved financial performance. OMB and Treasury require the CFOs of 35 executive departments and agencies, including the 23 CFO Act agencies, to reconcile selected intragovernmental activity and balances with their "trading partners" and to report to Treasury, the agency's inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts. A substantial number of the agencies continue to be unable to fully perform reconciliations of intragovernmental activity and balances with their trading partners, citing reasons such as (1) trading partners not providing needed data; (2) limitations and incompatibility of agency and trading partner information systems; and (3) lack of human resources. Amounts reported for federal agency trading partners for certain intragovernmental accounts were significantly out of balance in the aggregate for both fiscal years 2003 and 2002.
We reported in previous years that the heart of the intragovernmental transactions issue was that the federal government lacked clearly articulated business rules for these transactions so that they would be handled consistently by agencies. In this regard, at the start of fiscal year 2003, OMB issued business rules to transform and standardize intragovernmental ordering and billing. To address long-standing problems with intragovernmental exchange transactions between federal agencies, Treasury provided federal agencies with quarterly detailed trading partner information during fiscal year 2003 to help them better perform their trading partner reconciliations. In addition, the federal government began a three-phase Intragovernmental Transactions e-gov project to define a governmentwide data architecture and provide a single source of detailed trading partner data. On April 20, 2004, however, OMB announced that it was appropriate to pause and evaluate the results of the project to date. OMB estimated that the evaluation will take 120 days and will be followed by a phased deployment. Resolving the intragovernmental transactions problem remains a difficult challenge and will require a commitment by the CFO Act agencies and continued strong leadership by OMB. The federal government did not have adequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with GAAP. In this regard, Treasury is developing a new system and procedures to prepare the consolidated financial statements beginning with the statements for fiscal year 2004. Treasury officials have stated that these actions are intended to, among other things, directly link information from federal agencies’ audited financial statements to amounts reported in the consolidated financial statements and resolve many of the issues we identified in the process for preparing the consolidated financial statements. As part of our fiscal year 2004 audit, we will evaluate the new system and procedures as they are fully developed and implemented and determine the extent of linkage accomplished for the fiscal year 2004 financial statements. Resolving issues surrounding preparing the consolidated financial statements has been a significant challenge and will require continued strong leadership by Treasury management. Our nation’s large and growing long-term fiscal imbalance, which is driven largely by known demographic trends and rising health care costs—coupled with new homeland security and defense commitments—serves to sharpen the need to fundamentally review and re-examine basic federal entitlements, as well as other mandatory and discretionary spending, and tax policies. As we look ahead, our nation faces an unprecedented demographic challenge with significant implications, among them budgetary and economic. Between now and 2035, the number of people who are 65 years old or over will double, driving federal spending on the elderly to a larger and ultimately unsustainable share of the federal budget. As a result, tough choices will be required to address the resulting structural imbalance. GAO prepares long-term budget simulations that seek to illustrate the likely fiscal consequences of the coming demographics and rising health care costs. Our latest long-term budget simulations reinforce the need for change in the major cost drivers—Social Security and health care programs. 
As shown in figure 2, by 2040, absent reform of these entitlement programs, projected federal revenues may be adequate to pay little beyond interest on the debt. Current financial reporting does not clearly and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending, and it therefore provides an unrealistic and even misleading picture of the federal government's overall performance and financial condition. Few agencies adequately show the results they are getting with the taxpayer dollars they spend. In addition, too many significant federal government commitments and obligations, such as Social Security and Medicare, are not fully and consistently disclosed in the federal government's consolidated financial statements and budget, and current federal financial reporting standards do not require such disclosure. Figure 3 shows some selected fiscal exposures. These exposures span a spectrum from explicit liabilities that are shown on the consolidated financial statements to implicit promises embedded in current policy or public expectations. These liabilities, commitments, and promises have created a fiscal imbalance that will put unprecedented strains on the nation's spending and tax policies. Although economic growth can help, the projected fiscal gap is now so large that the federal government will not be able to simply grow its way out of the problem. Tough choices are inevitable. Particularly troubling are the many big-ticket items that taxpayers will eventually have to deal with. The federal government has pledged its support to a long list of programs and activities, including pension and health care benefits for senior citizens, medical care for veterans, and contingencies associated with various government-sponsored entities, whose claims on future spending total trillions of dollars. Despite their serious implications for future budgets, tax burdens, and spending flexibilities, these unfunded commitments get short shrift in the federal government's current financial statements and in budgetary deliberations. The federal government's gross debt as of September 2003 was about $7 trillion, or about $24,000 for every man, woman, and child in this country today. But that number excludes many big-ticket items, including the gap between promised and funded Social Security and Medicare benefits, veterans' health care, and a range of other commitments and contingencies. If these items are factored in, the total burden in current dollars is at least $42 trillion. To put that number into perspective, $42 trillion is 18 times the current federal budget, or 3.5 times our current annual gross domestic product. One of the biggest contributors to this total bill will be the new Medicare prescription drug benefit, whose estimated current-dollar cost over the next 75 years is more than $8 trillion. Stated differently, the current total burden for every American is more than $140,000—and every day that burden is growing larger. GAO's long-term budget simulations show that by 2040, the federal government may have to cut federal spending by 60 percent or raise taxes to about 2.5 times today's level to pay for the mounting cost of the federal government's current unfunded commitments. Either would be devastating. Proper accounting and reporting practices are essential in the public sector. After all, the U.S.
government is the largest, most diverse, most complex, and arguably the most important entity on earth today. Its services—homeland security, national defense, Social Security, mail delivery, and food inspection, to name a few—directly affect the well-being of almost every American. But sound decisions on the future direction of vital federal government programs and policies are made more difficult without timely, accurate, and useful financial and performance information. Fortunately, we are starting to see efforts to address the shortcomings in federal financial reporting. The President’s Management Agenda, which closely reflects GAO’s list of high-risk government programs, is bringing attention to troubled areas across the federal government and is taking steps to better assess the results that programs are getting with the resources they are given. The Federal Accounting Standards Advisory Board is also making progress on many key financial reporting issues. In addition to these efforts, we have published frameworks for analyzing various Social Security reform proposals and for analyzing health care reform proposals. We have also helped to create a consortium of “good government” organizations to stimulate the development of a set of key national indicators to assess the United States’ overall position and progress over time and in comparison to those of other industrialized nations. Budget experts at the Congressional Budget Office (CBO) and GAO continue to encourage reforms to the federal budget process to better reflect the federal government’s commitments and signal emerging problems. Among other things, we have recommended that the federal government issue an annual report on major fiscal exposures. The President’s fiscal year 2005 budget also proposes that future President’s budgets report on any enacted legislation in the past year that worsens the unfunded obligations of programs with long-term actuarial projections, with CBO to make a similar report. Such reporting could be a good starting point. Although these are positive initial steps, much more must be done given the magnitude of the federal government’s fiscal challenge. A top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority is long overdue. As I have spoken about in the past, the federal government needs a three-pronged approach to (1) restructure existing entitlement programs, (2) reexamine the base of discretionary and other spending, and (3) review and revise the federal government’s tax policy, including major tax preferences, and enforcement programs. New accounting and reporting approaches, budget control mechanisms, and metrics are needed for considering and measuring the impact of spending and tax policies and decisions over the long term. Our report on the U.S. government’s consolidated financial statements for fiscal years 2003 and 2002 highlights the need to continue addressing the federal government’s serious financial management weaknesses. With the significantly accelerated financial reporting time frame for fiscal year 2004 and beyond, it is essential that the federal government move away from the extraordinary efforts many federal agencies continue to make to prepare financial statements and toward giving prominence to strengthening the federal government’s financial systems, reporting, and controls. 
This is the only way the federal government can meet the end goal of making timely, accurate, and useful financial and performance information routinely available to the Congress, other policymakers, and the American public. The requirement for timely, accurate, and useful financial and performance management information is greater than ever as our nation faces major long-term fiscal challenges that will require tough choices in setting priorities and linking resources to results. The Congress and the President face the challenge of sorting out the many claims on the federal budget without the budget enforcement mechanisms or fiscal benchmarks that guided the federal government through the previous years of deficit reduction into the brief period of surplus. While a number of steps will be necessary to address this challenge, truth and transparency in federal government reporting are essential elements of any attempt to address the nation's long-term fiscal challenges. The fiscal risks I mentioned earlier can be managed only if they are properly accounted for and publicly disclosed. A crucial first step will be to face facts and identify the significant commitments facing the federal government. If citizens and federal government officials come to understand various fiscal exposures and their potential claims on future budgets, they are more likely to insist on prudent policy choices today and sensible levels of fiscal risk in the future. In addition, new budget control mechanisms will be required, along with effective approaches to successfully engage in a fundamental review, reassessment, and reprioritization of the base of federal government programs and policies that I have recommended previously. Public officials will have more incentive to make difficult but necessary choices if the public has the facts and comes to support serious and sustained action to address the nation's fiscal challenges. Without meaningful public debate, however, real and lasting change is unlikely. Clearly, the sooner action is taken, the easier it will be to turn things around. I believe that nothing less than a national education campaign and outreach effort is needed to help the public understand the nature and magnitude of the long-term financial challenge facing this nation. An informed electorate is essential for a healthy democracy. Members of Generations X and Y especially need to become active in this discussion because they and their children will bear the heaviest burden if policymakers fail to act in a timely and responsible manner. We at GAO are committed to doing our part, but others also need to step up to the plate. By working together, I believe we can make a meaningful difference for our nation, fellow citizens, and future generations of Americans. In closing, Mr. Chairman, I want to reiterate the value of sustained congressional interest in these issues, as demonstrated by the Congress's annual hearings on the results of our audit of the consolidated financial statements and of audits of certain federal agencies' financial statements. It will also be key that the appropriations, budget, authorizing, and oversight committees hold agency top leadership accountable for resolving these problems and that they support improvement efforts. For further information regarding this testimony, please contact Jeffrey C. Steinhoff, Managing Director, or Gary T. Engel, Director, Financial Management and Assurance, at (202) 512-2600.
The federal government did not maintain adequate systems or have sufficient, reliable evidence to support information reported in the consolidated financial statements of the U.S. government, as described below. These material deficiencies contributed to our disclaimer of opinion on the consolidated financial statements and also constitute material weaknesses in internal control. In addition to the material deficiencies noted above, we found four other material weaknesses in internal control as of September 30, 2003: (1) several federal agencies continue to have deficiencies in the processes and procedures used to estimate the costs of their lending programs and value their related loans receivable; (2) most federal agencies have not reported the magnitude of improper payments in their programs and activities; (3) federal agencies have not yet fully institutionalized comprehensive security management programs; and (4) material internal control weaknesses and systems deficiencies continue to affect the federal government’s ability to effectively manage its tax collection activities. In general, federal agencies continue to make progress in reducing the number of material weaknesses and reportable conditions related to their lending activities. However, significant deficiencies in the processes and procedures used to estimate the costs of certain lending programs and value the related loans receivable still remain. These deficiencies continue to adversely affect the government’s ability to support annual budget requests for these programs, make future budgetary decisions, manage program costs, and measure the performance of lending activities. The most notable deficiencies existed at the Small Business Administration (SBA), which, while improved from last year, continues to have a material weakness related to this area. For example, SBA did not adequately document its estimation methodologies, lacked the management controls necessary to ensure that appropriate estimates were prepared and reported based on complete and accurate data, and could not fully support the reasonableness of the costs of its lending programs and valuations of its loan portfolio. We are currently assessing SBA’s actions to resolve certain of these deficiencies related to accounting for previous loan sales and cost estimates for disaster loans. Across the federal government, improper payments occur in a variety of programs and activities, including those related to health care, contract management, federal financial assistance, and tax refunds. While complete information on the magnitude of improper payments is not yet available, based on available data, OMB has estimated that improper payments exceed $35 billion annually. Many improper payments occur in federal programs that are administered by entities other than the federal government, such as states. Improper payments often result from a lack of or an inadequate system of internal controls. Although the President’s Management Agenda includes an initiative to reduce improper payments, most federal agencies have not reported the magnitude of improper payments in their programs and activities. The Improper Payments Information Act of 2002 provides for federal agencies to estimate and report on their improper payments. 
It requires federal agencies to (1) annually review programs and activities that they administer to identify those that may be susceptible to significant improper payments, (2) estimate improper payments in susceptible programs and activities, and (3) provide reports to the Congress that discuss the causes of improper payments identified and the status of actions to reduce them. In accordance with the legislation, OMB issued guidance for federal agencies' use in implementing the act. Among other things, the guidance requires federal agencies to report on their improper payment-related activities in the Management's Discussion and Analysis section of their annual Performance and Accountability Reports (PAR). While the act does not require such reporting by all federal agencies until fiscal year 2004, OMB required 14 CFO Act agencies to report improper payment information for 44 programs in their fiscal year 2003 PARs. Our preliminary review of the PARs found that 12 of the 14 agencies reported improper payment amounts for 27 of the 44 programs identified in the guidance. We also found that, for the programs where improper payments were identified, the reports often contained information on the causes of the improper payments but little information that addressed the other reporting requirements cited in the legislation. Although progress has been made, serious and widespread information security weaknesses continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. GAO has reported information security as a high-risk area across government since February 1997. Such information security weaknesses could result in compromising the reliability and availability of data that are recorded in or transmitted by federal financial management systems. A primary reason for these weaknesses is that federal agencies have not yet fully institutionalized comprehensive security management programs, which are critical to identifying information security weaknesses, resolving information security problems, and managing information security risks on an ongoing basis. The Congress has shown continuing interest in addressing these risks, as evidenced by recent hearings on information security and enactment of the Federal Information Security Management Act of 2002 and the Cyber Security Research and Development Act. In addition, the administration has taken important actions to improve information security, such as integrating information security into the Executive Branch Management Scorecard. Material internal control weaknesses and systems deficiencies continue to affect the federal government's ability to effectively manage its tax collection activities. Due to errors and delays in recording activity in taxpayer accounts, taxpayers were not always credited for payments made on taxes owed, which could result in undue taxpayer burden. In addition, the federal government did not always follow up on potential unreported or underreported taxes and did not always pursue collection efforts against taxpayers owing taxes to the federal government.
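The reporting-coverage figures for the Improper Payments Information Act are simple fractions of the agencies and programs OMB identified. The short Python sketch below recomputes them using only the counts cited above; it is an illustrative check, not an analysis of agency-reported data.

```python
# Coverage of improper payment reporting in fiscal year 2003 PARs,
# computed from the counts cited in the testimony.

agencies_required, agencies_reporting = 14, 12
programs_identified, programs_with_estimates = 44, 27

agency_coverage = 100 * agencies_reporting / agencies_required
program_coverage = 100 * programs_with_estimates / programs_identified

print(f"Agencies reporting improper payment amounts: {agency_coverage:.0f}%")   # ~86%
print(f"Programs with reported amounts:              {program_coverage:.0f}%")  # ~61%
```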
Primary Effects on the Fiscal Years 2003 and 2002 Consolidated Financial Statements and the Management of Government Operations

Without accurate asset information, the federal government does not fully know the assets it owns and their location and condition and cannot effectively (1) safeguard assets from physical deterioration, theft, or loss, (2) account for acquisitions and disposals of such assets, (3) ensure the assets are available for use when needed, (4) prevent unnecessary storage and maintenance costs or purchase of assets already on hand, and (5) determine the full costs of programs that use these assets. Problems in accounting for liabilities affect the determination of the full cost of the federal government's current operations and the extent of its liabilities. Also, improperly stated environmental and disposal liabilities and weak internal control supporting the process for their estimation affect the federal government's ability to determine priorities for cleanup and disposal activities and to allow for appropriate consideration of future budgetary resources needed to carry out these activities. In addition, when disclosures of commitments and contingencies are incomplete or incorrect, reliable information is not available about the extent of the federal government's obligations. Inaccurate cost information affects the federal government's ability to control and reduce costs, assess performance, evaluate programs, and set fees to recover costs where required. Improperly recorded disbursements could result in misstatements in the financial statements and in certain data provided by federal agencies for inclusion in the President's budget concerning obligations and outlays. Problems in accounting for and reconciling intragovernmental activity and balances impair the government's ability to account for billions of dollars of transactions between governmental entities. Until the differences between the total net outlays reported in federal agencies' Statements of Budgetary Resources and the records used by the Department of the Treasury to prepare the Statement of Changes in Cash Balance from Unified Budget and Other Activities are reconciled, the effect that these differences may have on the U.S. government's consolidated financial statements will be unknown. Because the federal government did not have adequate systems, controls, and procedures to prepare its consolidated financial statements, the federal government's ability to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with U.S. generally accepted accounting principles was impaired. Without a systematic measurement of the extent of improper payments, federal agency management cannot determine (1) if improper payment problems exist that require corrective action, (2) mitigation strategies and the appropriate amount of investments to reduce them, and (3) the success of efforts implemented to reduce improper payments. Weaknesses in the processes and procedures for estimating credit program costs affect the government's ability to support annual budget requests for these programs, make future budgetary decisions, manage program costs, and measure the performance of lending activities.
Information security weaknesses over computerized operations are placing enormous amounts of federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption.

Weaknesses in controls over tax collection activities continue to affect the federal government’s ability to efficiently and effectively account for and collect revenue. Additionally, weaknesses in financial reporting affect the federal government’s ability to make informed decisions about collection efforts. As a result, the federal government is vulnerable to loss of tax revenue and exposed to potentially billions of dollars in losses due to inappropriate refund disbursements.

GAO is required to annually audit the consolidated financial statements of the U.S. government. Proper accounting and reporting practices are essential in the public sector. The U.S. government is the largest, most diverse, most complex, and arguably the most important entity on earth today. Its services--homeland security, national defense, Social Security, mail delivery, and food inspection, to name a few--directly affect the well-being of almost every American. But sound decisions on the future direction of vital federal government programs and policies are made more difficult without timely, accurate, and useful financial and performance information. Until the problems discussed in GAO’s audit report on the U.S.
government’s consolidated financial statements are adequately addressed, they will continue to (1) hamper the federal government’s ability to accurately report a significant portion of its assets, liabilities, and costs; (2) affect the federal government’s ability to accurately measure the full cost as well as the financial and nonfinancial performance of certain programs while effectively managing related operations; and (3) significantly impair the federal government’s ability to adequately safeguard certain significant assets and properly record various transactions.

As in the 6 previous fiscal years, certain material weaknesses in internal control and in selected accounting and reporting practices resulted in conditions that continued to prevent GAO from being able to provide the Congress and American citizens an opinion as to whether the consolidated financial statements of the U.S. government are fairly stated in conformity with U.S. generally accepted accounting principles. Three major impediments to an opinion on the consolidated financial statements continue to be (1) serious financial management problems at DOD, (2) the federal government’s inability to fully account for and reconcile transactions between federal government entities, and (3) the federal government’s ineffective process for preparing the consolidated financial statements. For fiscal year 2003, 20 of 23 Chief Financial Officers (CFO) Act agencies received unqualified opinions, the same number received by these agencies in fiscal year 2002, up from 6 for fiscal year 1996. However, only 3 of the CFO Act agencies had neither a material weakness in internal control, an issue involving compliance with applicable laws and regulations, nor an instance of lack of substantial compliance with Federal Financial Management Improvement Act requirements.

The requirement for timely, accurate, and useful financial and performance management information is greater than ever as our nation faces major long-term fiscal challenges that will require tough choices in setting priorities and linking resources to results. Given the nation’s large and growing long-term fiscal imbalance, which is driven largely by known demographic trends and health care costs, coupled with new homeland security and defense commitments, the status quo is unsustainable. Current financial reporting does not clearly and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending, and it provides an unrealistic and even misleading picture of the federal government’s overall performance and financial condition. In addition, too many significant federal government commitments and obligations, such as Social Security and Medicare, are not adequately addressed in the federal government’s financial statements and budget process, and current federal financial reporting standards do not require such disclosure.

A top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority is long overdue. The federal government needs a three-pronged approach to (1) restructure existing entitlement programs, (2) reexamine the base of discretionary and other spending, and (3) review and revise the federal government’s tax policy and enforcement programs.
New accounting and reporting approaches, budget control mechanisms, and metrics are needed for considering and measuring the impact of spending and tax policies and decisions over the long term.
The U.S. economy depends on the air cargo industry for the delivery of small, time-sensitive packages under 100 pounds, freight of 100 pounds or more, and mail. Air cargo carriers fall into two distinct categories: (1) all-cargo carriers that transport only cargo and (2) passenger carriers that transport cargo as belly freight in passenger aircraft. For the most part, all-cargo carriers can be categorized according to three business models: (1) large carriers, such as United Parcel Service (UPS) and FedEx, which operate large narrow-body and wide-body aircraft under part 121 of federal aviation regulations; (2) feeder carriers, which operate midsize and small aircraft (e.g., Cessna Caravans, Mitsubishi MU-2B-60s) under part 135 or part 121 on regularly scheduled flights in support of large cargo carriers; and (3) ad hoc carriers, which operate small aircraft (e.g., Cessna 401s, Beech Bonanzas) under part 135 and are individually contracted to haul cargo out of smaller airports while not necessarily operating on a regular schedule. Throughout this report, we use the term “small carriers” when referring to both feeder carriers and ad hoc carriers. Some carriers operate under both part 121 and part 135, and one large carrier leases aircraft to small carriers to provide feeder operations. (See fig. 1 for an illustration of a large carrier feeder-ad hoc relationship.) FAA estimated that as of May 6, 2009, the large all-cargo fleet contained 471 narrow-body and 593 wide-body aircraft and the small carrier all-cargo fleet contained 1,515 aircraft.

Several federal transportation agencies play significant roles in air cargo safety. These agencies are FAA, the Department of Transportation’s (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA), and NTSB. Two FAA offices in particular have important responsibility for the oversight of air cargo carriers. First, FAA’s Flight Standards Service oversees cargo carrier operations conducted under parts 121 and 135. For each carrier, Flight Standards assembles a team of inspectors (known as a “certificate management team”) led by principal inspectors who focus on avionics, maintenance, or operations. For large carriers, dedicated teams of inspectors use the risk-based Air Transport Oversight System (ATOS) to carry out their duties. Under ATOS, inspectors develop surveillance plans for each carrier based on data analysis and risk assessment, and adjust the plans periodically in accordance with inspection results. For feeder and ad hoc carriers operating under part 135, inspectors—who unlike ATOS’s dedicated inspection teams may be assigned to multiple air carrier and other certificates—use the National Flight Standards Work Program Guidelines (NPG) to ensure that carriers comply with safety regulations. For NPG, Flight Standards annually identifies a minimum set of required inspections to be undertaken. In addition, individual inspectors determine annual sets of planned inspections based on their knowledge of and experience with the carriers they oversee. Second, FAA’s Office of Security and Hazardous Materials enforces hazardous materials (hazmat) safety policies and conducts annual inspections of cargo and passenger carriers. Inspectors in this office work exclusively on issues related to compliance with hazmat requirements. When violations of statutory and regulatory requirements are identified, FAA has a variety of enforcement tools at its disposal with which to respond, including administrative and legal sanctions.
In addition, PHMSA ensures the safe transport of hazmat by air and other modes. PHMSA promulgates regulations concerning the types and amounts of hazmat that can or cannot be transported by air—often differentiating between what hazmat can be carried on all-cargo versus passenger aircraft—and maintains its own database of hazmat incidents as well as a portal that pulls together hazmat data from other databases. NTSB investigates and determines a probable cause for each U.S. aviation accident, which is defined as “an occurrence associated with the operation of an aircraft which takes place between the time any person boards the aircraft with the intention of flight and all such persons have disembarked, and in which any person suffers death or serious injury, or in which the aircraft receives substantial damage.” NTSB makes transportation safety recommendations to federal, state, and local agencies and private organizations to reduce the likelihood of recurrences of transportation accidents but has no authority to enforce its recommendations. NTSB also conducts annual reviews of aircraft accident data and determines U.S. aviation accident and fatal accident rates. NTSB periodically holds public hearings and forums and issues special studies on various transportation safety topics.

From 1997 through 2008, air cargo accidents and fatal accidents each declined by about two-thirds. Despite this decline, small cargo carriers consistently experienced the largest shares of accidents and, especially, fatal accidents. Annual air cargo accidents decreased 63 percent, from 62 in 1997 to 23 in 2008. Average annual air cargo accidents declined from the first to the second half of our review period, from an average of 45 accidents per year from 1997 through 2002 to an average of 28 accidents per year from 2003 through 2008. Fatal air cargo accidents also decreased over our 12-year review period, falling from 12 in 1997 to 4 in 2008. In addition, from the first to the second half of our review period, fatal cargo accidents dropped from an average of 10 per year to an average of 6 per year. The fluctuation in annual air cargo accidents could be the result of a number of factors, including the general decline in aviation activity after September 11, 2001, and a fluctuation in overall U.S. aviation accidents. (See fig. 2.)

Ad hoc carriers experienced the largest decline in accidents, with 28 fewer accidents—dropping from 36 to 8—followed by large carriers, with 9 fewer accidents—dropping from 14 to 5 from 1997 through 2008. Feeder carrier accidents fluctuated during this period, reaching a high of 17 accidents in 2003 compared to a low of 7 accidents both in 1999 and in 2007. We do not know why the spike occurred in 2003. Large carriers had 3 fatal accidents during our review period, which occurred in 1997, 2000, and 2004. (See fig. 3.) Without actual data on the number of flight hours, however, we cannot determine an accident rate, and thus we do not know if the decline in ad hoc carrier accidents represents a better safety record for that sector of the air cargo industry. The small carriers (feeders and ad hoc) in our review experienced 79 percent of the air cargo accidents. Ad hoc carriers accounted for about half of accidents while the feeders were involved in over a quarter of them. (See fig. 4.) Feeder and ad hoc carriers averaged 29 accidents per year while large carriers averaged 8 accidents each year.
Small air cargo carriers accounted for 96 percent of the fatal air cargo accidents that occurred from 1997 through 2008. Ad hoc carriers accounted for the majority of fatal accidents and feeders for over one-third of fatal accidents. (See fig. 5.) Together, feeder and ad hoc carriers averaged 8 fatal accidents per year while large cargo carriers experienced a total of 3 fatal accidents from 1997 through 2008.

The accident rate per departure for large air cargo carriers has fluctuated over the last 25 years, but the overall trend has been downward, and in 2007 was roughly the same as for passenger carriers. It is possible to calculate these rates because FAA requires those carriers to report operational data (e.g., flight hours or departures). However, FAA does not require small on-demand carriers operating under part 135 to report operational information, and the majority of feeder and ad hoc cargo carriers fall into this group. In 2003, NTSB recommended that FAA collect this type of data and, according to NTSB, FAA is still reviewing the costs and benefits as well as options for collecting and processing the data. However, the lack of data about the flight hours for small on-demand carriers precludes calculation of the industry’s current accident or fatality rates or changes in the rates over time, making it difficult to determine whether the industry is becoming more or less prone to accidents. Instead, FAA relies on an annual survey of aircraft owners to form the basis for estimates of small carrier operations, but this survey does not distinguish between passenger and cargo operations, making it impossible to use the survey estimates to calculate cargo or passenger accident rates for on-demand operations or for the cargo industry as a whole.

Even though operational data are not available for small air cargo carriers, their fatal accident rates would exceed those of large carriers for the latter part of our review period. From 2005 through 2008, there were no fatal accidents among large cargo carriers, so their fatal accident rate for those years was zero. However, there were 17 fatal feeder and ad hoc accidents over the same period, meaning that their fatal accident rate, if it could be determined, would be higher than zero—though it is unclear how much higher. This logic would not hold for air cargo accidents in general because large carriers had accidents in each year from 1997 through 2008, though they had fewer accidents than the feeder and ad hoc carriers. The lack of data makes it difficult for FAA and industry to target further improvements to the areas with the highest risk.

Our review of NTSB and FAA air cargo accident and incident data as well as our interviews with industry officials and analyses of industry documents revealed that pilot performance was a prominent factor in air cargo accidents. Additionally, we concluded that accumulated risk, challenging operating conditions in Alaska, and undeclared hazmat were also prominent contributors to air cargo accidents. Our review of NTSB reports for 417 completed air cargo accident investigations found that pilot performance was cited as the probable cause for about 59 percent of them. Specifically, we found that NTSB cited pilot performance as the probable cause for about 53 percent of non-fatal and about 80 percent of fatal air cargo accidents. (See fig. 6.) Examples of pilot performance issues in these accidents included the pilot’s failure to maintain control of the aircraft or to execute the appropriate procedure.
Our review determined that the second most prominent cause of air cargo accidents was some type of equipment failure or malfunction. Pilots of small cargo aircraft have fewer human and other resources available to them to help avoid mistakes or recover from unexpected circumstances. Typically, there is no second pilot to share in the pilot’s many duties and help respond to emergencies. Eighty-one percent of the fatal air cargo accidents from 1997 through 2008 were single-pilot flights. The lack of a second pilot coupled with the many duties of a single pilot also raises the issue of pilot fatigue. Although NTSB indicated fatigue as a contributing factor—not a probable cause—in just 4 of the 443 accidents in our data, 12 of 27 experts we surveyed ranked pilot fatigue as one of the three most serious challenges to safe air cargo operations. The experts’ view is not necessarily at odds with the accident record, to the extent that concern about pilot fatigue has led to vigilance in identifying and addressing fatigue issues.

Further compounding the lack of pilot resources, cargo aircraft operated under part 135 are not required to have on-board safety technology such as a traffic collision avoidance system, a terrain awareness and warning system, or an autopilot, which could aid a single pilot in monitoring the environment or responding to changing weather conditions. Most of these systems are required for small passenger aircraft that also operate under part 135. Additionally, small cargo aircraft may fly into airports where FAA does not provide air traffic control services at all hours and the airports offer fewer services than might be required for passenger operations. For example, at the Bethel, Alaska, airport—the transportation hub for the remote villages in the area and the third-busiest airport in Alaska—FAA provides air traffic control services from 7 a.m. to 8 p.m. from November to March and 2 hours later in other months, and the airport clears its runway of snow and staffs its aircraft rescue and fire-fighting equipment only during operations of passenger aircraft with more than 30 passenger seats.

We analyzed NTSB reports of the 93 fatal air cargo accidents that occurred from 1997 through 2008 using FAA’s Flight Risk Assessment Tool and identified three or more risk factors in 63 of the accidents and four or more risk factors in 41 accidents. FAA’s tool is located in appendix III and includes 38 risk factors—in the areas of pilot qualifications and experience, operating environment, and equipment—each with an assigned value ranging from 2 to 5, with 5 indicating the highest risk. While we do not know how the presence of these risk factors differs from their occurrence during normal operations, the experts told us that the unrecognized accumulation of multiple risk factors can create a potentially dangerous situation. One 1997 fatal accident, which NTSB attributed to the pilot’s disregard of the preflight weather briefing for severe weather, involved six risk factors. The pilot had not flown a minimum number of hours during the previous 90 days, had not accumulated a minimum amount of experience flying the aircraft type involved in the accident, and was flying solo. Additionally, the pilot encountered severe turbulence and icing during the night flight. Table 1 lists the five most common risk factors we identified in the 93 fatal air cargo flights.
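To make the accumulated-risk concept concrete, the following sketch shows how a simple scoring checklist of this kind can work. It is a hypothetical illustration in Python: the factor names, point values, and review threshold are assumptions chosen for the example, not values taken from FAA’s Flight Risk Assessment Tool, which weights 38 factors with values from 2 to 5.

```python
# Hypothetical sketch of an accumulated-risk checklist, loosely modeled on the
# idea behind flight risk assessment tools: each applicable factor adds a
# weighted score, and the total prompts mitigation or a go/no-go decision.
# Factor names, weights, and the threshold below are illustrative assumptions.

RISK_FACTORS = {
    "single_pilot": 3,
    "night_flight": 3,
    "low_recent_flight_hours": 4,
    "low_time_in_aircraft_type": 4,
    "forecast_icing_or_turbulence": 5,
    "uncontrolled_destination_airport": 2,
}

REVIEW_THRESHOLD = 10  # assumed cutoff for requiring mitigation or delay


def assess_flight(applicable_factors):
    """Sum the weights of the factors present on a planned flight."""
    score = sum(RISK_FACTORS[f] for f in applicable_factors)
    decision = "mitigate or delay" if score >= REVIEW_THRESHOLD else "proceed"
    return score, decision


if __name__ == "__main__":
    # Example mirroring the 1997 accident described above: a solo night flight
    # by a pilot with limited recent experience into forecast severe weather.
    factors = [
        "single_pilot",
        "night_flight",
        "low_recent_flight_hours",
        "low_time_in_aircraft_type",
        "forecast_icing_or_turbulence",
    ]
    score, decision = assess_flight(factors)
    print(f"Accumulated risk score: {score} -> {decision}")
```

In this illustration, the combination of factors present in the 1997 accident described above would push the total well past the assumed threshold, prompting mitigation or a delay before departure.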
With 18 fatal air cargo accidents from 1997 through 2008, Alaska led all states in this statistic because aviation operations in that state face several unique challenges. Alaska is more dependent on aviation for the transport of goods and people than other states because it lacks a comprehensive road system. Less than 10 percent of the state is accessible by road. Therefore, goods must be transported to remote villages via air or barge, and barge transport is not an option during the winter months. These factors make Alaska highly dependent on cargo aircraft, which often fly into poorly maintained airports that often do not meet FAA standards. Consequently, most of the accidents in Alaska involved small aircraft. Alaska is also subject to unusual weather conditions. Taken together, we believe these challenges render Alaska more susceptible to aviation accidents and fatal accidents than other states. Of the 27 experts we surveyed, 5 ranked operating conditions in Alaska as one of the top three challenges to the safe operation of cargo flights; in addition, 12 experts in our panel indicated that Alaskan operating conditions do pose at least moderate challenges to safety, whereas 2 experts (a pilot of large aircraft and a government official) said these conditions were not a challenge. Seven experts said they did not have enough specific knowledge to judge the degree of challenge that Alaskan operating conditions pose to safety.

Very few of the cargo accidents occurring from year to year were conclusively caused by hazmat. However, 11 experts on our panel ranked undeclared hazmat—materials not noted as hazardous in shipping documents and/or not labeled as such on their packaging—among the greatest challenges to safe cargo operations, second only to pilot fatigue. According to our review of the NTSB accident data, only three cargo accidents involved hazmat. They occurred in 1997, 1998, and 2006 to large carriers, and none resulted in fatalities. The 2006 accident resulted in an NTSB hearing and recommendations that we discuss later in this report. The problem of undeclared hazmat was cited primarily by government and large carrier experts in our survey. Specifically, three of the four government experts and two of the three large carrier experts cited it as the most serious challenge to air cargo safety. These opinions may stem from the previously cited fires and the relative rarity of a destroyed aircraft among large carriers, as well as government concerns about the transport of lithium batteries on aircraft. FAA hazmat officials told us that undeclared shipments of hazardous materials represent the biggest challenge they face and that lithium batteries are the most challenging type of hazmat in air transportation. We reported in January 2003 that FAA, in the early 1990s, identified a number of incidents associated with batteries, particularly lithium batteries, aboard aircraft in which the batteries caused fires, smoke, or extreme heat. In response to these and other concerns, DOT took a number of actions designed to strengthen the regulations for the transportation of lithium batteries. In January 2008, NTSB noted that lithium batteries had been involved in at least 9 aviation incidents, and both primary and secondary lithium batteries are regulated as hazardous materials for the purposes of transportation.
In December 2007, NTSB made six recommendations to PHMSA following a UPS aircraft fire at Philadelphia International Airport in February 2006, in which a number of secondary lithium batteries were found in the accident debris. The recommendations included requirements for transporting primary lithium batteries in fire-resistant containers and stowing cargo containing secondary lithium batteries in crew-accessible locations so that any fire hazards can be quickly addressed. These recommendations remain open with acceptable responses from PHMSA because they have not yet been implemented, but according to NTSB, actions are planned that, if satisfactorily completed, may comply with the safety recommendations. FAA and PHMSA have embarked on a lithium battery action plan, which aims to reduce the risk associated with the transport of batteries on aircraft by passengers and as cargo. The primary focus of this plan is all types of lithium batteries. According to DOT, PHMSA and FAA have also initiated a rulemaking project to consider additional measures to enhance the safety of lithium battery shipments such as packaging, hazard communication, and stowage requirements. PHMSA plans to publish a notice of proposed rulemaking by December 2009. FAA, the Air Line Pilots Association, and PHMSA also issued safety alerts or advisories in 2007 that addressed smoke and fire hazards, recommended crew actions in the event of a battery fire, noted the availability of guidance for the safe transport of batteries and battery-powered devices on board aircraft, and provided information on proper packing and handling procedures for these batteries.

Although our analysis for this study included 443 air cargo accidents, cargo carriers were involved in more than twice as many incidents during the first 11 years of our review period, and FAA and others recognize that incidents are potential precursors to more serious accidents. In an analysis of air cargo data for 1997 through 2007 from FAA’s Accident/Incident Data System, we identified over 900 air cargo incidents. These incidents covered a broad set of events, such as an engine losing power at 7,000 feet; a cargo door opening in flight; and an aircraft engine coming into contact with a fuel truck. FAA does not use incident data to identify precursors to aviation accidents, because the data were not developed for this purpose. However, the agency is moving toward using data to better identify precursors to accidents, but until it does so, it may be missing opportunities to make air cargo operations and aviation, in general, safer. For example, from 2000 to 2007, one ad hoc cargo carrier was listed in FAA’s database 10 times with incidents that resulted in varying degrees of damage to its aircraft, from none to substantial, and in NTSB’s database with one non-fatal accident. This carrier subsequently experienced a fatal accident in 2008. Had this carrier’s incident data been used to identify accident precursors, inspectors might have been alerted to underlying problems that could have been addressed, potentially preventing the subsequent fatal accident. In addition, NTSB’s accident database does not track incidents in a way that would allow empirical analysis. The notion that incidents can be precursors to more serious accidents is accepted both inside and outside aviation.
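As an illustration of the kind of empirical screening such precursor analysis implies, the sketch below flags carriers with repeated incidents for closer review. It is a hypothetical Python example: the records, field names, and flagging threshold are assumptions for illustration and do not reflect the structure of FAA’s Accident/Incident Data System or any actual carrier data.

```python
# Hypothetical sketch of screening incident records for potential accident
# precursors: flag carriers whose incident counts over a review window meet a
# threshold so inspectors can look for underlying problems. The records,
# fields, and threshold are illustrative assumptions, not FAA data or policy.

from collections import Counter

incidents = [
    {"carrier": "Carrier A", "year": 2005, "damage": "minor"},
    {"carrier": "Carrier A", "year": 2006, "damage": "none"},
    {"carrier": "Carrier A", "year": 2007, "damage": "substantial"},
    {"carrier": "Carrier B", "year": 2006, "damage": "none"},
]

FLAG_THRESHOLD = 3  # assumed number of incidents that triggers a closer look


def flag_repeat_carriers(records, threshold=FLAG_THRESHOLD):
    """Return carriers whose incident count meets or exceeds the threshold."""
    counts = Counter(rec["carrier"] for rec in records)
    return {carrier: n for carrier, n in counts.items() if n >= threshold}


if __name__ == "__main__":
    for carrier, count in flag_repeat_carriers(incidents).items():
        print(f"{carrier}: {count} incidents -> review for underlying problems")
```

A screen like this would not by itself establish risk, since incident counts are not normalized by activity levels, but it shows how existing incident records could be used to prompt inspector attention.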
NASA’s Aviation Safety Reporting System collects, analyzes, and responds to aviation safety incident reports voluntarily submitted by pilots and others to lessen the likelihood of aviation accidents. In its 2005 Safety Management Manual, the International Civil Aviation Organization noted that for accidents, there are precursors evident before the accident, and focusing solely on instances of serious injury or significant damage is a wasted opportunity, since the factors contributing to such accidents may be present in hundreds of incidents. The International Civil Aviation Organization further noted that “effective safety management requires that staff and management identify and analyze hazards before they result in accidents,” particularly since there is the opportunity “to identify why the incidents occurred and, equally, how the defenses in place prevented them from becoming accidents.” The National Academy of Engineering undertook an accident precursor project in February 2003, which culminated in a report that included aviation accident precursor analysis and management. The report concluded that existing initiatives are not as effective as they could be and encouraged government agencies that regulate high-hazard industries to increase their support of research into methods for effectively analyzing and managing precursors.

Many government and industry efforts to improve safety focus primarily on large carriers. Such efforts include programs in which carriers and employees voluntarily disclose potential safety issues, attempts by carriers to institutionalize their safety procedures through safety management systems (SMS)—a proactive, risk-based approach to addressing potential hazards—and FAA’s ATOS oversight program, even though there is nothing intrinsic to small cargo carriers that precludes these concepts from being implemented among them. Cargo experts view voluntary disclosure programs, efforts by associations to improve their members’ safety procedures, and carrier-implemented SMSs as the most effective current safety programs affecting air cargo (see fig. 7).

The intent of voluntary disclosure programs is to identify and correct safety problems in a nonpunitive way and to provide additional safety information to FAA. Our panel of experts ranked FAA’s voluntary disclosure programs as the most effective current program for improving air cargo safety. Specifically, 16 of 27 experts ranked FAA’s voluntary disclosure programs as one of the most effective current efforts to improve air cargo safety, and all experts able to judge indicated that the programs were effective on some level. FAA operates multiple voluntary disclosure programs, which use different data sources to help identify safety deficiencies. The three major ones are Flight Operations Quality Assurance (FOQA), the Aviation Safety Action Program (ASAP), and the Voluntary Disclosure Reporting Program (VDRP). At the current time, FOQA is used only by large carriers because of the level of technology required, and ASAP is not typically used by small carriers. FOQA collects and makes available for analysis digital flight data generated during the normal operations of the 23 participating carriers. As of January 2008, the only cargo carriers participating in FOQA were large carriers. Participating carriers pay for the special flight data recorders that can record FOQA data; these recorders cost approximately $20,000 each.
Although such an investment can be expensive for some air carriers, some aircraft models come with the data recorder already built in. However, smaller carriers tend to operate older aircraft, which lack the data recorder equipment. ASAP encourages industry employees to report safety information that may be critical in identifying potential precursors to accidents. Under this program, employees of air cargo carriers and other participating entities report safety events, which a committee that includes the carrier, the employee labor group, and FAA reviews to determine appropriate corrective actions, such as remedial training. FAA agrees not to pursue enforcement actions for safety violations reported exclusively under this program. As of December 2008, 73 carriers participated in ASAP, including 8 cargo carriers. Seven of these 8 cargo carriers were large carriers, possibly because large carriers are more likely than small carriers to have the time and resources required for participation. Officials from an ad hoc carrier we interviewed said it was not practical for their carrier to enter into an agreement with FAA and then organize meetings to discuss disclosures when the carrier could operate an informal safety issue disclosure program internally. VDRP encourages regulated entities, such as air carriers or repair stations, to voluntarily report instances of regulatory noncompliance. FAA does not take legal action on VDRP disclosures, but a violation with the same root cause can be reported only once by a carrier. Cargo carriers of all types we interviewed indicated that they have participated or would participate in VDRP. However, a 2008 DOT Blue Ribbon Panel found that FAA did not routinely analyze VDRP data to identify trends and patterns that could indicate safety risks. FAA noted in commenting on a draft of this report that it began conducting regular analysis of VDRP data in January 2009 and that it modified the VDRP data software system and associated guidance to enable the identification of national trends of disclosures that represent the highest risk to safety.

Numerous industry efforts to improve safety are aimed at different sectors of the air cargo industry. These include efforts by membership associations to improve safety among their members, which 14 of the 27 experts on our panel ranked among the top three most effective current efforts to improve air cargo safety—second only to voluntary disclosure. In addition, all experts in a position to comment indicated that these association efforts were at least slightly effective, and 11 of those experts indicated that they were greatly effective. We were unable to identify a central clearinghouse for these association efforts, but the ones we did identify fell into the following six general categories: establishing SMSs, providing fatigue awareness training, providing pilot skills training, adding on-board safety systems, improving flight risk assessments, and providing cargo-specific aircraft rescue and fire-fighting training. One expert said that membership-based efforts are often the most effective because they directly reflect the voluntary priorities of the membership organization and are often directly tailored to the group’s specific needs. Examples of industry efforts, including some by industry associations and one joint industry-federal effort, follow.
The Regional Air Cargo Carriers Association—an association of primarily feeder cargo carriers—has developed an SMS template tailored to the needs of smaller, feeder cargo carriers. A Regional Air Cargo Carriers Association official said that the organization’s members found FAA’s SMS guidance appropriate for large carriers with safety departments, but less useful for feeder or ad hoc carriers that might have only a few employees. SMSs are considered in the international community to be an important way to improve safety in aviation operations and are required by the International Civil Aviation Organization. In commenting on a draft of this report, FAA indicated that it has not endorsed the Regional Air Cargo Carriers Association’s SMS program, nor does FAA believe it is consistent with FAA training and program guidelines for SMS. SMS is discussed in more detail later in this report.

The National Air Transportation Association (NATA)—an association primarily of general aviation service companies whose members include some feeder and ad hoc carriers—has tailored flight risk assessments to the needs of its members by automating much of the assessment process. NATA officials said that feeder or ad hoc cargo carriers, like many general aviation operators, do not have the support personnel that larger carriers have to help with preflight checklists and other tasks, and that risk assessments would just add to those tasks unless they were largely automated. NTSB has recommended that a segment of the part 135 community—emergency medical services—utilize flight risk assessments before accepting flights.

In some cases, large carriers act as membership associations by helping their feeder network acquire safety enhancements. For example, the Federal Express feeder program, which helps finance on-board safety enhancements on feeder aircraft, has reduced the number of accidents among its feeder network, according to Federal Express officials.

Since 2002, Alaska’s Medallion Foundation—a federally funded safety promotion organization that is overseen by FAA—has offered training for part 135 pilots and has developed safety audits that can lead to carrier certifications in various areas, such as operational risk management, maintenance and ground service, and internal evaluation, to improve air transportation safety in Alaska. According to FAA, it has also approved a modified protocol for ASAP administered centrally by the Medallion Foundation in order to increase the feasibility of ASAP for small operators in Alaska. Meetings with FAA personnel to review reports submitted under the program are conducted by telephone, and all reports are tracked in a central database, according to the agency. DOT said that the modified ASAP protocol with the Medallion Foundation has worked well for enabling small, remotely situated operators in Alaska to participate in the program.

The Commercial Aviation Safety Team, a joint FAA-industry effort, has developed an integrated, data-driven strategy to reduce the commercial aviation fatality risk in the United States and promote new government and industry safety initiatives throughout the world. The Team has completed work on 40 of its 65 safety enhancements aimed at eliminating accident causes. Six of the safety enhancements specifically target cargo operations, and each of them is still under way as of the Team’s most recent update in May 2007.
The Dallas-Fort Worth International Airport is developing a curriculum and a cargo-specific aircraft rescue and fire-fighting training course. Airport officials said that the course will focus on the unique challenges and approaches to fighting a cargo fire. For example, airport officials said that many aircraft rescue and fire-fighting teams treat passenger/cargo fires and cargo fires similarly when they should be treated differently, because passenger/cargo fires can involve hundreds of people whereas cargo fires typically only endanger the flight crew. The course will also provide hands-on training in the use of cargo-specific fire-fighting tools, such as hull-penetrating tools or devices for locating hot spots—tools that most firefighters rarely use.

Thirteen cargo experts ranked carrier SMSs as one of the most effective current efforts to improve air cargo safety, making this the third most frequently cited current safety effort. SMSs can differ in their specifics, but FAA defines SMS as a proactive, risk-based approach to addressing potential hazards by categorizing the risk level and taking appropriate mitigating actions to reduce the risk to an acceptable level. Some countries, such as Canada, require carriers to implement an SMS, but the United States does not yet require domestic carriers to do so. FAA and industry officials agreed that FAA will require part 121 carriers to implement SMSs in the next few years. In addition, FAA has issued guidance on developing an SMS, but none of the air cargo carriers we interviewed have implemented one. However, several of the air cargo carriers we interviewed said they had safety programs that are similar to SMSs. An expert in aviation safety said that effective SMSs are good for institutionalizing safety improvements and taking proactive steps to reduce the number of accidents. He further noted that larger companies with more airplanes and more resources are better positioned to do this. By contrast, companies with one airplane and one pilot will not have enough staff time to submit the paperwork. As a result, the Regional Air Cargo Carriers Association—a membership organization for feeder carriers—has developed simplified SMS guidance specifically for part 135 cargo carriers. Some airports are also implementing SMSs. Most experts in our panel did not rank airport SMSs among the most effective current efforts, possibly because cargo airports have not implemented them nationwide.

FAA uses airworthiness directives (AD) and operations specifications to improve aviation safety. An AD is a notification to owners and operators of aircraft that a particular model of aircraft, engine, avionics, or other system has a known safety deficiency that must be corrected. Carriers are prohibited from operating any aircraft that is out of compliance with any applicable AD. Operations specifications are specific limits and requirements developed for individual operators, such as the specific aircraft the carrier is allowed to operate. Ten experts on our panel ranked ADs and operations specifications among the three most effective current efforts to improve air cargo safety. Our interviews with carriers showed that some carriers depend on ADs and operations specifications to learn about safety issues that other carriers have discovered. An official from one feeder carrier said that ADs are like product recalls, and without them, she would never know that there was a problem until there was an accident.
FAA also uses other methods for communicating with carriers, such as informational messages, alerts, advisory circulars, and seminars. However, none of the 27 experts on our panel indicated that FAA informational materials and seminars were any more than moderately effective at improving the safety of air cargo operations.

FAA oversees the compliance of all 58 large part 121 cargo carriers with safety regulations by using ATOS, which applies a risk-based inspection system tailored to each carrier regulated under part 121. For example, under ATOS, principal inspectors develop surveillance plans for each airline based on data analysis and risk assessment, and adjust the plans periodically to reflect inspection results. Under ATOS, principal inspectors are assigned to just one part 121 carrier. Our interviews with part 121 carriers and FAA inspectors revealed mixed opinions about ATOS. Some of the carriers, particularly the smaller part 121 carriers, indicated that transitioning to ATOS was too complicated and costly and that its focus on administrative reviews has reduced the number of on-site FAA inspections they receive. In addition, some FAA inspectors said that the ATOS paperwork is time-consuming and can have the effect of tethering them to their computers. For example, officials from a small part 121 air carrier said that they had to hire a full-time person to work on implementing ATOS as well as spend over $500,000 to hire a company to help revise the carrier’s manuals to satisfy FAA requirements under ATOS. However, officials from a large carrier indicated that ATOS is a more effective oversight system than NPG once it is fully implemented. In addition, some officials from large carriers said that the bureaucratic nature of ATOS limited the amount of direct oversight they receive. FAA officials said that ATOS does not impose new requirements on carriers, and FAA does not require carriers to set up an ATOS program. FAA officials also said that because ATOS is more robust than the oversight system it replaced—NPG—inspectors may find omissions in manuals that were overlooked before. Carriers may then be required to correct these deficiencies to meet regulatory requirements. In addition, FAA officials said they are exploring options for reducing the time inspectors spend at their computers and increasing the time they spend doing hands-on inspections. For example, FAA officials said that FAA is reducing the number of ATOS program elements in order to make inspection planning and management easier.

The 303 part 135 carriers remain under the NPG system, which requires all active carriers to be inspected at least once a year and sets the numbers of required inspections nationally and planned inspections at the local level. Some FAA inspectors we interviewed who use the NPG system said they do not base their planned inspections on risk factors but, rather, on what was done the previous year or what they have time to do (NPG inspectors typically oversee several carriers). FAA officials said that FAA is moving toward a risk-based oversight “Safety Assurance System” for part 135 carriers. FAA has completed a gap analysis that compared the existing part 135 oversight system to the system requirements of the new system, and it plans to have the risk-based system developed for part 135 carriers by 2013. The number of inspections that part 135 carriers receive each year varies greatly under NPG.
For fiscal years 2004 through 2007, we found that 18 part 135 carriers received one inspection in a year, while 6 part 135 carriers received hundreds of inspections in a year. An FAA official said that variation should be expected. For example, carriers with more airplanes, airplanes of more types, and routes to more regions will receive more required inspections under NPG. The number of part 135 carriers that each cargo inspector oversees also varies greatly. On the low end (bottom 10 percent), part 135 cargo inspectors oversee 5 passenger and cargo carriers on average, and on the high end (top 10 percent), inspectors oversee 34 passenger and cargo carriers. While some variation is unavoidable, FAA officials said that FAA has not established guidelines to ensure that the workload is balanced among inspectors. The National Academy of Sciences also found that FAA had inadequate staffing standards for its safety inspectors.

The majority of the experts on our panel did not rank FAA oversight and inspections among the most effective current efforts to improve air cargo safety. Although 5 of the 27 experts rated FAA oversight and inspections among the three most effective current efforts to improve air cargo safety, none of the experts representing carriers’ perspectives listed FAA oversight among the most effective current efforts. Officials from an ad hoc cargo carrier said that FAA inspectors do not have enough specific knowledge of cargo operations to effectively oversee cargo operations. Two FAA cargo inspectors who also oversee passenger carriers said that their formal cargo oversight training consisted of a 2-hour online course. FAA officials said part 135 does not differentiate between passenger and cargo operations. However, FAA officials recognized that inspectors who oversee part 135 ad hoc operations may benefit from additional training, and FAA is revising an existing multiday course for maintenance inspectors to address cargo operations. Our analysis of oversight data for part 135 carriers showed that cargo inspectors also oversee passenger carriers, often in larger numbers. This could limit inspectors’ ability to focus on cargo-specific issues. Officials from several cargo carriers of different types said that FAA inspectors do not do enough on-site inspections to effectively find and correct safety problems. For example, FAA inspectors and carrier officials said that part 135 inspectors often focus on administrative reviews. FAA officials said there needs to be an appropriate balance of on-site inspections and administrative reviews, both of which are important for determining carrier compliance and ensuring safe operations. Ultimately, however, FAA officials said that regulatory compliance is an air carrier responsibility; FAA is responsible for ensuring that air carriers are capable of complying and, in some cases, administrative reviews may be the best way for FAA to do that.

FAA’s oversight includes enforcement efforts, which are designed to promote compliance with statutory and regulatory requirements for aviation safety. When violations are identified, an FAA order calls for inspectors to take the actions most appropriate to achieve future compliance. These actions range from educational and remedial efforts, to administrative actions (such as warning notices), to punitive legal sanctions (such as fines or loss of operation certificate). Violations can be identified by FAA inspectors or by others, such as air traffic controllers or state or local government officials.
The relevant FAA inspector prepares a report and recommends an enforcement action. That report and proposed enforcement action are then reviewed and possibly changed at various levels depending on the nature of the recommended enforcement action. FAA closes most air cargo regulatory violations with administrative action, such as a warning notice, or without taking any action. For 1997 through 2008, over half (56 percent) of the 6,564 enforcement actions against cargo carriers were administrative, and another 17 percent involved no action. These were very similar to the percentages for passenger carriers. Within cargo, the ad hoc sector had the lowest percentage of legal actions. Fourteen percent of ad hoc carrier enforcement actions were legal, compared with 24 percent of large carrier enforcement actions and 16 percent of feeder carrier enforcement actions. When FAA reduced initially recommended fines, ad hoc cargo carriers had the largest reductions, on average, among cargo carrier types, with the reduced fines 64 percent below the initially recommended fines. Our previous work found that FAA reduced legal actions for several reasons, including proof that the violator took corrective action to prevent a recurrence of the violation or economic hardship that might accrue to the entity that caused the violation. The percentages of cargo carrier violation cases closed with administrative or no action represent a continuation of trends that we observed in our last work on FAA enforcement in 2004. At that time, we found that FAA generally closes cases against passenger and cargo carriers with administrative actions, and reducing the amounts of fines may reduce the deterrent effects of those actions. Since then, our analysis of FAA’s Enforcement Information System data shows that the share of violations resolved using administrative or no action has increased slightly. Moreover, FAA still lacks information on how these actions have influenced the effectiveness of the enforcement actions, and the recommendation we made in that report—that FAA develop a process for measuring the performance and effectiveness of its enforcement actions—remains open.

NTSB investigates transportation accidents, including air cargo accidents, and makes recommendations to improve safety. NTSB made numerous recommendations based on air cargo accidents, but not all of those were related specifically to cargo issues. For example, after an air cargo accident in 1997, NTSB recommended that FAA require all part 121 air carriers to include additional information and training to flight crews in order to avoid that type of accident in the future. Other NTSB recommendations, however, are cargo-specific recommendations, and all of them remain open. Table 2 summarizes the status of NTSB’s cargo-specific recommendations. Most experts did not rank NTSB recommendations among the top efforts to improve air cargo safety. One expert on our panel described NTSB recommendations as the most effective effort to improve air cargo safety, and another 8 of the experts in our panel ranked NTSB recommendations as one of the top three efforts, but 18 experts did not include them among the top three efforts. According to our interviews, NTSB recommendations related to cargo operations are not always practical to implement.
For example, one carrier noted that NTSB’s recommendation related to aircraft fire-fighting information (cited above) is challenging to implement because there are many cargo aircraft configurations, and even knowledge of the configuration involved in the incident that prompted the recommendation would not have helped firefighters respond to the incident. Despite these reservations, the carrier indicated that it is helping to implement the recommendation because NTSB believes it would improve safety. NTSB also holds public hearings and meetings on topics of particular interest to transportation safety personnel. For example, it held a forum on air cargo safety in 2004, but no explicit recommendations emerged from the forum. Five experts on our panel ranked NTSB’s meetings as one of the three most effective current air cargo safety efforts, and none of the experts ranked the meetings as the most effective effort.

The 27 experts in our panel commented on numerous additional steps that could further improve air cargo safety. Experts in our panel ranked installing state-of-the-art on-board safety systems on all cargo aircraft and tracking part 135 operations as the potential measures that would most improve air cargo safety (see fig. 8). Adding state-of-the-art on-board safety technology was the potential measure to improve air cargo safety endorsed by the most experts in our panel. Better on-board technology, particularly for smaller aircraft, could provide additional tools to better inform pilots’ judgment and decision making. One expert said that the type of technology needed depends on the type of carrier. He noted that large carriers often already have state-of-the-art technology, and that keeping pace with new technologies as they emerge is the challenge for them. However, he said that feeder and ad hoc air cargo carriers are not required to have certain on-board technologies, and that they would benefit from installing better situational awareness technologies, like Traffic Alert and Collision Avoidance Systems (TCAS) or Automatic Dependent Surveillance-Broadcast (ADS-B), on their aircraft. TCAS monitors nearby air traffic and warns the pilot of potential collision dangers, and ADS-B uses satellite-based technology to broadcast aircraft identification, position, and speed with once-per-second updates. Other experts on the panel also indicated that TCAS and ADS-B would most improve safety on the smaller aircraft that lack them. FAA’s Capstone Program in Alaska has shown that better technology can reduce aircraft accidents. As described earlier, Alaska’s challenging operating conditions factored prominently in air cargo accidents over the last decade. The Capstone Program funded technology upgrades that provide pilots with information on terrain, weather, and air traffic. FAA’s goal was to reduce Alaska’s higher-than-average aviation accident rate. FAA has stated that an independent study found that, from 2000 through 2004, accidents for Capstone-equipped aircraft were reduced by 47 percent.

However, the experts on our panel were not unanimous about the potential for improving safety through better on-board technologies. The experts who represent the part 135 perspective did not rate improving on-board safety systems as highly as the other experts did. Only one of the six part 135 pilots or carriers we surveyed ranked this as one of the top three potential improvements. The other five indicated that improving on-board safety systems is not feasible or would only slightly improve air cargo safety.
For example, some part 135 ad hoc and feeder carriers we interviewed indicated that state-of-the-art on-board safety systems are not affordable relative to the value of the aircraft. Specifically, officials from one ad hoc cargo carrier said that traffic collision avoidance systems (such as TCAS or ADS-B) installed on a Cessna, valued at $100,000 to $200,000, would make the biggest improvement in safety, but such a system would cost about $25,000 to install on each aircraft, which the officials said was not practical.

As stated earlier, we were unable to determine accident rates for small feeder or ad hoc cargo carriers because FAA does not track part 135 operations. However, operational data—such as flight hours or landings—for on-demand part 135 cargo operations would allow analysts to determine accident rates for all cargo carriers as they currently can do for all part 121 (large) carriers. This is important because a higher proportion of air cargo accidents and nearly all fatalities occur to part 135 (small) carriers. Many experts who responded to our survey indicated that better data on part 135 cargo flights and operations could improve air cargo safety. Specifically, tracking these data was ranked among the top measures with the greatest potential by the second largest share of experts in our panel. Industry experts and officials we spoke with said there is a need for FAA to have this type of data. For example, NTSB recommended in 2003 that FAA collect additional operational data from small air carriers in order to generate accident and incident rate information for all sectors of commercial aviation, including air cargo, but the recommendation remains open because an FAA official said that FAA chose not to collect the information. In addition, numerous industry stakeholders told us that not having these data precludes FAA from effectively targeting its safety initiatives. One of the experts on our panel said that people might assume that the part 135 carriers with the most accidents are the ones with the poorest safety records, but that may not be the case when the number of operations is considered. One official from a part 135 carrier said that the industry is generally against greater reporting because it would increase workload. However, he stated that if FAA begins requiring such data, companies will find a way to comply because they already collect the data. We interviewed part 135 carriers of various sizes, and they all indicated that they already track operational data internally and could report these data to FAA without a substantial additional effort, although these carriers were not selected to be a representative sample.

As discussed earlier in this report, the regulations under which most large cargo carriers typically operate (supplemental) differ from the regulations under which most passenger carriers operate (domestic or flag). Although part 121 pilot experts on our panel indicated that aligning cargo regulations with passenger carrier regulations would improve safety, carrier experts generally disagreed. All three pilots, who fly under part 121, ranked alignment of regulations as the top potential measure to improve air cargo safety. As an example, the Air Line Pilots Association, the employee organization for most commercial U.S. pilots, supports the alignment of duty time regulations for all part 121 carriers.
As discussed earlier in this report, the regulations under which most large cargo carriers typically operate (supplemental) differ from the regulations under which most passenger carriers operate (domestic or flag). Although part 121 pilot experts on our panel indicated that aligning cargo regulations with passenger carrier regulations would improve safety, carrier experts generally disagreed. All three pilots, who fly under part 121, ranked alignment of regulations as the top potential measure to improve air cargo safety. As an example, the Air Line Pilots Association, the employee organization for most commercial U.S. pilots, supports the alignment of duty time regulations for all part 121 carriers. The Association believes that longer flight times can increase pilot fatigue and thus increase mistakes and accidents, and, as stated earlier in this report, the experts rated pilot fatigue as a serious challenge to safe cargo operations. On the other hand, none of the seven carrier experts ranked aligning cargo regulations with passenger regulations among their top three potential measures for improving air cargo safety. Five of them indicated that aligning regulations would provide slight or no improvement, and the other two indicated that aligning regulations would not be feasible. Although not specifically related to cargo operations, NTSB has also recommended that FAA revisit its flight time and duty regulations because they may be related to fatigue. For example, in 2008, it recommended that FAA develop guidance for operators to use in establishing fatigue management systems and then continually assess the effectiveness of the systems. It also recommended in 1995 that FAA review its flight and duty time regulations to include the findings of fatigue and sleep research. These recommendations remain open because FAA has not completed its actions related to these issues. In commenting on a draft of this report, FAA said that it convened a committee to address pilot fatigue issues and that the data it collects may be used in future rule-making efforts regarding pilot flight time, duty, and rest regulations. As described earlier, many air cargo accidents over the last 10 years occurred in conditions of accumulated risk—when several risk elements were present, but none was individually significant enough to result in the flight’s cancellation. FAA, the Flight Safety Foundation, and NATA have each developed tools that pilots or carriers could voluntarily use to assess accumulated risk factors and determine if the flight should go forward. These tools assign values to various risk elements, such as single pilot operations, night flights, and flights into areas without accurate weather reports. See appendix III for FAA’s sample flight risk assessment tool. Eight of the 27 experts on our cargo safety panel ranked flight risk assessment as one of the three potential efforts that could most improve safety. Additionally, 18 experts indicated that incorporating flight risk assessment checklists into daily air cargo operations would have a moderate or great effect on improving the safety of their operations. One expert on our panel said that flight risk assessment cannot prevent all accidents and that not all flights with multiple accumulated risk factors have accidents, but assessing the risk factors may help pilots reduce the number of cargo accidents by recognizing when the accumulated risks become unacceptably high, at which point pilots could either find ways to mitigate those risk factors or delay the flight. Despite the high level of support among our expert panelists for using flight risk assessment checklists, only 1 of the 10 carriers we interviewed used them in their daily operations. Officials from one part 135 carrier said that carriers are too busy doing everything FAA requires to devote much attention to ideas that might be very productive but are nonetheless optional. Other experts pointed out that all of the flight risk assessment checklists currently available were designed for passenger operations and that cargo carriers would have to tailor the tools to their needs—potentially a critical obstacle to implementation.
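As a concrete illustration of how such a checklist operates, the Python sketch below assigns point values to a few risk elements, sums the values for the elements present on a given flight, and compares the total against decision thresholds. The specific factors, point values, and thresholds here are hypothetical simplifications, not the values in FAA’s sample tool (see app. III); they are intended only to show the go, mitigate, or delay logic that these tools embody.

    # Sketch of a preflight flight risk assessment checklist. The risk
    # factors, point values, and decision thresholds below are illustrative
    # only; they are not taken from FAA's actual sample tool.

    RISK_VALUES = {
        "single pilot operation": 5,
        "night flight": 4,
        "no accurate weather report at destination": 4,
        "pilot has less than 100 hours in type in last 90 days": 3,
        "mountainous terrain": 3,
    }

    GO_THRESHOLD = 10        # total at or below this: flight may proceed
    MITIGATE_THRESHOLD = 15  # above GO but at or below this: mitigate first

    def assess_flight(present_factors):
        """Sum the risk values of the factors present and return a decision."""
        score = sum(RISK_VALUES[f] for f in present_factors)
        if score <= GO_THRESHOLD:
            decision = "go"
        elif score <= MITIGATE_THRESHOLD:
            decision = "mitigate risk factors or obtain supervisor approval"
        else:
            decision = "delay or cancel the flight"
        return score, decision

    # Example: an ad hoc cargo flight with several accumulated risk factors.
    factors = ["single pilot operation", "night flight",
               "no accurate weather report at destination"]
    score, decision = assess_flight(factors)
    print(f"total risk score {score}: {decision}")

A carrier tailoring such a tool for cargo operations would substitute its own factors and cutoffs, which is the kind of adaptation the experts described as necessary.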
Aviation in the United States remains safe, and air cargo accidents have declined over the last 10 years, although fatal accidents do occur every year. Most of those accidents and nearly all of the fatal accidents in the last decade have involved feeder and ad hoc carriers. However, the lack of operational data for part 135 carriers, which make up the bulk of the feeder and ad hoc carriers, makes it impossible to determine accident and fatality rates for small carriers or to track cargo-wide accident or fatality rates over time. FAA’s information on small carrier operations is based on its annual survey of aircraft owners, which does not differentiate between passenger and cargo operations, making it impossible to use the survey results for cargo operators. While the numbers of accidents suggest that the fatality rates for feeder and ad hoc carriers are higher than the rates for large carriers, it is impossible to know how much higher the fatality rates are for feeder and ad hoc carriers without data on the numbers of operations for all types of cargo aircraft. It is also difficult for FAA and industry to target further safety improvements to the areas with the highest risk. Despite the higher numbers of accidents and fatal accidents among small cargo carriers, FAA’s safety programs have focused primarily on large cargo carriers, the industry segment in which accidents and accident rates have steadily declined. While it makes sense to focus first on large carriers, which operate larger aircraft with larger crews and cargo holds, the safety of the smaller aircraft is also important. There is nothing intrinsic to small carriers that precludes risk-based oversight, voluntary disclosure programs, or the use of SMSs, but these efforts are usually targeted toward, or at least primarily used by, the large cargo carriers. However, cost is a concern for carriers, and poor economic conditions throughout the air cargo sector may mean that few funds will be available in the near term for new safety initiatives. In addition, FAA has increasingly focused on identifying potential accident precursors in order to reduce the risk of accidents before a related accident occurs. However, neither FAA nor NTSB systematically tracks incidents in a way that would allow empirical analysis, even though incidents are widely viewed as accident precursors. Over half of the fatal air cargo accidents since 1997 had multiple risk factors, yet preflight risk assessment checklists are not required within the cargo industry. The concept of assessing and recognizing accumulated risk through flight risk assessment presents an additional low-cost opportunity for identifying and reducing the risk associated with some cargo flights that might otherwise go unnoticed. To help FAA improve the data on and the safety of air cargo operations, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following four actions: Gather comprehensive and accurate data on all part 135 cargo operations to gain a better understanding of air cargo accident rates and better target safety initiatives. This can be done by separating out cargo activity in FAA’s annual survey of aircraft owners or by requiring all part 135 cargo carriers to report operational data as part 121 carriers currently do. Promote the increased use by small (feeder and ad hoc) cargo carriers of safety programs that use the principles underpinning SMSs and voluntary self-disclosure programs.
Evaluate the likelihood that cargo incidents could be precursors to accidents and, if FAA determines they are, create a process for capturing incidents that would allow in-depth analysis of incidents to identify accident precursors related to specific carriers, locations, operations, and equipment. Create incentives for cargo carriers to use flight risk assessment checklists in their daily operations, including tailoring a sample flight risk assessment checklist for part 135 cargo carriers. We provided copies of a draft of this report to DOT and NTSB for their review and comment. Both agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Chairman of the National Transportation Safety Board. We are also making copies available to others on request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objective in conducting this study was to review the nature and extent of safety issues in the air cargo industry and what the Federal Aviation Administration (FAA) and others are doing and could do to address them. To accomplish this objective, we established the following research questions: (1) What have been recent trends in air cargo safety? (2) What factors have contributed to air cargo accidents in recent years? (3) What have FAA and the industry done to improve air cargo safety, and how do experts view the effectiveness of these efforts? (4) What do experts say FAA and industry could do to further improve air cargo safety? To determine trends in air cargo safety, we obtained and analyzed accident and incident data for calendar years 1997 through 2008. From the National Transportation Safety Board (NTSB), we obtained accident data for part 121 and part 135 all-cargo and mail operations that occurred from January 1, 1997, through December 31, 2008. To capture the full extent of cargo operations in Alaska, however, we also included passenger/cargo accidents because Alaska’s by-pass mail system, which requires carriers to have a certain share of the passenger market to obtain a by-pass mail contract, resulted in fewer all-cargo carriers in that state. From these data we identified 443 fixed-wing aircraft accidents, including 93 fatal accidents. Six of the accidents involved 2 aircraft, so our data included a total of 449 accident aircraft. From FAA, we obtained data on part 121 and part 135 fixed-wing all-cargo accidents and incidents that occurred from January 1, 1997, through December 31, 2007. From these data, we eliminated accidents that were also included in the NTSB data, for a total of 937 accidents and incidents. 
To avoid confusion when discussing the two data sets, we refer to the FAA data as “incidents” and the NTSB data as “accidents.” We had previously determined that these data were sufficiently reliable for the nationwide descriptive and comparative analyses used in this report, and we interviewed agency officials knowledgeable about the databases from which the data were derived to confirm that the accident and incident data continue to be sufficiently reliable for the types of analyses we performed. We also obtained information on industry trends by conducting a literature search and reviewing the resulting documents, conducting a survey of air cargo experts, and interviewing officials and reviewing relevant documents from FAA, the Pipeline and Hazardous Materials Safety Administration (PHMSA), air cargo industry associations, air cargo carriers, airports, an employee group, and others. We also conducted site visits to Alaska, Ohio, and Texas. We selected those locations because they are geographically diverse, are among the states with the largest numbers of air cargo accidents, or are home to a relatively large number of air cargo carriers of various sizes. To assess what factors have contributed to air cargo accidents in recent years, we conducted several analyses. First, to determine prominent accident causes, we analyzed data on probable causes and contributing factors from completed NTSB investigations of 417 air cargo accidents. Second, to assess accumulated risk, we applied FAA’s proposed flight risk assessment tool to NTSB’s reports on the 93 fatal cargo accidents that occurred during our review period. To do this, we searched each accident report for the 38 risk factors in the tool, such as “pilot flight time less than 100 hours in the last 90 days.” For each factor found, we noted its corresponding risk value on an Excel spreadsheet and tabulated the total score as well as the total number of risk factors for each fatal accident.
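The tabulation just described can be expressed in a few lines of code. The Python sketch below mirrors the spreadsheet exercise using entirely hypothetical report identifiers, risk factors, and point values: for each accident report it totals the risk score, counts the factors present, counts how many accidents had three or more factors, and tallies how often each factor appeared. None of the values shown are drawn from the 93 actual reports or from FAA’s actual tool.

    from collections import Counter

    # Hypothetical risk factors and point values; FAA's tool lists 38 factors.
    RISK_VALUES = {
        "pilot flight time less than 100 hours in the last 90 days": 3,
        "night operation": 4,
        "winter weather": 4,
        "mountainous terrain": 3,
        "single pilot operation": 5,
    }

    # Hypothetical accident reports and the risk factors noted in each.
    reports = {
        "ACC-001": ["night operation", "winter weather",
                    "single pilot operation"],
        "ACC-002": ["mountainous terrain"],
        "ACC-003": ["pilot flight time less than 100 hours in the last 90 days",
                    "night operation", "mountainous terrain", "winter weather"],
    }

    factor_tally = Counter()    # how often each factor appears across reports
    multi_factor_accidents = 0  # accidents with three or more factors

    for report_id, factors in reports.items():
        total_score = sum(RISK_VALUES[f] for f in factors)
        factor_tally.update(factors)
        if len(factors) >= 3:
            multi_factor_accidents += 1
        print(f"{report_id}: {len(factors)} factors, total risk score {total_score}")

    print(f"{multi_factor_accidents} of {len(reports)} accidents had 3 or more factors")
    for factor, count in factor_tally.most_common():
        print(f"  {factor}: found in {count} report(s)")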
Third, for indications of other factors contributing to air cargo accidents, we surveyed a panel of air cargo experts, which is discussed in more detail below; analyzed documents and interviewed officials from FAA, PHMSA, NTSB, air cargo industry associations, air cargo carriers, airports, an employee group, and others; and conducted site visits to Alaska, Ohio, and Texas (see the previous paragraph). To determine what FAA and the air cargo industry have done to improve safety, we interviewed FAA and industry officials, reviewed key documents, and analyzed FAA’s oversight and enforcement data for all-cargo carriers. We interviewed officials and tested the data and found them sufficiently reliable for our purposes. To obtain experts’ opinions about how FAA and the air cargo industry could further improve air cargo safety, we surveyed a panel of 27 air cargo safety experts. The experts rated and provided relative rankings on the effectiveness of current efforts to improve air cargo safety, the severity of safety challenges faced by the air cargo sector of aviation, and the potential improvements to air cargo safety that additional efforts could provide. We selected the panel of experts with the assistance of the National Academy of Sciences to represent the perspectives of a cross-section of air cargo stakeholders. The specific experts, their affiliations, and their expert perspectives are listed below. To develop our survey of air cargo experts, we reviewed existing studies about air cargo safety, including previous and ongoing GAO work, and interviewed air cargo safety stakeholders. GAO subject matter experts designed draft questionnaires in close collaboration with a social science survey specialist. We conducted pretests with four people knowledgeable in the field of air cargo (representatives from air carriers, airports, and air transportation associations) to help further refine our questions, develop new questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. These pretests were conducted in person and by telephone. We worked with the National Academy of Sciences and internally to develop the panel of experts and obtain contact information. We launched our Web-based survey on August 18, 2008, and received all responses by November 5, 2008. Log-in information to the Web-based survey was e-mailed to participants. We sent one follow-up e-mail message to all nonrespondents a week later, and contacted by telephone all those who had not completed the questionnaire within 3 weeks. We received responses from all 27 of our selected experts. Because our survey was not a sample survey, there are no sampling errors; however, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors. As indicated above, GAO subject matter experts collaborated with a social science survey specialist to design draft questionnaires, and versions of the questionnaire were pretested with four knowledgeable people in the air cargo field. From these pretests, we made revisions as necessary. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. A second, independent analyst checked the accuracy of all computer analyses. We worked with the National Academy of Sciences to identify air cargo experts who represented carrier, pilot, airport, aircraft manufacturer, government, and human factors and safety performance perspectives. We sent our Web-based survey to 27 air cargo experts and received responses from all 27 experts. Our survey was composed of closed- and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix I.
Q1. In your opinion, how effective, if at all, is each of the following current efforts to improve the safety of cargo-only flights?
Q2. Considering the above list of efforts (question 1 a-j), which three do you believe are the most effective in improving air cargo safety?
Q3. Besides the efforts listed above, do you know of any other current significant efforts to improve cargo-only aviation safety? If so, please explain.
Q4. In your opinion, how much of a challenge, if any, does each of the following issues pose to safely operating cargo-only flights?
Q5. Considering the above list of challenges (question 4 a-n), which three do you believe are the greatest challenges to air cargo safety?
a. Pilot fatigue related to nighttime flying, ineffective rest periods, or commuting
b. Carrier policies that, by their nature, give pilots economic incentive to fly in less-than-ideal conditions
c. Flights scheduled with less than 4 hours of notice given to the crew
d. Low piloting experience overall, as well as low piloting experience in the specific types of cargo aircraft operated
f. Variation within the cargo-only sector regarding the priority given to safety
g. Availability of aircraft rescue and fire-fighting services provided by persons with cargo-specific knowledge and training
h. Difficult cargo-only flight and operating conditions (e.g., airports with limited technology, mountainous terrain, nighttime operations)
i. Alaska's aviation and operating conditions
Q6. Besides the challenges listed above, do you know of any other significant challenges to safe cargo-only air operations? If so, please explain.
Q7. In your opinion, how much improvement, if any, in the safety of cargo-only air operations would be provided if each of the following measures were implemented?
Q8. Considering the above list of possible measures (question 7 a-k), which three do you believe would provide the greatest improvement to air cargo safety?
a. FAA increasing the amount of on-site inspections that it conducts
b. training and knowledge of cargo-only air operations
c. Carriers and flight schools providing better training for cargo-only pilots
d. Department of Transportation (DOT) or Transportation Security Administration (TSA) increasing shipper knowledge and declaration of hazardous materials
e. DOT or TSA promoting better shipper compliance with rules for the handling and transport of declared hazardous materials
f. standards for ramp employees involved in loading and unloading cargo
g. Industry setting uniform standards for ramp employees involved in loading and unloading cargo
h. incorporating flight risk assessment checklists into their daily operations
i. FAA collecting and using part 135 and part 91 cargo flight and operations data to better target safety efforts
Q9. Besides the possible measures listed above, do you have any other suggestions for significantly improving cargo-only aviation safety? If so, please explain.
Q10. Please provide any other comments that you have regarding cargo-only safety.
In addition to those named above, Teresa Spisak (Assistant Director), Richard D. Brown, Keith Cunningham, Elizabeth Eisenstadt, Michele Fejfar, David Hooper, Mitchell Karpman, Valerie Kasindi, Sara Ann Moessbauer, Susie Sachs, Christine San, Pamela Vines, and Crystal Wesco made key contributions to this report.
From 1997 through 2008, 443 accidents involving cargo-only carriers occurred, including 93 fatal accidents. Total accidents declined 63 percent from a high of 62 in 1997 to 23 in 2008. Small cargo carriers were involved in the vast majority of the accidents--79 percent of all accidents and 96 percent of fatal accidents. Although accident rates for large cargo carriers fluctuated during this period, they were comparable to accident rates for large passenger carriers in 2007. GAO could not calculate accident rates based on operations or miles traveled for small carriers because the Federal Aviation Administration (FAA) does not collect the necessary data. Although several factors contributed to these air cargo accidents, GAO's review of National Transportation Safety Board (NTSB) data found that pilot performance was identified as a probable cause for about 80 percent of fatal and about 53 percent of non-fatal cargo accidents. Furthermore, GAO's analysis of NTSB reports for the 93 fatal accidents, using an FAA flight-risk checklist, identified three or more risk factors in 63 of the accidents. Risk factors included low pilot experience, winter weather, and nighttime operations. Alaska's challenging operating conditions and remotely located populations who rely on air cargo are also contributing factors. Many federal efforts to improve air cargo safety focus on large carriers. Air cargo experts that GAO surveyed ranked FAA's voluntary disclosure programs--in which participating carriers voluntarily disclose safety events to FAA--as the most effective effort to improve air cargo safety, but two of the three main voluntary disclosure programs are typically used by large carriers. Several industry initiatives, however, focus on carriers with smaller aircraft, such as the Medallion Foundation, which has improved small aircraft safety in Alaska through training and safety audits. The two actions experts cited most often to further improve air cargo safety were installing better technology on cargo aircraft to provide additional tools to pilots and collecting data to track small cargo carrier operations. Using flight risk checklists can also help pilots assess the accumulated risk factors associated with some cargo flights.
From May 2003 through June 2004, the CPA was the UN-recognized coalition authority led by the United States and the United Kingdom that was responsible for the temporary governance of Iraq and for overseeing, directing, and coordinating the reconstruction effort. Within the CPA, the Project Management Office (PMO) was established to provide prioritization and management of projects and contract support of U.S.-funded reconstruction projects. In May 2004, the President issued a National Security Presidential Directive, which stated that after the transition of power to the Iraqi government, the Department of State (State) through its ambassador to Iraq would be responsible for all U.S. activities in Iraq, with the exception of U.S. efforts relating to security and military operations, which would be the responsibility of the Department of Defense (DOD). On June 28, 2004, the CPA transferred power to a sovereign Iraqi interim government, and the CPA was officially dissolved. At that time, the U.S. role—under DOD leadership—changed from being part of the coalition-recognized authority for temporary governance of Iraq to supporting the sovereign Iraqi government as an ally and friend, under State leadership. Management authority and responsibility of the U.S. reconstruction program also transitioned at that time from DOD to State. The Presidential Directive also established two temporary offices: the Iraq Reconstruction and Management Office (IRMO) to facilitate transition of reconstruction efforts to Iraq; and the Project and Contracting Office (PCO) to facilitate acquisition and project management support for U.S.-funded reconstruction projects. Iraq-based personnel from both offices are under U.S. chief of mission authority in Baghdad, although the U.S. Department of the Army funds, staffs, and oversees the operations of the PCO. IRMO is a State Department organization and its responsibilities include strategic planning, prioritizing requirements, monitoring spending, and coordinating with the military commander. Under the authority of the U.S. Chief of Mission in Baghdad, the PCO’s responsibilities include contracting for and delivering services, supplies, and infrastructure funded by $12.4 billion of the $18.4 billion for Iraq relief and reconstruction in the fiscal year 2004 emergency supplemental passed by the Congress. (See fig. 1.) Other U.S. government agencies also play significant roles in the reconstruction effort. For example, USAID is responsible for projects to restore Iraq’s infrastructure, support healthcare and education initiatives, expand economic opportunities for Iraqis, and foster improved governance. The U.S. Army Corps of Engineers (USACE) provides engineering and technical services to the PCO, USAID, and military forces in Iraq, including planning, design, and construction management support for military and civil infrastructure construction. As of March 2005, U.S. appropriations, Iraqi revenues and assets, and international donor pledges totaling about $60 billion had been made available to support the relief and reconstruction and government operations of Iraq. U.S. appropriations of more than $24 billion for relief and reconstruction activities have been used largely for security and essential services—including the repair of infrastructure, procurement of equipment, and training of Iraqis—and have been reallocated over time as priorities have changed. 
Iraqi revenues and assets, which totaled about $23 billion in cumulative deposits, were turned over to the new Iraqi government in June 2004 and have largely funded the operating expenses of the Iraqi government. International donor funds have been primarily used for public and essential service reconstruction activities; however, most of about $13.6 billion pledged over a 4-year period is in the form of potential loans that have not been accessed by the Iraqis. As of March 2005, of the $24 billion in appropriated U.S. funds made available for relief and reconstruction in Iraq from fiscal years 2003 through 2005, about $18 billion had been obligated and about $9 billion had been disbursed. These funds were disbursed for activities that include infrastructure repair of the electricity and oil sectors; infrastructure repair, training, and equipping of the security and law enforcement sector; and CPA and U.S. administrative expenses. Many current U.S. reconstruction efforts are consistent with initial efforts the CPA developed before June 2004. As priorities changed, particularly since the transition of power to the Iraqi Interim Government, the U.S. administration reported that it had reallocated about $4.7 billion of the $18.4 billion fiscal year 2004 emergency supplemental among the various sectors. (See fig. 2.) These reallocations were reported in October 2004, January 2005, and April 2005. As of May 2005, the administration was assessing whether additional reallocations would be needed for short-term reconstruction efforts. In October 2004, the administration reported that it had reallocated appropriated funds from the $18.4 billion fiscal year 2004 emergency supplemental based on a review of all U.S. reconstruction funding priorities. The administration reported that it had reprogrammed about $1.8 billion to security and law enforcement and about $1.2 billion to economic and private sector development and governance activities. These funds were reallocated from future water and electricity infrastructure projects. In addition, about $450 million in the oil sector had been reprogrammed from refined fuel imports to oil reconstruction projects. This review, prompted by both the transition from the CPA to a new State Department-led mission and a significant increase in insurgent activity in mid-2004, determined that the deteriorating security situation, the desire of the interim Iraqi government to quickly expand its security forces, and the need to create more jobs for the Iraqi people demanded a significant reallocation of funding. In January 2005, the administration reported that it had reallocated $457 million. The administration reported that $246 million of this amount was for smaller projects to provide immediate and visible essential services in four cities—Fallujah, Samarra, Najaf, and Sadr City—affected by coalition battles with the insurgents. According to agency documents and officials, these services included critical health needs, power distribution, and potable water projects. This funding was shifted from longer term power generation, transmission, water, and hospital projects. The remaining $211 million of the reallocated funds was redistributed within the electricity sector from longer range transmission projects to more immediate needs, such as spare parts procurements, turbine upgrades, and repair and maintenance programs. 
In April 2005, the administration reported that it had reallocated $832 million—$225 million for job creation activities and $607 million for essential services projects and programs. To fund these efforts, the embassy cancelled five longer term potable water projects and future energy projects. The $225 million reallocation for job creation activities primarily includes activities in targeted Baghdad neighborhoods and through USAID’s Community Action Program throughout Iraq. Of the $607 million reallocation for essential services, $444 million is for the electricity sector, including operations and maintenance projects at a number of strategic power plants to reportedly enhance the sustainability of ongoing projects, the completion of several electricity generation and rehabilitation projects, and the coverage of cost growth due to increased security costs in the electricity sector. The remaining funds allocated for essential services programs include funds for gas/oil separation plants, operations and maintenance projects for water treatment plants recently turned over to the Iraqis, and prison and courthouse security projects. Iraqi funds, which totaled about $23 billion in cumulative deposits from May 2003 through June 2004, are a mix of revenues and assets that the CPA used primarily to support the Iraqi budget for operating expenses, such as salary payments and ministry operations. A smaller portion of the $23 billion—approximately $7 billion—was allocated for relief and reconstruction projects, primarily for the import of refined fuel products, security, regional programs, and oil and power projects. These Iraqi funds came from revenues in the Development Fund for Iraq (DFI) and vested and seized assets from the previous Iraqi regime. Of the $23 billion, nearly $17 billion had been disbursed as of June 28, 2004. The DFI was initially comprised of Iraqi oil proceeds, UN Oil for Food program surplus funds, and returned Iraqi government and regime financial assets. From May 2003 to June 2004, nearly $21 billion had been deposited, $17 billion allocated, and $14 billion disbursed. The CPA turned DFI stewardship over to the new Iraqi government in June 2004. The majority of the funding had been used for Iraqi ministry operations, including salaries and other Iraqi budget support. Iraqi oil revenues continued to be deposited into the DFI after June 28, 2004. According to State Department estimates, about $18 billion in oil revenues had been deposited into the DFI since the transition from the CPA to the interim Iraqi government, as of May 31, 2005. The vested assets were former Iraqi regime funds frozen and held in U.S. financial institutions after the first Persian Gulf War and subsequently vested by the President in the U.S. Treasury in March 2003. In addition, assets of the former regime were seized by coalition forces within Iraq. These combined vested and seized assets totaled about $2.65 billion and had largely been obligated and disbursed by the time the CPA transferred authority to the Iraqi Interim Government. The vested and seized assets were used primarily on ministry operations, salaries, and regional programs, such as the Commander’s Emergency Response Program. International donors’ funds have been largely used to support public and essential service reconstruction activities; however, most of donors’ pledges are in the form of loans that have not been accessed by the Iraqis. 
International donors have pledged about $13.6 billion in support of Iraq reconstruction over a 4-year period from 2004 through 2007. Of this amount, about $10 billion, or 70 percent, is in the form of loans, primarily from the World Bank and International Monetary Fund (IMF). Donors have pledged the remaining $3.6 billion as grants, to be provided multilaterally or bilaterally. Of the $10 billion in loans pledged over the 4-year period, about $1 billion was pledged to be provided to Iraq in 2004. As of March 31, 2005, Iraq had accessed $436 million of the available amount. The IMF provided a $436 million emergency post-conflict assistance loan to Iraq in September 2004 to facilitate Iraqi debt relief. According to a State Department official, the Iraqi government is currently in discussions with the World Bank and the government of Japan about lending programs, which total $6.5 billion. Of the $3.6 billion in grants pledged over the 4-year period, about $700 million was pledged to be provided to Iraq in 2004, some of which would be provided multilaterally and some bilaterally. The established mechanism for channeling multilateral assistance to Iraq is the International Reconstruction Fund Facility for Iraq (IRFFI), which is composed of two trust funds, one run by the United Nations Development Group and the other by the World Bank Group. As of March 31, 2005, more than $1 billion had been deposited into these funds; the largest deposits were made by Japan ($491 million), the European Commission ($227 million), and the United Kingdom ($127 million). Of that amount, about $683 million had been obligated and about $167 million had been disbursed to individual projects. Of the $167 million disbursed by the IRFFI, the UN trust fund had disbursed about $155 million for projects in 11 categories, as of March 2005. Currently, the largest portion of UN trust fund disbursements has been made to activities that support the electoral process (about $87 million), education and culture (about $25 million), health (about $13 million), and infrastructure and housing (about $12 million). The remaining disbursements have supported activities in refugee assistance; agriculture, water resources, and the environment; food security; governance and civil society; water and sanitation; poverty reduction and human development; and mine action. Funds for projects are disbursed to participating UN agencies for implementation. The World Bank trust fund has disbursed $12 million for projects that include capacity building, textbooks, school and health rehabilitation, water and sanitation projects, and private sector development. The World Bank is implementing a capacity-building project, and the Iraqi ministries are implementing the remaining projects. Donors have also provided bilateral assistance for Iraq reconstruction activities; however, complete information on this assistance is not readily available. As of April 6, 2005, the State Department had been able to identify about $1.3 billion—of the $13.6 billion pledged—in funding that donors had provided as bilateral grants directly to Iraqi institutions, implementing contractors, and non-governmental organizations for reconstruction projects outside the International Reconstruction Fund Facility for Iraq. As we reported in June 2004, the United States was working with the Iraqis to develop a database for tracking all bilateral commitments made to reconstruction activities in Iraq. 
One year later, this database for tracking all donor assistance projects in Iraq remained under development with assistance from the United States and the UN. In March 2005, the UN gave Iraqi staff of the Ministry of Planning and Development Cooperation a 7-day training session in the use and management of this database. The UN plans to provide technical and management support to the ministry and additional training over the next year. According to a State Department official, the database was planned to be operational in time for the IRFFI Donor Committee meeting in Amman, Jordan, which was held July 18-19, 2005. The U.S. efforts to reconstruct Iraq’s essential services sectors have shown some progress to date yet continue to face significant challenges. Of the approximately $9 billion of appropriated funds the United States had disbursed for reconstruction, as of March 31, 2005, approximately $3.1 billion had been spent on restoring Iraq’s oil, electricity, water, and health sectors. Overall, the U.S. program in these sectors has accomplished activities that focused on essential services restoration, such as refurbishing and repairing oil facilities, increasing electrical generating capacity, restoring water treatment plants, and expanding the availability of basic health care. Initial activities to restart the oil infrastructure have largely been completed; however, activities to sustain production and export levels have been slower than originally planned, and these levels remained below pre-March 2003 conflict capacity as of May 2005. Progress has been made in rehabilitating electric facilities, and generation capacity has been increased. Overall production levels for the electricity sector were lower in May 2005 than before the March 2003 conflict, although power generation exceeded this level for the latter part of June 2005. While the water and sanitation program has made some progress toward completing a reduced scope of activities, this progress has been difficult to measure and some completed projects have not functioned as intended. The U.S. program to expand basic health care has made progress in helping reestablish health services in Iraq, but larger health infrastructure projects remained under way as of May 2005. Implementation of the U.S. reconstruction program in these sectors continues to face challenges, such as security, sustainability, and the measurement of program results. U.S. efforts in the oil sector have focused largely on (1) restoring Iraq’s oil infrastructure to prewar production and export capacity, (2) delivering refined fuels for domestic consumption, and (3) developing oil security and pipeline repair teams. More than $5 billion in U.S. and Iraqi funds has been made available for these efforts. Progress to date on U.S. activities has been slower than planned due to a number of factors, including the security environment and difficulties associated with funding, project prioritization, contractor reporting, the contract management processes, and Iraq’s political transitions. The oil sector faces challenges that include establishing effective infrastructure security forces and pipeline repair teams; addressing issues related to domestic refined fuel supply and consumption; and defining the oil sector’s organizational structure, foreign investment framework, and energy priorities.
Iraq’s economy is highly dependent on revenues from crude oil exports, and its population is dependent on having sufficient refined fuels for power generation, cooking, heating, and transport. According to the State Department, Iraq’s oil export revenues are expected to account for at least 90 percent of Iraq’s projected 2005 budget revenues. This revenue is essential to Iraq’s ability to provide for its own needs, including reconstruction. Iraq’s oil infrastructure is an integrated network that includes oil fields and wells, pipelines, pump stations, refineries, gas/oil separation plants, gas processing plants, and export terminals and ports. This infrastructure has deteriorated significantly over past decades due to war damage, inadequate maintenance, and the limited availability of spare parts, equipment, new technology, and financing. U.S. agency documents estimated Iraq’s 2003 actual prewar crude oil production at 2.6 million barrels per day (bpd) and export levels at 2.1 million bpd. Considerable looting after Operation Iraqi Freedom and continued attacks on crude and refined product pipelines have contributed to Iraq’s reduced oil production and export capacities. About $2.7 billion of U.S. appropriated funds and $2.7 billion in Iraqi funds have been made available for U.S. efforts to support Iraq’s oil sector. These efforts focus largely on (1) restoring Iraq’s oil infrastructure to sustainable prewar crude oil production and export capacity, (2) delivering and distributing refined fuels for domestic consumption, (3) developing oil security and pipeline repair teams, and (4) providing technical assistance for organizing and sustaining Iraq’s oil industry. Specific U.S. activities and projects for the restoration of Iraq’s oil production and export capacity include restoring the Qarmat Ali water reinjection and treatment plant to create and maintain sufficient oil field pressure in the Rumailah oil field; repairing the Al-Fathah oil pipeline crossing; restoring several gas and/or oil separation plants near Kirkuk and Basrah; and repairing natural gas and liquefied petroleum gas plant facilities in southern Iraq. U.S. activities also include the restoration of wells, pump stations, compressor stations, export terminals, and refineries, and providing electrical power to many of these oil facilities. According to agency and contracting officials, the United States provides primarily procurement, engineering, and technical expertise, as well as some construction services, for these projects. Iraqi oil company employees conduct some repair operations and construction. In addition to infrastructure restoration activities, the United States facilitated and oversaw the purchase, delivery, and distribution of refined fuels throughout Iraq, primarily using DFI funds from late May 2003 through August 2004. Used for cooking, heating, personal transportation, and private power generation, these imports were required to supplement domestic production due to increased demand and Iraq’s limited refining capacity. The responsibility for this effort was transferred to Iraq’s State Oil Marketing Organization after August 2004. The United States also assisted in developing an oil security force and pipeline repair teams to respond to looting, sabotage, and sustained attacks, primarily on oil pipelines. Finally, the United States provided technical assistance and support to the Iraqi Ministry of Oil to define Iraq’s operational, legal, policy, and investment frameworks for the industry.
Although some activities to restart Iraq’s oil production and export have been completed, the implementation of the U.S. program to assist in restoring and sustaining Iraq’s crude oil production and export levels to pre-March 2003 capacity has been slower than originally planned. Of the $2.7 billion in appropriated funds for the oil sector, the United States had obligated about $2 billion and disbursed $1.1 billion, as of March 31, 2005. In addition, of the $2.7 billion in Iraqi funds, about $215 million had been spent on these infrastructure restoration efforts. Initial production and export targets were reached in 2003 and early 2004 as U.S. efforts were made to complete assessments and quick repair projects, provide dedicated power, and procure spare parts and equipment. Since November 2004, however, crude oil production and export levels have not been sustained, primarily due to pipeline attacks and a natural decline in production resulting from years of improper reservoir management, according to U.S. and former CPA officials. From December 2004 through May 2005, estimated production and export levels remained relatively constant at about 2.1 million bpd and 1.4 to 1.6 million bpd, respectively. (See fig. 3.) Targets for December 2005 are to reach 2.8 million bpd in production and 1.8 million bpd in exports. Several U.S. government, former CPA, and contractor officials stated that funding uncertainties, project reprioritizations, inadequate contractor reporting, and frequent changes in contract management procedures or processes have impeded progress. In addition, some officials stated that the overall security environment has slowed their ability to obtain or move equipment, materials, and personnel, in some cases delaying project progress. Some officials estimated that a combination of these factors has contributed to delays of 2 to 6 months at different points in the oil sector program’s overall implementation. Some significant projects experienced further delays from late 2004 to early 2005 due to security, technical, or legal problems that, according to agency officials, have resulted in lower crude oil production or exports over the past several months. For example, one significant project to provide water and field pressure maintenance in southern Iraq could not be fully utilized, primarily due to associated infrastructure degradation, thus limiting the facility’s operations and Iraq’s level of crude oil production. In general, most larger scale, higher dollar projects are either under way or scheduled to begin by August 2005, and IRMO officials stated that sector efforts are focused on a defined set of projects that the Ministry of Oil agreed to in November 2004. As of May 2005, U.S. officials and reporting indicated that the overall program is scheduled to be completed by mid- to late-2006. U.S. efforts directly facilitated the CPA’s purchase and delivery of imported gasoline, liquefied petroleum gas, kerosene, and diesel for domestic use in Iraq. About $2.3 billion of the $2.7 billion in Iraqi funds was used to purchase, supply, and distribute these refined fuel products. These efforts required the coordination of significant trucking operations and military convoys to move considerable quantities of fuels and to increase the capacity to offload these fuels at several supply points throughout Iraq.
Although no longer responsible for the purchase and delivery of these refined fuels, U.S. agencies continue to monitor Iraq’s efforts to maintain a 15-day supply of refined fuel stocks. Although estimated national supply levels were low from November 2004 to March 2005, U.S. agency documents report that levels of these products improved and, as of May 2005, only diesel stocks remained significantly below the 15-day supply targets. However, agency reporting also noted distribution problems such as criminal attacks on delivery trucks, sabotage of domestic product pipelines, and black market activity related to the sale of these products. These problems continue to negatively affect the population’s access to these fuels for their daily needs. Of the $2.7 billion of Iraqi funds made available for the oil sector, about $170 million was used to develop oil security and pipeline repair teams. CPA oil security efforts included the establishment of a U.S. task force to manage the training and equipping of an oil security force. This effort began in late 2003 and focused primarily on guarding fixed facilities and, to a lesser extent, patrolling pipelines. The oil security force numbered over 14,000 as of June 2004, according to agency officials; however, in responding to our draft report, State indicated that this force was not staffed, trained, or equipped to patrol pipelines. Because the number and intensity of pipeline attacks increased during the summer and fall of 2004, the overall effectiveness of this force has been difficult to gauge. In responding to our draft report, State indicated that this level of attacks demonstrates the effectiveness of the insurgency in Iraq and the inability of coalition forces to make the security of the oil infrastructure a high priority. According to agency documents, the Ministry of Oil assumed responsibility for these security personnel in December 2004. In a related effort, the CPA established an emergency response organization in early 2004 to rapidly return damaged pipelines to service. The primary contractor was responsible for a certain number of repairs; it was also responsible for training repair crews and providing new tools and techniques to sustain this effort after its August 2004 contract expiration. In July 2004, the U.S. government indicated that the contractor’s performance was unsatisfactory and withheld funds. According to U.S. officials and documents, in August 2004 IRMO mobilized an emergency repair team; in February 2005, the Ministry of Oil mobilized a second emergency repair team; and responsibilities for these efforts were being transitioned to the Iraqis as of June 2005. Iraq’s economy relies on oil revenues to support its budget. In the near term, Iraq is dependent on the completion of several of the U.S. program’s infrastructure projects, whose successful operations are expected to generate revenues to support Iraq’s 2005 budget. In addition to this challenge, the Iraqis face shorter and longer term oil sector challenges that include training, equipping, and funding effective infrastructure security forces and pipeline repair teams; addressing issues related to domestic refined fuel supply and consumption; and defining the oil sector’s organizational structure, foreign investment framework, and energy priorities, among others. Attacks against the oil infrastructure continue and limit Iraq’s ability to export crude oil and distribute refined products domestically. The United States and Iraq have attempted to establish infrastructure security forces as well as emergency response teams to address this issue.
However, difficulties in determining organizational responsibility and funding for such efforts have impeded their completion and contributed to insufficient protection of oil infrastructure, particularly pipelines. According to agency reporting in April 2005, plans were being discussed to provide mobile security for pipelines. In addition, in response to our draft report, DOD told us in July 2005 that the Iraqi government, with Coalition support, is leading an effort to enhance oil infrastructure security. CPA and U.S. officials have emphasized the importance of restoring Iraq’s refinery capacity to increase the supply of refined fuel products for domestic use and to decrease the amount spent on refined product imports. According to a former agency official, replacing existing refineries with modern technology facilities may require $6 to $7 billion over a 10-year period, while fuel imports cost over $2 billion annually. Iraq subsidizes the refined fuels it imports and produces, and the price of these fuels is less than a few cents per liter. U.S. officials have reported that low prices also encourage black market activity such as smuggling or the purchase and resale of refined products, both of which can ultimately result in local distribution shortages and insufficient access to these needed fuels. CPA and U.S. officials have provided assistance to the Iraqis in developing refined fuel pricing reform strategies. Iraq committed to increase the domestic prices of refined products to generate an estimated $1 billion in revenues in 2005, according to IMF and agency documents. However, potentially negative popular reaction may make it difficult for the Iraqis to implement any repricing strategies at this time. Iraq’s framework for managing its oil industry and the use of its energy resources is not yet defined. Decisions by Iraq’s new government may alter how the country runs its oil operations and may also influence the amount and type of capital investment that Iraqis and foreigners are willing to provide. In addition, establishing regulations for resource management and revenue distribution is part of the Iraqi government’s current effort to draft a constitution. Outcomes of these activities will affect Iraq’s overall economic goals and priorities. U.S. efforts in the electricity sector have focused on restoration and construction of Iraq’s electrical system. As of March 31, 2005, about $5.7 billion—about $4.9 billion in appropriated funds and $816 million in Iraqi funds—had been made available to provide electricity services that meet Iraq’s national needs. Some progress was made in restoring Iraq’s electricity infrastructure, reportedly adding about 1,900 megawatts of generating capacity to Iraq’s power grid between March 2003 and May 2005. Iraq’s overall power generation was lower through May 2005 than before the 2003 conflict, although power generation exceeded this level for the latter part of June 2005. The causes of lower overall power generation included planned and unplanned maintenance needs for power stations and fuel shortages. The electricity sector faces a number of challenges to meeting Iraq’s electricity needs, including the lack of appropriate fuel supplies, limited Iraqi operation and maintenance capacity, the unstable security environment, financing needs for distribution projects, and the need for effective management of electricity generation and distribution.
According to senior U.S. agency officials, Iraq’s electricity infrastructure was in worse condition following the 2003 conflict than initially anticipated or reported in the 2003 UN/World Bank needs assessment. The report noted the severe degradation of Iraq’s generating capacity—from about 5,100 megawatts in 1990 to about 2,300 megawatts post-1991 Gulf War—largely due to war damage to generation stations. Although the report noted that production was restored to about 4,500 megawatts before the 2003 conflict, U.S. officials said that Iraq’s electrical infrastructure had experienced significant deterioration due to the war and years of neglect under Saddam’s regime. Spare parts were largely unavailable when UN sanctions were in place between 1991 and 2003. Equipment and facilities had not been maintained and required significant overhauls. In addition, some facilities and transmission lines were damaged by U.S. forces during the 1991 Gulf War or by the looting and vandalism of facilities following the 2003 conflict. About $4.9 billion in appropriated and $816 million in Iraqi funds from the DFI have been made available for U.S. reconstruction efforts in the electricity sector. These efforts focus on restoring or constructing generation, transmission, distribution, and automated monitoring and control systems in Iraq’s electrical system. Other projects have included capacity building and training security forces to protect the electrical infrastructure. According to agency documentation, the majority of financial assistance in this sector has focused on generation projects, such as rehabilitating and repairing existing equipment or procuring and installing new turbines and generators. Transmission projects, such as erecting transmission towers and stringing transmission lines, have been another significant focus. Although some progress has been made in rehabilitating many Iraqi electric facilities as of May 2005, electricity production in Iraq was lower than before the March 2003 conflict. However, power generation exceeded this level for the latter part of June 2005. Of the $4.9 billion appropriated as of March 31, 2005, the United States had obligated $3.7 billion and disbursed $1.7 billion, mostly for generation projects to repair existing equipment or procure new turbines and generators for power plants. In addition, of the $816 million in Iraqi funds authorized for U.S. activities in the electricity sector, about $758 million had been disbursed as of March 31, 2005. Two key targets of the U.S. reconstruction effort are increasing total generating capacity and daily megawatt-hours of electricity produced. The first key target is to increase Iraq’s total generating capacity by 3,100 megawatts by June 2005. As of May 2005, U.S.-funded projects reportedly had added or restored about 1,900 megawatts of generating capacity to Iraq’s power grid. However, U.S. program and contracting officials have raised concerns about the ability of the Ministry of Electricity and local power plant operators to sustain the added generation capacity. The other key target has been to help Iraq produce 120,000 megawatt-hours of electricity per day by June 2005. In May 2005, agency reports showed that this target had been revised to producing 110,000 megawatt-hours per day by December 2005. As shown in figure 4, Iraq produced more than 100,000 megawatt-hours of electricity most days between July and November 2004; however, production dropped below prewar production levels through May 2005, varying between 51,000 and 99,800 megawatt-hours daily.
Agency reports attribute the decreased production figures to several causes, including planned and unplanned maintenance on power stations, fuel shortages due to insurgent attacks on oil pipelines that provide fuel to the power plants, and limited supply of fuels allocated by the Ministry of Oil. In commenting on our draft report, State noted that planned outages are necessary operational procedures to ensure reliable and sustainable operations at the plants and that the central reason for high unplanned outages is that Ministry of Electricity workers do not yet have the necessary skills to ensure adequate operations and maintenance practices. As of June 2005, Iraq’s electricity production was increasing to meet greater summer demand and exceeded 100,000 megawatt-hours in the latter half of the month. U.S. officials attributed the increased production to (1) power plants that were returned to service after maintenance was completed, (2) imported power and fuel supply from neighboring countries, and (3) activation of U.S.-funded power projects. The electricity sector faces a number of challenges to meeting Iraq’s electricity needs. These challenges include the lack of appropriate fuel supplies, the Iraqis’ lack of capacity in operation and maintenance, the unstable security environment, financing needs for distribution projects, and ineffective management of electricity generation and distribution. Iraq’s limited accessible supply of natural gas and diesel fuel affects the operation of the new gas combustion turbines provided by the United States and continues to affect the operations and production capacity of Iraq’s electrical power plants. The United States purchased and installed gas combustion turbines to operate several Iraqi power plants, including Bayji and Qudas. These turbines were readily available for purchase, could be installed in less than 1 year, and could also be modified to burn oil-based fuels, although with some negative effect on the turbines’ efficiency and operation. Although Iraqi power plants have largely relied on steam turbines that use crude oil or oil-derived fuels, these turbines are less readily available for purchase on the world market and require a longer installation time. Due to limited access to natural gas, some gas combustion turbines at Iraqi power plants are operating on low-grade, oil-based fuels. The use of liquid fuels, without adequate equipment modification and fuel treatment, decreases the power output of the turbines by up to 50 percent, requires three times more maintenance, and could result in equipment failure and damage that significantly reduces the life of the equipment, according to U.S. and Iraqi power plant officials. U.S. agencies report they have incorporated operations and maintenance training into the reconstruction program. However, the Iraqis’ capacity to operate and maintain the power plant infrastructure and equipment provided by the United States remains a challenge. Contractors cited several instances where the Iraqis had significant problems operating and maintaining projects after they were transferred to the government. For example, in December 2004, the Iraqis’ inability to operate a recently overhauled plant at Bayji led to a widespread power outage. U.S. officials said that contractors installed the equipment and provided the Iraqis onsite training in operating the new or refurbished equipment. 
However, Iraqi power plant officials from 13 locations throughout Iraq, including Bayji, indicated that the training did not adequately prepare their staff to operate and maintain the new gas turbine engines. U.S. officials have acknowledged that more needs to be done to train plant operators and ensure that advisory services are provided after the turnover date of the projects. To address this issue, in February 2005, USAID implemented a project to train selected electricity plant officials (plant managers, supervisors, and equipment operators) in various aspects of plant operations and maintenance. According to DOD, PCO also has awarded one contract and is developing another to address operations and maintenance concerns. A June 29, 2005, USAID Inspector General report stated that until the operations and maintenance challenges are addressed at both the Iraqi power plant and ministry levels and practices at the power plants are significantly improved, reports of damaged equipment and infrastructure will continue and the electrical infrastructure rebuilt and refurbished by USAID’s program will remain at risk of sustaining damage following its transfer to the Ministry of Electricity. In comments on our draft report, State said that there has not been enough focus on strengthening operations and maintenance capacity and that such strengthening had not been a U.S. government priority in the early phases of the reconstruction effort. Providing security for power plants, transmission lines, and distribution stations is another key challenge to electricity reconstruction projects and to meeting Iraq’s electricity needs. According to U.S. agency officials and contractors, insurgent attacks on people and infrastructure have increased project costs and caused scheduling delays. Our analyses of five U.S.-funded electricity sector contracts indicate that security costs to obtain private security services and security-related equipment as of December 31, 2004, ranged from 10 to 36 percent of project costs. In March 2004, the United States awarded a $19 million contract to train and equip Iraq’s Electrical Power Security Service to protect electrical infrastructure, including power plants, transmission lines, and Ministry of Electricity officials. Although the program was designed to train 6,000 guards over a 2-year period, fewer than 340 guards had been trained when the contract was terminated early. According to agency reporting in April 2005, current plans are for the Iraqi Ministry of Defense to provide mobile security for linear assets such as transmission lines and pipelines. The Iraqi electricity sector will require additional financial assistance to restore its infrastructure to meet the national needs. The Ministry of Electricity estimates that Iraq needs about $20 billion to restore its electricity sector, including over $3 billion to update the distribution network system, which provides electricity from the distribution station to the end user. The activities of the U.S. assistance program have focused on generation, transmission, and distribution projects to improve the electricity sector and have provided about $100 million to address the provision of power from the distribution station to the end user. Effective management of electricity generation, transmission, and distribution is affected by illegal connections to existing power lines and the lack of metering. 
According to industry officials, the inability of system operators to balance the amount of electric generation with consumer demand can cause severe failures in both equipment and service, as evidenced in January 2005 when the national grid collapsed following an electrical circuit imbalance near Bayji. Further, limited and inaccurate metering in Iraqi homes precludes the Ministry of Electricity from measuring the amount of electricity that end users consume. Experts indicate that the demand for electricity has increased dramatically since UN sanctions were removed in 2003 and estimate that the demand for electricity will exceed 8,500 megawatts this summer. In commenting on our draft report, State stated that the demand had passed 8,500 megawatts and may reach 9,000 megawatts. U.S. reconstruction efforts in the water and sanitation sector focus on improving Iraq’s potable water, sewage, and sanitation systems. State reallocations have reduced available U.S. funding for improving Iraq’s severely degraded water and sanitation sector from a peak of $4.6 billion to a current level of $2.4 billion. The United States has made some progress in completing large and small water and sanitation projects, but it is difficult to determine the impact of its reconstruction effort on this sector due to limited performance data and measures. The U.S. reconstruction program has also suffered from delays in completing projects, and some completed projects lack sufficient Iraqi staff and supplies to function properly or are not operating at all due to a lack of electricity and diesel fuel. Water and sanitation services in Iraq deteriorated significantly after the 1991 Gulf War due to the lack of maintenance, inadequate skilled manpower, and war damage. In 2003, postwar looting destroyed equipment and materials needed to operate treatment and sewerage facilities. Before the 1991 Gulf War, Iraq produced enough water to supply more than 95 percent of urban Iraqis and 75 percent of rural Iraqis, according to the 2003 UN/World Bank needs assessment. Actual access was much lower due to significant losses from leaks in the delivery network. By 2003, these production levels had fallen to 60 percent of urban Iraqis and 50 percent of rural Iraqis. According to the same assessment, the sewage system primarily served Baghdad, where it reached about 80 percent of the population. However, according to the report, the sewage system was inadequate for moving and processing waste, leading to backups of raw sewage in the streets, and treatment plants were not operational. Less than 10 percent of the urban population outside Baghdad was served by sewage systems. The rural areas and northern Iraq—including the cities of Kirkuk and Erbil—had no access to piped sewage systems. According to the UN/World Bank report, some of these areas had access to pour-flush latrines. U.S. reconstruction efforts in the water and sanitation sector focus on projects to improve Iraq’s potable water, sewage, and sanitation systems. Specific activities funded by the U.S. reconstruction program include repairing water and sewage treatment plants, rehabilitating dam facilities, and conducting irrigation projects. Work has been implemented through a combination of longer-term, large-scale projects and quick-impact, smaller-scale projects. Agencies are executing most of their largest efforts through five large contracts with three U.S. companies. 
These efforts include rehabilitation of water and sewage treatment plants, dams, pump stations, and irrigation canals, as well as repairs of sewer lines and drinking water canals. Smaller-scale projects include neighborhood cleanups, water supply improvements, and the rehabilitation of smaller-scale sewage systems and water treatment plants. The U.S. reconstruction program in Iraq’s water and sanitation sector has made some progress toward completing a reduced scope of activities. As of April 5, 2005, the State Department had reallocated funding for water and sanitation to other priorities such as security, thus reducing available funding by 48 percent to about $2.4 billion. As of the end of March 2005, U.S. agencies had obligated about $1.2 billion, or 50 percent, and disbursed about $280 million, or 12 percent, of the U.S. funding to specific projects for the sector. USAID’s accomplishments included the repair of six sewage treatment plants, two water treatment plants, and a primary urban water supply in southern Iraq. As of April 3, 2005, State reported that 64 projects were complete and 185 were in progress. However, State was unable to provide a list of those completed projects, which would have enabled us to evaluate the significance of the project numbers in terms of scope of work, cost, or size. The United States has also funded a number of smaller-scale, quick-impact projects. The primary goals of these quick-impact projects have been to meet pressing local needs and provide employment for the Iraqi people. Although they are designed to show impact more quickly, in some cases small-scale projects do not have the potential long-term effect of the larger projects. Reduced funding and increased costs have limited the work done in the water and sanitation sector. As of March 2005, PCO had begun 52 projects. Although PCO initially planned to execute 137 projects with fiscal year 2004 appropriated funds, the full list of 137 projects will not be completed using appropriated funds given the funding reallocations and State’s focus on completing projects under way and sustaining completed projects. The reduction in the number of planned projects is the result of a more than $2 billion decrease in program funding and underestimates of the cost of doing business in Iraq. According to PCO, the initial CPA cost estimates for completing projects in Iraq were too low. Increased security requirements, inflation in the cost of construction materials and labor, and the unexpectedly poor condition of Iraqi facilities have all contributed to increases in project cost. In commenting on the draft of this report, the U.S. Agency for International Development (USAID) disagreed with our statement that agency metrics for tracking water projects do not show how the U.S. program affects the Iraqi people. USAID stated that the agency tracks increases in the amount of water treated and estimates increases in beneficiary numbers. However, these metrics do not address the quality of water and sanitation services in Iraq, which may hinder the U.S. ability to gauge progress toward its goal of providing essential services. The effect of U.S. water and sanitation sector reconstruction is difficult to quantify, and metrics used by U.S. agencies to track progress do not provide a complete picture of results. The program has encountered delays in execution due to security conditions and other factors, and completed projects are at risk of failing due to lack of needed staff and supplies after transfer to the Iraqis. 
Iraq has no comprehensive metering of water usage. Without metering, the ministries lack information on the amount of water consumed or lost. U.S. officials estimate that approximately 60 percent of water produced in Iraq is unaccounted for—lost to illegal taps, unmetered usage, and leaking water pipes. Because of water losses and the lack of metering, the extent to which clean potable water from improved facilities is reaching users is unknown. Agency metrics for tracking progress in the water and sanitation sector do not show how the U.S. program is affecting the Iraqi people. PCO and State have developed metrics to track the progress of the U.S. water and sanitation reconstruction program in terms of projects completed, treatment capacity, and agricultural area irrigated. While these measures provide some insights on progress, they do not track the contribution of projects toward the overall objective of providing essential services or measure increased access to clean water and improved sanitation in Iraq, as this data from the end user is difficult to gather. In commenting on our draft report, USAID said that the agency tracks increases in the amount of water treated and estimates increases in beneficiary numbers. However, these metrics do not address the quality of water and sanitation services in Iraq, which may hinder the U.S. ability to gauge progress toward its goal of providing essential services. For example, because of problems with the distribution network, water that is potable at the treatment plants may be contaminated by the time it reaches users. According to a senior PCO official in the water sector, potable water and sewage mains in Iraq are sometimes adjacent to each other, allowing leaking sewage to enter the water mains. In response to our draft report, State also noted that there are significant difficulties in accurately measuring water quantity and water quality delivered to Iraqi households and that the measurement of access to potable water and improved sanitation is generally done through the use of surveys. However, State commented that the department has elected not to reallocate funding away from projects to conduct regular surveys on essential services. The U.S. effort to rehabilitate Iraq’s water and sanitation sector has faced challenges from the insurgency, coordination and management difficulties, and poor onsite conditions. Contractor and agency reporting cite numerous instances of project delays due to unsafe conditions. PCO has estimated that deteriorating security has added an average of about 7 percent to project costs in the water and sanitation sector. Contractors and agency officials also cited difficulties in defining project scope and coordinating with Iraqi ministries as further impeding progress. For example, Iraqi ministry and local officials disagreed on the proper scope of one project, and PCO’s resolution of the issue was delayed by security conditions limiting its ability to meet with Iraqi officials. Unusable project sites and the unexpectedly poor condition of Iraqi facilities have also contributed to delays and increased costs. USAID abandoned one landfill project, projected to cost $20 million if completed, because the Iraqi government provided an unusable site. Contractors arriving in the field also found unanticipated conditions, such as sewer blockages and treatment equipment that required repair. Both USAID and PCO have incorporated employee and management training efforts into their reconstruction programs. 
However, the projects completed by USAID and PCO have encountered significant problems in facility operations and maintenance after project handover to Iraqi management. Iraqis lacked adequate resources and personnel to operate these facilities in the long term. To address these issues, in April 2005 State reallocated $25 million for a USAID pilot project to provide continuing operations, maintenance, and supply acquisition training and support at selected sites after handover. PCO has also developed a risk assessment process designed to anticipate potential sustainability issues by evaluating various factors that contribute to the successful transition of projects to the Iraqis. U.S. reconstruction efforts in the health sector focus on restoring and expanding the availability of basic health care in the country. The United States has provided about $866 million in appropriated funds for health activities to reestablish, restore, and expand the availability of health care in Iraq. The majority of this funding—about $750 million—is focused on infrastructure projects and medical equipment supplies; the remainder provides for medical staff training and management training for the Ministry of Health. While U.S. agencies have completed initial activities to reestablish Iraqi health services, larger infrastructure, equipment, and training projects to restore and expand the availability of basic health care are still under way. The Iraqi health sector faces a number of challenges in providing basic and preventive health services, including procurement and delivery of medical equipment and supplies and measuring program results. At the same time, long-term technical assistance will be required to build the management and infrastructure capacity needed to provide access to a quality health care system over time. More than 30 years ago, Iraq was a regional leader in health care, but years of neglect and mismanagement under Saddam’s regime left the Iraqi health system in a deteriorated state and a segment of the Iraqi population and the poor with little or no health care. The 2003 UN/World Bank needs assessment described the Iraqi health care system as inefficient and inequitable, noting that health care facilities and equipment were in poor condition. The Iraqi health system was a hospital-oriented model that did not emphasize sustainable health development; care was centralized in urban areas and services only partially matched the needs of the population. The 2003 UN/World Bank needs assessment further noted that the health system did not provide equitable access to basic health services; lacked cost-effective public health interventions; required large-scale imports of medicines, medical equipment, and health workers; and collected little health service data. The 2003 assessment determined that basic health care services needed to be restored and that the system needed to be transformed into a national health care system based on primary care, that provides health services reflecting population needs and priorities with a focus on prevention and treatment. According to the 2003 UN/World Bank needs assessment, Iraqi health care spending during the 1990s had fallen by as much as 90 percent and Iraq’s health outcomes were among the poorest in the region—well below the levels found in comparable income countries. 
Infant, child, and maternal mortality rates more than doubled from 1990 to 1996, with 65 percent of births occurring outside of health institutions; adult mortality increased, and life expectancy fell to 60 years of age. Widespread looting after Operation Iraqi Freedom, the subsequent unpredictability of electricity and the water supply, and attacks by insurgents further weakened the functional capacity of Iraqi health care services. According to the Iraqi Ministry of Health, about one-third of primary care clinics, more than 12 percent of hospitals, 30 percent of family planning clinics, and 15 percent of child care clinics were looted or damaged or both; two main public health laboratories were destroyed; and four of seven central warehouses for storage of drugs and supplies were partially looted and their vaccine supply was lost. The U.S. program for the Iraqi health sector is primarily focused on restoring and expanding the availability of basic health care, including maternal and child health care, to the majority of the population. Activities funded by the U.S. reconstruction program (1) address medical facility needs to support an evolving health care model for equitable access to basic health care; (2) provide medical equipment and training of medical staff; and (3) provide training to strengthen management by the Ministry of Health. The majority of U.S. financial assistance in this sector—over 80 percent—is focused on rehabilitating and constructing hospitals and health care centers and supplying medical equipment for hospitals and clinics. The remainder of this assistance provides for the training of medical staff and capacity building within the Ministry of Health, including management training for infectious disease control, national health policy reform, and decentralization of health care activities at the local, governorate, and ministry levels. U.S. activities in the Iraqi health sector fall into four key areas: health phase I ($80 million), nationwide hospital and clinic improvements ($439 million), equipment procurement and modernization training ($297 million), and the construction of the Basrah Pediatric Facility ($50 million). The United States has made some progress in its effort to restore and expand the availability of basic health care in Iraq; however, the majority of large-scale infrastructure projects remain under construction. As of March 31, 2005, U.S. agencies had obligated $533 million and disbursed $116 million of the $866 million allocated for health activities in Iraq. According to agency reporting, initial activities to reestablish Iraqi health services have been largely completed, including the vaccination of 70 percent of eligible Iraqi children—about 5 million children against measles, mumps, and rubella and 3 million children against polio; the rehabilitation of 110 health clinics; the training of about 700 health care trainers; and the procurement of medical equipment kits for 600 health centers. However, due to the security environment and procurement delays, 37 of 600 medical equipment kits had not been delivered as of May 20, 2005, according to U.S. officials. Further efforts to improve hospitals and clinics, procure equipment, and provide training are under way. For example, according to IRMO reporting, as of April 6, 2005, of the planned renovations for 20 hospitals and new construction for 1 hospital, the United States had started planned renovations on the 20 hospitals and begun construction of the Basrah Pediatric Facility. 
According to agency documentation, the execution phase of these health projects took longer than expected to complete due to the complex designs for health care facilities, long lead times for medical equipment manufacturing and delivery, construction delays due to land ownership issues, the poor quality of sites, and security issues related to the contractors and the delivery of construction supplies. In addition, according to U.S. officials, the training program for the medical staff for the new primary health clinics was expected to begin in June 2005. Iraq’s health sector needs long-term financial support for its health care system. In addition, the U.S. program to restore and expand the availability of basic health care faces challenges in the procurement and delivery of medical equipment and supplies and in measuring program results. According to the UN/World Bank assessments, Iraqi and agency documents, and U.S. officials, the Iraqi health sector will require continued long-term financial assistance to restore and strengthen its health system to modern-day medical levels; support infrastructure maintenance and medical supply requirements; and support management operations—assistance that is not available in the U.S. program or through the international community. The activities of the U.S. assistance program—largely focused on improving the physical infrastructure of the health system—are likely to have a longer-term impact on the health sector; however, the impact of these infrastructure improvements is not likely to be visible until construction is complete, new equipment is in service, and the management capacity of the Iraqi health ministry has been strengthened. U.S. officials acknowledge that additional resources will be needed over the next 3 to 5 years for Iraq to address health services and strengthen the delivery of primary health care services, although the continuation of such activities is not an element of the U.S. program in Iraq at this time. The U.S. program to provide medical equipment and supplies to hospitals and health clinics across Iraq is an important element in strengthening Iraqi health service delivery. Delays in the delivery of U.S.-provided equipment may affect the Iraqis’ ability to provide primary health care. For example, the completed delivery of USAID-funded health kits, coupled with primary health care provider training, is expected to result in an increase in the capability of primary health care providers to deliver care to the Iraqi population. Although the equipment items for these health kits were received by May 2004, the delivery of these kits to Iraqi health clinics was still incomplete as of May 2005. Agency documents and officials indicated several reasons why medical equipment had not been delivered, including long lead times for medical equipment manufacturing and delivery, the security environment, the timing of equipment delivery with the completion of infrastructure construction, and the need to obtain agreement on equipment lists from the Ministry of Health. To address the Ministry of Health’s limited capacity to accept, store, and distribute large shipments of supplies and equipment, the PCO has developed a revised distribution plan, according to a U.S. official. Further, as of May 2005, the construction plans for 150 primary health clinics did not have an identified procurement plan for backup power generators, furniture, consumable supplies, incinerators, or a security perimeter. According to a U.S. 
official, without full power supply—by generators or from the power grid—these clinics will be able to provide only the most basic services and limited or no maternal and/or pediatric services. In response to our draft, DOD told us that they plan to build 142 primary health clinics supplied with generators, furniture, and three months of consumables. IRMO has developed metrics to track the progress of the U.S. health reconstruction program in Iraq. Limitations to the available metrics and data make it difficult to assess the outcome of U.S. activities in the health sector. For example, IRMO’s measurements of progress track the completion of facilities, which is an indicator of increased access to health care. However, the measures available do not indicate how well these facilities are equipped or staffed to provide primary health care services. The measures used by IRMO do not relate the progress of U.S. projects to the overall effort of improving the quality and access of health care in Iraq. The United States, along with its coalition partners and various international organizations and donors, has undertaken a challenging and costly effort to stabilize and rebuild Iraq. Over the past 2 years, the United States, coalition partners, and, more recently the Iraqis have undertaken and accomplished numerous activities to stabilize and rebuild Iraq, including efforts to help restore basic essential and social services. This enormous effort has been undertaken in an unstable security environment, and is concurrent with the institutional development of Iraqis to govern and secure the country. As we reported in June 2004, these challenges continue to affect the pace and cost of reconstruction. A key challenge to the success of the rebuilding effort will be the Iraqis’ ability to sustain the rehabilitated and new infrastructure and to address continuing maintenance and basic service needs. U.S. reconstruction efforts include requirements to build operational and ministerial capacity to sustain this infrastructure. As U.S. activities that have already started reach completion by the end of the year, the options and plans developed and actions taken to address this challenge will be critical to the success of the U.S. reconstruction program and the overall reconstruction effort in Iraq. We provided drafts of this report to the Departments of Defense and State and the U.S. Agency for International Development. The Departments of Defense and State did not provide written comments; however, they provided technical comments, which we incorporated where appropriate. The U.S. Agency for International Development provided written comments, which are reprinted in appendix II. In particular, in response to our statement that agency metrics for tracking water projects do not show how the U.S. program is affecting the Iraqi people, USAID stated that the agency tracks increases in the amount of water treated and estimates increases in beneficiary numbers. However, these metrics do not address the quality of water and sanitation services in Iraq, which may hinder the U.S. ability to gauge progress toward its goal of providing essential services. For example, because of problems with the distribution network, water that is potable at the treatment plants may be contaminated by the time it reaches users. USAID also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to interested congressional committees. 
We will also make copies available to others on request. In addition, this report is available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In monitoring resources supporting the reconstruction of Iraq, we focused on the sources and uses of U.S., Iraqi, and international funding. U.S. agencies provided us with electronic data files for appropriated funds, the Development Fund for Iraq (DFI), vested assets, and seized assets. These files generally included objective or project descriptions with allocated, obligated, and disbursed amounts. We assigned each of the funding line items to broad categories based on the descriptive information available in the data files. To assign the data to a category, we relied on project descriptions from agency data files. In addressing the amount of U.S. funds that have been appropriated, obligated, and disbursed for the Iraq reconstruction effort, we collected funding information from the Department of Defense (DOD), including the Project and Contracting Office (PCO), the U.S. Army Corps of Engineers (USACE), and others; Department of State; the Department of the Treasury; U.S. Agency for International Development (USAID); and the Coalition Provisional Authority (CPA). Data for U.S. appropriated funds are as of March 31, 2005. We also reviewed Defense Contract Audit Agency reports, U.S. agency inspector generals’ reports, Special Inspector General for Iraq Reconstruction (SIGIR) reports, other audit agency reports, and Office of Management and Budget (OMB) documents. Although we have not audited the funding data and are not expressing our opinion on them, we discussed the sources and limitations of the data with the appropriate officials and checked them, when possible, with other information sources. We determined that the data were sufficiently reliable for broad comparisons in the aggregate and the category descriptions we have made. To identify sources and uses of DFI funds, vested assets, and seized assets, we relied on funding data from the CPA and DOD through June 28, 2004. To determine the reliability of these data, we examined the financial files and interviewed CPA officials responsible for the data. Based on these evaluations, we determined the data are sufficiently reliable to describe the major deposits to the DFI and the allocations and disbursements by major categories. We did not audit these data and are not expressing our opinion on them. After June 28, 2004, the stewardship of the DFI was turned over to the Iraqi Interim Government. We continued to obtain data from DOD regarding DFI funds obligated before June 28, 2004, and vested and seized funds balances. To address international assistance for rebuilding Iraq, we collected and analyzed information provided by the State Department’s Bureau of Economic and Business Affairs. We also collected and reviewed reporting documents from the International Reconstruction Fund Facility for Iraq (IRFFI). To describe the activities of international donors, we reviewed documents pertaining to the international donor conferences and the IRFFI and interviewed U.S. officials. 
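The scope and methodology above notes that funding line items were assigned to broad categories based on the project descriptions in agency data files. A minimal sketch of that kind of description-based categorization is shown below; it is illustrative only—the categories, keywords, and sample line items are hypothetical assumptions rather than the agency data files used for this review, and the actual assignments also relied on analyst judgment rather than automated keyword matching.

```python
# Illustrative only: hypothetical categories, keywords, and line-item descriptions.
CATEGORY_KEYWORDS = {
    "oil": ["oil", "refinery", "pipeline"],
    "electricity": ["power", "electricity", "generation", "transmission"],
    "water and sanitation": ["water", "sewage", "sanitation"],
    "health": ["hospital", "clinic", "health"],
    "security": ["security", "police", "guard"],
}

def assign_category(description: str) -> str:
    """Return the first category whose keywords appear in the line-item description."""
    text = description.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

line_items = [
    "Rehabilitate Bayji power plant gas turbines",
    "Repair sewage treatment plant in Baghdad",
    "Procure medical equipment kits for primary health clinics",
]

for item in line_items:
    print(f"{item} -> {assign_category(item)}")
```

A scheme like this only approximates the manual review described above, since a single description can mention more than one sector; in that case the order of the categories determines the assignment.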
To assess the reliability of the data on the pledges, commitments, and deposits made by international donors, we interviewed officials at State who are responsible for monitoring data provided by the IRFFI and donor nations. We determined that the data on donor commitments and deposits made to the IRFFI were sufficiently reliable for the purposes of reporting at the aggregate level. For the U.S. reconstruction program, we focused our effort on U.S. activities in the Iraqi oil, electricity, water, and health sectors. Specifically, we focused on the condition of the sectors, the status of the U.S. effort in these sectors, and the challenges affecting overall sector progress. To determine the condition of the sectors, we reviewed assessments made by the United Nations and World Bank, USAID, CPA, and contractors. We also discussed sector conditions with cognizant U.S. agency officials, contractors, and Iraqi officials. To determine the status of the U.S. effort in the oil, electricity, water, and health sectors, we reviewed documents obtained from the United Nations, World Bank, CPA, State’s Iraq Reconstruction Management Office (IRMO), the PCO, USAID, the USACE, agency contractors, and selected Iraqi ministries. We reviewed reports and planning documents prepared by USACE, USAID, CPA, State, PCO, and contractors. We also interviewed U.S. government and former CPA officials and contract personnel in the United States and Iraq and participated in videoconferences between USACE headquarters and Baghdad personnel. Specifically, we interviewed USAID, State, PCO, USACE, and former CPA officials, in Washington, D.C. and Iraq and their contractor representatives in the United States and Iraq. To determine the challenges affecting sector progress, we reviewed contractor and agency reporting and interviewed agency officials in the United States and Iraq. Specifically, we reviewed CPA, PCO, State, USAID, the USACE, and other reporting. We also interviewed agency officials in Washington, D.C. and Iraq from USAID, State, PCO, USACE, Defense Intelligence Agency, and former CPA officials; their contractor representatives in the United States and Iraq; and Iraqi representatives from the Ministry of Electricity, including Iraqi plant operators. To assess the reliability of the data in the oil, power, water, and health sectors, we interviewed officials at CPA, DOD, State, and USAID responsible for gathering and monitoring data on reconstruction efforts. We reviewed the data for discrepancies and checked them against other sources, when available. We determined that the data were sufficiently reliable to report general trends in each sector. Data obtained on crude oil production and refined fuels inventories are based on Iraqi estimates provided to State. Data on exports are based on U.S. agency estimates related to daily export activities at terminals. Data on revenue are based on U.S. agency estimates that use internationally recognized financial sources for pricing calculations, such as Bloomberg and Platts. According to State, the information that it periodically reports on production, export, and revenue represents analysis based on the best available information. Data obtained on daily electricity produced are from Iraqi, USAID, or DOD estimates provided to State. We conducted this part of our review from September 2004 through May 2005 in accordance with generally accepted government auditing standards. 
Although we did not travel to Iraq to make project site visits during this period due to security concerns, we interviewed U.S. officials via teleconference and videoconference. In addition, when possible, we interviewed Iraqi officials when they traveled to the United States. Joseph A. Christoff, (202) 512-8979. Key contributors to this report include Monica Brym, Lynn Cothern, Aniruddha Dasgupta, Muriel Forster, Charles D. Groves, B. Patrick Hickey, John Hutton, Sarah J. Lynch, Jodi Prosser, Michael Simon, and Audrey Solis. Martin de Alteriis, Sharron Candon, Patrick Dickriede, Philip Farah, Hynek Kalkus, Mary Moutsos, Nanette Ryen, Josie Sigl, and George Taylor provided technical assistance. | Rebuilding Iraq is a U.S. national security and foreign policy priority and constitutes the largest U.S. assistance program since World War II. Billions of dollars in grants, loans, assets, and revenues from various sources have been made available or pledged to the reconstruction of Iraq. The United States, along with its coalition partners and various international organizations and donors, has embarked on a significant effort to rebuild Iraq following multiple wars and decades of neglect by the former regime. The U.S. effort to restore Iraq's basic infrastructure and essential services is important to attaining U.S. military and political objectives in Iraq and helping Iraq achieve democracy and freedom. This report provides information on (1) the funding applied to the reconstruction effort and (2) U.S. activities and progress made in the oil, power, water, and health sectors and key challenges that these sectors face. As of March 2005, the United States, Iraq, and international donors had pledged or made available more than $60 billion for security, governance, and reconstruction efforts in Iraq. The United States provided about $24 billion (for fiscal years 2003 through 2005) largely for security and reconstruction activities. Of this amount, about $18 billion had been obligated and about $9 billion disbursed. The State Department has reported that since July 2004, about $4.7 billion of $18.4 billion in fiscal year 2004 funding has been realigned from large electricity and water projects to security, economic development, and smaller immediate impact projects. From May 2003 through June 2004, the Coalition Provisional Authority (CPA) controlled $23 billion in Iraqi revenues and assets, which was used primarily to fund the operations of the Iraqi government. The CPA allocated a smaller portion of these funds--about $7 billion--for relief and reconstruction projects. Finally, international donors pledged $13.6 billion over 4 years (2004 through 2007) for reconstruction activities, about $10 billion in the form of loans and $3.6 billion in the form of grants. Iraq had accessed $436 million of the available loans as of March 2005. As of the same date, donors had deposited more than $1 billion into funds for multilateral grant assistance, which disbursed about $167 million for the Iraqi elections and other activities, such as education and health projects. The U.S. reconstruction effort in Iraq has undertaken many activities in the oil, power, water, and health sectors and has made some progress, although multiple challenges confront each sector. The U.S. 
has completed projects in Iraq that have helped to restore basic services, such as rehabilitating oil wells and refineries, increasing electrical generation capacity, restoring water treatment plants, and reestablishing Iraqi basic health care services. However, as of May 2005, Iraq's crude oil production and overall power generation were lower than before the 2003 conflict, although power levels have increased recently; some completed water projects were not functioning as intended; and construction at hospitals and clinics is under way. Reconstruction efforts continue to face challenges such as rebuilding in an insecure environment, ensuring the sustainability of completed projects, and measuring program results. |
The 100 veterans included in our review received a full mental health evaluation an average of 4 days after the date they preferred to be seen (known as the preferred date). The full mental health evaluation is the primary entry point to mental health care. At the five VAMCs we visited, the average time in which a veteran received this full evaluation ranged from 0 to 9 days from the preferred date. However, we identified conflicting VHA policies regarding how long it should take a new veteran to receive a full mental health evaluation: (1) a 14-day policy established by VHA’s Uniform Handbook for Mental Health Services, and (2) a 30-day policy set by VHA in response to the Choice Act. To date, VHA has not provided guidance on which policy should be followed, which is inconsistent with federal internal control standards that call for management to clearly document, through management directives or administrative policies, significant events or activities, such as ensuring timely access to mental health care, to help ensure management directives are carried out properly. A number of VHA officials, including VISN and VAMC officials, told us they do not know which policy they are currently expected to meet, which makes it difficult for them to ensure timely access to care in light of increasing demand for mental health care. As a result, we recommended VHA issue clarifying guidance on the access standard for new veterans seeking mental health care. VA concurred with this recommendation, stating that it is in the process of revising the relevant policy in the Uniform Handbook to be consistent with the 30-day wait time goal established in response to the Choice Act. VHA stated that it is targeting issuance of the revised policy and clarifying guidance for March 2016. Further, although the average times between veterans’ preferred dates and their full mental health evaluations in our review were generally within several days, those times did not always reflect how long veterans may have actually waited for mental health care. Because VHA uses a veteran’s preferred date as the basis for its wait-time calculations, rather than the date that the veteran initially requests or is referred for mental health care, these calculations reflect only a portion of a veteran’s overall wait time. While some of the delay in care may be attributed to a veteran not wanting to start care immediately, we also found that some delays occurred because a facility did not adequately handle a referral or request for mental health care. In our review of 100 veteran records, we found that significant delays can occur if the referral or request for an appointment is not processed correctly or in a timely manner. For example, one veteran in our review waited 174 days between the initial referral for mental health care and the veteran’s preferred date due to a referral not being appropriately managed. The veteran’s primary care provider was to have placed a referral to psychology in March 2014, but our review of the medical record found no evidence of the referral ever being placed. Nonetheless, the veteran’s primary care provider alerted a VHA psychologist, who reached out to the patient by phone in March 2014 but did not leave a message. No VAMC mental health provider reached out again until September 2014, after the veteran’s primary care provider made a referral (this time appropriately requested). The veteran was then able to schedule a full mental health evaluation approximately 1 week later. 
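The distinction between the reported wait time (measured from the preferred date) and a veteran's overall wait (measured from the initial request or referral) can be illustrated with a short calculation. The following sketch is illustrative only: the records, dates, and field names are hypothetical assumptions, not VHA data, and the logic simply computes the two measures side by side.

```python
from datetime import date

# Hypothetical veteran records; dates are illustrative assumptions, not VHA data.
# Each record holds the date care was first requested or referred, the veteran's
# preferred date, and the date the full mental health evaluation occurred.
records = [
    {"referral": date(2014, 3, 10), "preferred": date(2014, 8, 31), "evaluation": date(2014, 9, 4)},
    {"referral": date(2014, 5, 1),  "preferred": date(2014, 5, 6),  "evaluation": date(2014, 5, 8)},
    {"referral": date(2014, 6, 15), "preferred": date(2014, 6, 20), "evaluation": date(2014, 6, 20)},
]

def days_between(start: date, end: date) -> int:
    """Whole days elapsed from start to end."""
    return (end - start).days

for r in records:
    reported = days_between(r["preferred"], r["evaluation"])   # wait time as reported
    overall = days_between(r["referral"], r["evaluation"])     # wait from initial referral
    omitted = days_between(r["referral"], r["preferred"])      # portion the reported measure omits
    print(f"reported: {reported:3d} days   overall: {overall:3d} days   omitted: {omitted:3d} days")

# Averages across the sample, analogous to the 4-day and 26-day figures discussed here.
n = len(records)
avg_reported = sum(days_between(r["preferred"], r["evaluation"]) for r in records) / n
avg_omitted = sum(days_between(r["referral"], r["preferred"]) for r in records) / n
print(f"average reported wait: {avg_reported:.1f} days; "
      f"average referral-to-preferred gap: {avg_omitted:.1f} days")
```

Run on records like these, the reported measure can stay small even when the gap between the initial referral and the preferred date is large, which is the pattern described above.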
On average, our review of 100 new veteran medical records found that a veteran’s preferred date was 26 days after his or her initial request or referral for mental health care, though this varied by VAMC. (See fig. 1.) In commenting on a draft of our report, VHA confirmed that it measures wait times from the preferred date to when the appointment occurs. However, VHA disagreed with our calculations of the overall wait time for veterans to receive full mental health evaluations, noting that these calculations do not capture situations outside of its control, such as when a veteran wants to delay treatment. Our calculations illustrate that the use of the preferred date does not always reflect how long veterans are waiting for care or the variation that exists not only between, but within, VAMCs. During the period of time prior to establishing the preferred date, we found instances of veterans’ requests or referrals for care being mismanaged or lost in the system, leading to delays in veterans’ access to mental health care. Our current and previous work, along with the work of VA OIG, highlights the limitations of VHA’s current scheduling practices, including wait-time calculations. In December 2012, we recommended that VHA take actions to improve the reliability of wait-time measures by clarifying the scheduling policy or identifying clearer wait-time measures that are not subject to interpretation or prone to scheduler error. VHA has not yet implemented this recommendation, and we continue to believe that implementation of this recommendation would improve the reliability of wait-time measures. VHA monitors access to mental health care, but the lack of clear policies may contribute to unreliable wait-time data and preclude effective oversight. Among other reasons contributing to the potential unreliability of VHA wait-time data, we found VHA’s wait-time data may not be comparable over time or between VAMCs. Data may not be comparable over time. VHA has changed the definitions used to calculate various mental health wait-time measures, and a number of VHA officials we interviewed, including VAMC and VISN officials, told us they were not sure which definitions for new mental health patients were in effect for calculating wait-time measures or gave conflicting answers about which definitions were being used. VHA has not clearly communicated the definitions used in its wait-time calculations or the changes made to those definitions, which is contrary to federal internal control standards that call for management to communicate reliable and relevant information in a timely manner. This limits the reliability and usefulness of these data in determining progress in meeting stated objectives for veterans’ timely access to mental health care. As a result, we recommended that VHA issue guidance about the definitions used to calculate wait times, such as how a new patient is defined, and communicate any changes in wait-time data definitions within and outside VHA. VHA concurred with our recommendation and stated that it plans to publicly provide an updated data definition document in October 2015 and will issue an information letter in November 2015 that contains sources where both internal and external stakeholders can locate the definitions used to calculate wait times, including how a new patient is defined. Data may not be comparable between VAMCs. When VAMCs use open-access appointments, data may not be comparable across VAMCs. 
Open-access appointments are typically blocks of time for veterans to see providers without a scheduled appointment. In these cases, because appointments are not scheduled until veterans come to the medical center, the preferred and appointment dates are the same and wait times are calculated as 0 days, regardless of when veterans initially requested or were referred for mental health care. We found inconsistencies in the implementation of these appointments, including one VAMC that was referring veterans to these open-access appointments after an initial evaluation by phone rather than giving them specific appointments. Those veterans who were referred to the open-access appointments were tracked using a manually maintained list outside of VHA’s scheduling system. We found that follow-up with these veterans was inconsistent, and nearly half never showed up to the open-access appointments. VHA does not have guidance that clarifies how to manage and track open-access appointments, which is inconsistent with federal internal control standards that call for management to clearly document policies for significant activities to help ensure management’s directives are carried out properly. As a result, officials at the VAMCs that used open-access appointments said they were unclear about how these appointments could be used, how they should be entered into VHA’s scheduling system, and whether local tracking mechanisms were compliant with VHA scheduling policies. Without guidance on how appointment scheduling for open-access clinics is to be managed, VAMCs can continue to implement these appointments inconsistently and place veterans on lists outside of VHA’s scheduling system, potentially posing serious risks to veterans needing mental health care. As a result, we recommended VHA issue clarifying guidance on how open-access appointments are to be managed. VHA concurred with our recommendation, stating that it conducted training during the summer of 2015 for schedulers based on existing VHA policy that included instructions on how to schedule same-day appointments, which VHA considers to include open-access appointments. VHA further stated that it plans to aggressively monitor appointment management and identify areas of local inconsistency in scheduling procedures. However, VHA’s description of same-day appointments does not capture the circumstances we observed during our review, in which veterans who would normally be given an appointment were instead referred to an open-access clinic. We reviewed the training that VHA said was provided to schedulers, but it did not address the circumstances we described. Given differences between types of same-day appointments (e.g., walk-in clinics where no prior evaluation may be required and open-access clinics that include an evaluation prior to referral), issuing specific guidance for open-access appointments would help to ensure veterans’ needs are met and to improve data comparability across VAMCs. VHA hired about 5,300 new clinical and non-clinical mental health staff between June 2012 and December 2013 for both its inpatient and outpatient programs, meeting the goals of its hiring initiative. Officials at the five VAMCs we visited reported local improvements in access to mental health care due to the additional hiring. For example, officials at one VAMC reported being able to offer more evidence-based therapies. 
Officials at this VAMC, as well as officials from another VAMC and two CBOCs, cited the ability to provide mental health care at new locations where they were previously unable to do so. Although VHA considered its hiring initiative a success because it met its goals, the five VAMCs we visited still had mental health staff vacancy rates ranging from 9 to 28 percent, and 4 of the 5 VAMCs were unable to meet overall demand for mental health services. Officials at the five VAMCs reported a number of challenges in hiring and placing mental health providers, including pay disparity with the private sector; competition among VAMCs filling positions at the same time; the lengthy VHA hiring process; a lack of space for newly hired mental health staff; a lack of support staff to assist providers; and a nationwide shortage of mental health professionals. Despite VHA’s hiring initiative, additional staff likely will be needed to meet VHA’s growing demand for mental health care. In an April 2015 report, VHA projected that a roughly 12 percent increase in mental health staff would be needed to maintain the current veteran staffing ratios for fiscal years 2014-2017. To address some of the mental health hiring challenges, VAMCs reported using various recruitment and retention tools, including hiring and retention bonuses, student debt repayment, and the use of internships and academic affiliations to find potential recruits. In November 2014, VHA raised the annual salary ranges for all physicians system-wide, including psychiatrists, to enhance the agency’s recruiting, development, and retention abilities. Officials at the five VAMCs we visited also described strategies they used to manage demand for mental health care in light of staffing challenges, including (1) increasing the use of telehealth and group therapy (rather than individual therapy); (2) addressing space and staffing constraints by sharing offices or altering provider schedules; and (3) referring veterans to other VA locations when a preferred CBOC was not available. In 2013, 10 VAMCs across VHA participated in a pilot that established partnerships with 23 community mental health clinics (CMHCs), as required by an August 2012 Executive Order, in an effort to help VHA meet veterans’ mental health needs; these CMHCs provided mental health care to a limited number of veterans. Veterans received approximately 2,400 mental health appointments through the CMHCs, which accounted for approximately 2 percent of the total mental health care provided across the 10 participating VAMCs. Nearly half of the care provided through the pilot program was through partnerships with the Atlanta VAMC. The most common service veterans received was individual therapy or counseling, but other commonly provided services included group therapy, medication management, and treatment for substance abuse. According to VHA’s survey of veterans who received care through the CMHCs during the pilot, veterans were generally satisfied with the care they received. VHA and CMHC officials in our review described a number of successes and challenges related to the pilot program. Successes included improved capacity and communication. For example, officials at one VAMC said they would not have been able to maintain mental health care access at current levels without the capacity provided by the pilot sites. Additionally, officials at three VAMCs said their partnerships allowed them to expand access by providing additional and more convenient care to veterans living in rural areas. 
VAMC and CMHC officials also said that having a VAMC liaison on site or a dedicated point of contact improved communication, which helped facilitate veterans’ access to care. Challenges with the community provider pilot included a number of administrative issues, such as the timely receipt of medical documentation and payment for services, as well as technical challenges, particularly related to the transfer of medical files and the use of telemental health technology. Other challenges included confusion among some VAMC officials about the different non-VA programs available to veterans and concerns about the appropriateness of care, including whether there were a sufficient number of community providers with the necessary training and experience to provide culturally competent and high-quality care to veterans. Chairman Isakson, Ranking Member Blumenthal, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. If you or your staff members have any questions concerning this testimony, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Lori Achman, Assistant Director; Jennie F. Apter; Jacquelyn Hamilton; Eagan Kemp; Vikki L. Porter; and Malissa G. Winograd. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony summarizes the information contained in GAO's October 2015 report, entitled VA Mental Health: Clearer Guidance on Access Policies and Wait-Time Data Needed (GAO-16-24). The way in which the Department of Veterans Affairs’ (VA) Veterans Health Administration (VHA) calculates veteran mental health wait times may not always reflect the overall amount of time a veteran waits for care. VHA uses a veteran’s preferred date (determined when an appointment is scheduled) to calculate the wait time for that patient’s full mental health evaluation, the primary entry point for mental health care. Of the 100 veterans whose records GAO reviewed, 86 received full mental health evaluations within 30 days of their preferred dates. On average, this was within 4 days. However, GAO also found veterans’ preferred dates were, on average, 26 days after their initial requests or referrals for mental health care, and ranged from 0 to 279 days. Further, GAO found the average time in which veterans received their first treatment across the five VA medical centers (VAMC) in its review ranged from 1 to 57 days from the full mental health evaluation. In addition, conflicting access policies for allowable wait times for a full mental health evaluation—14 days (according to VHA’s mental health handbook) versus 30 days (set in response to recent legislation) from the veteran’s preferred date—created confusion among VAMC officials about which policy they are expected to follow. These conflicting policies are inconsistent with federal internal control standards and can hinder officials’ ability to ensure veterans are receiving timely access to mental health care. 
VHA monitors access to mental health care, but the lack of clear policies on wait-time data precludes effective oversight. GAO found VHA’s wait-time data may not be comparable over time and between VAMCs. Specifically, data may not be comparable over time because VHA has not clearly communicated the definitions used, such as how a new patient is identified, or changes made to these definitions. This limits the reliability and usefulness of the data in determining progress in meeting stated objectives for veterans’ timely access to mental health care. Data also may not be comparable between VAMCs, for example, when open-access appointments are used. Open-access appointments are typically blocks of time for veterans to see providers without a scheduled appointment. GAO found inconsistencies in the implementation of these appointments, including one VAMC that manually maintained a list of veterans seeking mental health care outside of VHA’s scheduling system. Without guidance stating how to manage and track open-access appointments, data comparisons between VAMCs may be misleading. Moreover, VAMCs may lose track of patients referred for mental health care, placing veterans at risk for negative outcomes. |
This section discusses EPA’s process for assessing sites under the Superfund program and the approaches identified by EPA for conducting long-term cleanups at sites eligible for the NPL under the Superfund program and under other available approaches. Under the Superfund program, EPA assesses hazardous waste sites for long-term cleanups through a specific process. At some point after a potential hazardous waste site is reported to the Superfund program and entered into CERCLIS, EPA regional officials, their contractors, or states acting under cooperative agreements with EPA evaluate the relative potential for a site to pose a threat to human health and the environment. EPA’s 10 regional offices each are responsible for implementing Superfund within several states and, in some cases, territories. Under CERCLA, EPA may only pay for a remedial action at a site if the relevant state agrees, among other things, to pay a portion of the cleanup expenses, as well as all operations and maintenance costs. In addition, under a cooperative agreement with EPA, a state may assume the lead oversight role at a site in the Superfund program. Figure 1 shows the states included in each of the 10 EPA regions. During the initial phases of the long-term cleanup process—known as preliminary assessment and site inspection—EPA regional officials or their counterparts evaluate the potential need for additional investigation or action in connection with a release of hazardous substances from a site. Specifically, the preliminary assessment phase involves an evaluation of readily available information about a site and its surrounding area to determine if the release or potential release poses enough of a threat to human health and the environment that further investigation is needed. If further investigation is needed, a site inspection is performed. During this phase, investigators typically collect environmental and waste samples to determine what hazardous substances are present. Information collected during the preliminary assessment and site inspection is used to calculate and document a site’s preliminary Hazard Ranking System score, which indicates a site’s relative threat to human health and the environment based on potential pathways of contamination. Sites with a Hazard Ranking System score of 28.50 or greater are eligible for listing on the NPL. Information collected from the initial assessment phases to develop Hazard Ranking System scores is not intended to be sufficient to determine either the extent of contamination or how to clean up a particular site. After a site is determined to be eligible for the NPL, EPA chooses which long-term cleanup approach is best suited to the site. In some cases, EPA may conduct a short-term cleanup known as a removal action or otherwise delay selection of a long-term cleanup approach. EPA may choose among several approaches to address sites with a relative threat to human health and the environment that is sufficiently severe to make them eligible for listing on the NPL. For long-term cleanups, EPA can retain oversight of sites under the Superfund program or defer the oversight of sites to other approaches, as shown in figure 2. Under its Superfund program, EPA conducts long-term cleanups using three approaches. The first and most common approach under the Superfund program involves listing a site on the NPL. To do so, EPA first proposes the site for listing on the NPL in the Federal Register. 
EPA then accepts public comments on the proposal and responds to the comments in a second and final Federal Register listing announcement of the site; then the agency may place on the NPL those sites that continue to meet the requirements for listing. The second approach that EPA may use under the Superfund program is the SA approach, which began informally in the 1990s, when some EPA regions negotiated site cleanup agreements with PRPs for sites that PRPs, states, or local government officials and communities did not want to have listed on the NPL. To promote consistency across regions, EPA issued guidance in 2002 formalizing the SA approach, which it subsequently updated in 2004 and 2012. According to EPA’s guidance, to qualify for the SA approach, (1) a site’s contamination must make it eligible for listing on the NPL; (2) EPA must anticipate a long-term cleanup at the site; and (3) there must be a willing, capable PRP who will negotiate and sign an agreement with EPA to perform the investigation or cleanup. The third approach EPA can use for long-term cleanup of sites is to address sites under the Superfund program but not list them on the NPL or address them through the SA approach. These “Other” sites under the Superfund program can vary widely and include, among others, some sites with cleanup agreements that preceded the SA approach. Under these older agreements, for which there was no guidance at the time they were negotiated, EPA agreed not to list the site on the NPL, and the PRPs agreed to conduct the cleanup, according to EPA officials. Irrespective of the approach chosen, all sites under the Superfund program approaches follow the same general phases for long-term cleanup, as shown in figure 3, and EPA officials oversee the cleanup at all of these Superfund sites. After the initial phases of the long-term cleanup process, EPA or a PRP conducts a two-part study of the site: (1) a remedial investigation to further characterize site conditions and assess the risks to human health and the environment, among other actions, and (2) a feasibility study to evaluate various cleanup options to address the problems identified in the remedial investigation. At the conclusion of these studies, EPA selects a remedy for addressing the site’s contamination and develops a cost estimate for implementing the remedy; both of these are included in a record of decision. According to EPA officials, the level of cleanup depends on site-specific conditions, not the particular approach selected. EPA or a PRP then develops the method of implementation for the selected remedy during the remedial design phase and implements it during the remedial action phase, when actual cleanup of the site occurs. Multiple cleanup activities can occur within a given phase at the same or different operable units at one site. For example, one remedial action at an operable unit may address soil contamination, while another remedial action at the same operable unit may address groundwater contamination. When EPA or a PRP finishes the cleanup remedy at a site, and all immediate threats have been addressed and all long-term threats are under control, EPA generally considers the site to be “construction complete.” For sites listed on the NPL, when EPA, in consultation with the state, determines that no further site response is appropriate, the agency may delete the site from the NPL. 
EPA reports achievements at NPL sites, including completion of some phases of the cleanup process, as part of the agency’s implementation of provisions under the Government Performance and Results Act of 1993 (GPRA). The act requires federal agencies to develop strategic plans with outcome-oriented agency goals and objectives, performance measures to track the progress made toward achieving goals, annual goals linked to achieving the long-term goals, and annual reports on the results achieved. EPA does not report publicly the same achievements for sites that are not on the NPL. Even under the formal state deferral approach, the EPA region still negotiates the level of oversight appropriate for the particular site. EPA may also defer oversight of the long-term cleanup of a site eligible for the NPL through the Other Cleanup Activity (OCA) approach. OCA deferrals go to one of four types of entities (described below): states, federal agencies, tribes, or private parties. OCA deferral to a state places a site under that particular state’s environmental regulations, as opposed to CERCLA authorities. In contrast to formal state deferrals, the OCA deferral to a state involves no formal EPA oversight other than periodic discussions between EPA regional officials and state officials. Since 2012, EPA guidance has indicated regions should have these discussions. OCA deferral to federal agencies places a site under that particular federal agency’s oversight and authorities, according to EPA. Certain federal agencies, such as the Department of Defense, have responsibility and authority for some or all cleanups at their facilities. EPA assigns a status of “Other Cleanup Activity: Federal Facility Lead” to federal facilities that EPA tracks in its CERCLIS database and that are being cleaned up outside of the NPL approach (these sites are eligible for listing but are not listed on the NPL). EPA periodically checks in with other federal agencies on the status of cleanup work at these sites. OCA deferral to a tribe places the site under that tribe’s environmental regulations. EPA periodically checks in with tribal regulators on the status of cleanup work at these sites. OCA deferral to private parties applies to certain sites where the cleanup is conducted by a private party. EPA most commonly addresses the cleanup of sites eligible for the NPL by “deferring” oversight to approaches outside of the Superfund program. EPA regions select the cleanup approach and defer oversight of more than half the sites eligible for the NPL to approaches outside of the Superfund program, primarily through OCA deferrals. Though OCA deferrals include the majority of NPL-eligible sites, EPA’s guidance on this approach is less detailed than guidance on other approaches. EPA provides regions with discretion in selecting the cleanup approach for a given hazardous waste site. 
According to the Superfund Program Implementation Manual—which lists EPA’s Superfund program management priorities, procedures, and practices—each region is to select an appropriate cleanup approach after determining a site is eligible for the NPL. Officials in all 10 regions said that when they select cleanup approaches they attempt to use the most appropriate cleanup approach for a given site. For example, complex sites, such as contaminated waterways, may be more suited to the NPL approach than to deferral to a state cleanup program because EPA typically has more resources to oversee and manage such complex cleanups. Officials in four regions noted that states will sometimes request the NPL approach for large or complex sites. EPA regions can establish their own processes for selecting a cleanup approach for a given site. Three of the 10 regions have some type of regional guidance related to their decision-making process. For example, Region 7 has guidance for its regional decision team that outlines the stakeholders within the region who will participate, when the team will meet, and how decisions are to be made at the meeting. Region 10 has guidance that focuses on how the region will prioritize sites that are eligible for the NPL. All of the regions may consult with relevant stakeholders across EPA programs about a given site, whether the regions have written guidance or not. These stakeholders might include staff from the office of regional counsel or the removal program. Five of the 10 regions use regional decision teams to evaluate sites that have been found eligible for the NPL and select which approach should be used to clean up the site. The other 5 regions do not use regional decision teams, opting instead for more informal decision-making processes or meetings on an as-needed basis. For example, in Region 5 there is a practice of coordinating between the region’s long-term cleanup and removal programs on sites that may be of interest to both programs. EPA officials said the regions consider many potentially relevant factors to select the appropriate approach for each site. The major factors influencing regional officials’ choice of cleanup approach at a site include the preferences of the state regarding how the site will be addressed and the existence of a PRP that is willing to and capable of addressing the site. Specifically, officials in all 10 regions highlighted state preference as a factor they consider. State preference can be particularly important because EPA has a policy of obtaining state concurrence before listing a site on the NPL. According to Region 5 officials, if a state opposes an NPL listing, they will typically give preference to other approaches, such as an OCA deferral to the state or the SA approach. In addition to state preference, officials in 9 of 10 regions said that the existence of a willing and capable PRP can be a factor in determining the cleanup approach. For example, a willing and capable PRP is necessary for the SA approach, which requires the PRP to conduct the cleanup under an agreement with EPA. EPA officials in one region said that the existence of a PRP can also be important for cleanups under state cleanup programs because states can have very limited funding to conduct cleanups on their own. State environmental officials from four states we contacted confirmed that they had limited or, in some cases, no state funding to conduct their own long-term cleanups. 
Regional officials also identified other factors that can sometimes influence what cleanup approach the region will select. For example, officials in Region 5 noted that if the contamination presents an immediate threat to health and safety, they may use the Superfund removal program, which is more suited to a quick response than long-term cleanup approaches. Depending on the circumstances at a site, the removal program may be sufficient to deal with all of the contamination, or the site may need to be referred to a long-term cleanup approach for further work. Regional officials can also consider other relevant legal authorities that could apply to a site, such as RCRA. When a site is eligible for cleanup under both RCRA and Superfund, EPA policy provides that the agency generally will defer the site to the RCRA program for cleanup. Among the 3,402 sites reported to the Superfund program in CERCLIS that EPA has identified as having contamination making them eligible for the NPL, EPA deferred 1,984 sites to cleanup approaches outside of the Superfund program (see fig. 4). Sites under the Superfund program make up the 1,418 sites that remain, with the vast majority of those sites being addressed through the NPL. EPA addresses more sites eligible for the NPL through the OCA deferral approach than any other cleanup approach: 1,766 of the 3,402 sites (52 percent). Moreover, because EPA deferred most of these 1,766 sites to states, OCA deferrals to states account for about 47 percent of all identified eligible sites. EPA regions’ use of OCA deferrals to states ranges widely, from 7 sites in each of three regions (6, 7, and 8) to 470 sites in Region 1 (see app. II for a breakdown of cleanup approaches by region). According to officials in Region 1, states in the region have mature environmental programs willing and capable of overseeing many sites, which makes the OCA deferral to states well suited to that region. In contrast, officials we spoke to in some regions noted that they needed to consider states’ capacity to oversee a site before using the OCA deferral to states. Nine states have no OCA deferrals, and other states oversee hundreds of these sites, with the most in Massachusetts (247 sites), New Jersey (221), and California (180). Environmental officials in several states we contacted confirmed that states’ use of and experience with OCA deferrals can differ substantially. One state official noted that these differences are likely related to how industrialized a state may be and the extent of cleanup programs in a given state. OCA deferrals to federal agencies, private parties, and American Indian tribes account for an additional 181 sites. OCA deferrals to federal agencies primarily involve military sites: 76 percent of these deferrals were to the Army, Navy, or Air Force. In addition, a majority of OCA deferrals to private parties come from Florida where, on the basis of a state law, PRPs can conduct cleanup without any formal agreement or order from the state, according to Region 4 officials. PRPs conducting such cleanups must submit regular reports to the state on their progress, and the state reserves the right to take the PRP to court, if necessary. EPA currently addresses 1,418 sites (42 percent of those identified as eligible for listing on the NPL) through approaches under the Superfund program—most commonly, through listing the sites on the NPL. 
Specifically, sites listed on the NPL account for 1,313 sites, over 90 percent of sites under the Superfund program. According to officials in one region, EPA has access to more resources than states and typically addresses sites that require greater or more specialized resources through the NPL approach. For example, regional officials noted, states face different limitations that can prevent them from pursuing cleanup under their programs, including technical capacity, legal resources, and financial resources. In addition, EPA officials in four regions noted examples where a state environmental program requested that the Superfund program pursue NPL listing because the state was having trouble getting a PRP to cooperate or the PRP went bankrupt. In addition to listing sites on the NPL, EPA also oversees the long-term cleanup of sites through two other approaches under the Superfund program. First, the Superfund program currently oversees 67 sites under the SA approach. Second, EPA oversees at least 38 other sites with long-term cleanups under the Superfund program for which EPA has no documented definition and no consistently applied method of counting. EPA officials provided different estimates of the number of such sites. One EPA official provided a method to identify these sites based on a code in CERCLIS, which resulted in the 38 sites listed above. However, another EPA official provided us a list of 35 such sites that had reached the remedial action phase. Of these 35 sites, 12 matched the 38 sites identified by the code in CERCLIS. In addition, 16 of the sites on the list of 35 had a code of “status undetermined” or had no code at all. EPA regional officials also identified other specific sites under the Superfund program, but some of those sites could not be identified by the code in CERCLIS, were not on the list of 35, or had no code at all. As of December 2012, 270 sites had the “status undetermined” code, and 101 had no such code in CERCLIS, making it impossible to determine the exact number of sites that EPA oversees under the Superfund program that are not being addressed under either the NPL or SA approaches. Tracking of these sites is discussed later in this report. EPA addressed the remaining sites eligible for the NPL through different deferral approaches, primarily through deferrals to the RCRA program. Specifically, deferrals to the RCRA program account for 193 of these remaining sites (89 percent). In addition to deferrals to the RCRA program and a few deferrals to NRC, EPA deferred 21 sites to state programs using the formal state deferral approach in 4 of its 10 regions. EPA officials said that the OCA deferral to states approach has largely replaced the formal state deferral approach, and EPA does not anticipate using the formal deferral approach much in the future. As discussed above, EPA addresses more sites through OCA deferrals than any other approach but has less guidance to define this approach or how deferral decisions should be documented than for its other deferral approaches. Unlike for OCA deferrals, EPA has guidance or other documents outlining the process for deferrals to the RCRA program, deferrals to the NRC, and formal state deferrals. These documents clearly define or provide mechanisms to define the roles of the Superfund program and the entity that will conduct oversight at the site. 
For example, guidance for the formal state deferral approach specifies that the EPA region and the state should enter into a memorandum of agreement in which they clarify mutual expectations for their interaction and each party’s responsibilities at deferred sites. After the deferral, the region continues to review the state’s progress and conduct any other activities required by its individual agreements with the state in each case. In contrast, EPA has not issued guidance focused on OCA deferrals that clearly defines the different types of OCA deferrals or what detail would be sufficient or appropriate to support its decisions at these sites. Instead, EPA describes OCA deferrals in the Superfund Program Implementation Manual (which is updated annually). EPA recently added to the manual’s instructions regarding sites with OCA deferrals. Specifically, in its 2012 version of the manual, EPA added more language explaining that there is to be no continuous and substantive involvement on EPA’s part while cleanup work is ongoing at OCA deferral sites. In addition, in this version of the manual, EPA added an instruction for regions to check on the status of OCA sites periodically. Officials in EPA regions noted that they use different approaches for tracking OCA sites; for example, for an OCA deferral to states, EPA regions’ tracking activities range from checking state websites to meeting with states to receive status updates every 3 months. Officials in some regions noted that they will need to modify their processes to meet this new instruction. Even with EPA’s additions to the manual, the available instruction does not clearly define each type of OCA deferral, particularly OCA deferrals to private parties, which has resulted in inconsistent identification of those OCA deferrals by different regions. While the manual defines OCA deferrals generally, it does not define each type of OCA deferral. When asked to define OCA deferrals to private parties, Superfund program officials in headquarters referred us to EPA regional officials for more information, and officials in 6 of 10 EPA regions were unsure about how to define OCA deferrals to private parties or how they should be used. Moreover, officials in another 6 regions confirmed that some sites identified as OCA deferrals to private parties in CERCLIS should have been identified as OCA deferrals to states. Without clearer guidance defining the different OCA deferrals, EPA cannot be reasonably assured that it is consistently tracking its OCA deferral sites in CERCLIS, which can make it difficult to identify what entity is responsible for conducting oversight at the site. The documentation that regions collect to support OCA deferrals covers a broad range, including no written documentation, an e-mail from a state official, letters from state officials attesting to the cleanup, or a copy of the legal order or agreement between the state and PRP. Similarly, regions relied on different forms of documentation, including various e-mails, letters, or reports from state officials, to document the completion of cleanup at OCA deferral sites. (The completion of cleanup date at an OCA deferral site is the date of the determination that cleanup was successfully completed, that cleanup was not necessary, or that the other entity will not complete cleanup and the site will be referred back to the Superfund program.) Officials in three regions reported that there was no consistent standard for documentation within their region. 
Moreover, Region 9 officials noted that the region had not tracked the completion of cleanups at OCA deferrals in CERCLIS in the past and may have no documentation for some of its older OCA deferral sites. Without guidance that details the documentation needed to support regions’ OCA deferral decisions, EPA cannot be reasonably assured that its regions’ documentation will be appropriate or sufficient to verify that these sites have been deferred or have completed cleanup. EPA officials noted they were working on additional guidance for OCA deferrals. However, these officials said that development of the guidance was in the planning stage; therefore, a draft of the guidance, detailed information on what it will include, and a planned issuance date were not yet available. EPA provides the least detailed guidance for the small number of sites that are undergoing long-term cleanup under the Superfund program outside of the SA and NPL approaches. Such sites have no specific guidance at the program or regional level, nor is there a section in the Superfund Program Implementation Manual describing how they should be defined or tracked. In contrast, EPA has developed instructions in the manual for how to track sites cleaned up under the SA approach. EPA also has guidance for the NPL approach, such as how the agency should propose, list, and delete sites from the NPL. EPA officials noted that sites that are cleaned up under the other Superfund program approach often involve unique situations, making it difficult to establish any guidance that would cover all possible situations. For example, one of these sites is using a hybrid approach under both RCRA and CERCLA authorities, according to an EPA official. However, in 10 cases, regional officials described these sites as standard cleanups under CERCLA authority that used standard procedures. While there are unique and standard cases among sites being cleaned up under the other Superfund program approach (i.e., outside of the SA and NPL approaches), EPA officials could not provide a reliable estimate of these other sites because the agency has no consistently applied method for counting them. Without a method to identify and track such sites, EPA headquarters has no way to determine the extent to which regions use this approach or evaluate regions’ use of this approach. As a result, it will be difficult for EPA headquarters to hold regions accountable for using the approach. The processes for implementing the SA and NPL approaches have similarities, but also several differences, some of which EPA has accounted for through specific provisions in its agreements with PRPs at SA agreement sites. However, some sites may not benefit from EPA’s efforts to account for these differences. Furthermore, the agency’s tracking and reporting of SA agreement sites differs significantly from its tracking and reporting of NPL sites. Using the SA approach at sites has certain potential advantages for EPA and some PRPs and states, but communities’ views on this approach are mixed. The processes for implementing the SA and NPL approaches have many similarities. According to the agency’s SA guidance, at its SA agreement sites, EPA is to generally act in accordance with the practices normally followed at sites listed on the NPL. For example, according to EPA guidance, SA agreement and NPL sites should follow the same investigation and cleanup processes, including the phases and milestones of long-term cleanups shown earlier in figure 3. 
EPA regions should also use the same response techniques, standards, and guidance for SA agreement sites as they do for NPL sites. According to EPA’s guidance, SA agreements should eventually achieve cleanup levels that are comparable to those required at NPL sites. EPA regions should also take steps to ensure equivalency between the SA and NPL approaches in the absence of NPL listing. Despite these similarities, there are certain differences in the overall processes and EPA’s authority under the NPL and SA approaches. Through specific provisions in its SA agreements with PRPs, EPA has sought to make the two approaches comparable by accounting for the following four key differences: First, EPA has the authority to pay for remedial actions only at sites listed on the NPL. To account for this difference, SA agreements include a provision to help ensure cleanups are not delayed by a loss of funding if the PRP ceases work during the remedial action phase of cleanup. Specifically, this provision requires the PRP to obtain a readily available source of funds that the agency can use if it needs to take over the cleanup work. EPA can use those funds to continue the work while the agency lists the site on the NPL, if necessary. Second, EPA is authorized to provide technical assistance grants that help communities participate in decision making only at sites that are listed or proposed for listing on the NPL. An initial grant of up to $50,000 is available to qualified community groups so they can contract with independent advisors to help the community understand technical information about the site. EPA includes a provision in SA agreements to help ensure that a community’s opportunity to receive technical assistance at an SA agreement site is comparable to that at an NPL site. This provision requires the PRPs, with EPA oversight, to administer and fund a technical assistance plan, under which a qualified community group can receive up to $50,000 for the same purposes as a technical assistance grant from EPA. Third, if a PRP were to clean up an SA agreement site to the extent that it no longer scored at least 28.50 on the Hazard Ranking System, according to EPA, the agency might lose the option of listing the site on the NPL, a concern that is not present when a site is listed on the NPL. To prevent this, SA agreements state that the PRP will not challenge listing the site on the NPL if a partial cleanup of the site results in changed site conditions. EPA officials noted that this provision gives the agency assurance that it can step in and clean up the site under the NPL approach if the PRP were to default on the SA agreement. Fourth, CERCLA states that an action for natural resource damages (NRD) at NPL sites must start within 3 years after completion of the remedial action. This period is longer than the general statute of limitations for NRD claims, which states that an action must start within 3 years after the discovery of the loss and its connection with the contamination. SA agreements contain a provision that clarifies that the longer statute of limitations for NPL sites also applies to SA agreement sites. Even with EPA’s efforts to achieve equivalence of SA agreement and NPL sites through these provisions, some sites may not benefit from these efforts because EPA regions have entered into agreements with PRPs at sites that they said were likely eligible for the SA approach without following the SA guidance. 
Agreements at such sites may not, for example, ensure that a community has access to funds to pay for technical assistance or that remedial action can continue if a PRP stops cooperating. Officials from some EPA regions told us they have continued to enter into agreements with PRPs since 2002 without following the SA guidance. We identified six sites where this has occurred as follows: In Region 7, officials entered into an agreement with a PRP to conduct remedial design and remedial action at a site. Regional officials stated that the SA approach, which can be suggested for a site by the PRP or the region, never came up during their discussions with the PRP. In Region 10, officials stated that the agreements they had entered into with PRPs at five sites might qualify for the SA approach but that, at the time they entered into the agreements, the officials had not focused on whether the agreements met the SA criteria; rather, they were focused on obtaining enforceable agreements. According to EPA headquarters officials, if regions are going to conduct a long-term cleanup under the Superfund program at a site, but not list it on the NPL, the agency prefers regions to use the SA approach. EPA headquarters officials said that they believed this preference was implicit in the agency’s SA guidance and stated they discussed this preference with regional officials at periodic meetings; however, they also acknowledged that this preference is not stated explicitly anywhere in guidance for the regions. If regions continue to enter into agreements for some sites without following the SA guidance, these sites may be denied some of the advantages built into the SA agreements to ensure that the cleanups will be comparable to those under the NPL approach. Some differences remain between the way EPA tracks sites under the SA and NPL approaches. In CERCLIS, EPA tracks sites’ status in relation to the NPL regardless of any changes in cleanup approach. Specifically, sites that have been proposed for listing on the NPL, are currently on the NPL, have been deleted from the NPL, or have been removed from proposal can always be identified as such in CERCLIS, which allows EPA to accurately identify sites that are or have been on the NPL. In contrast, EPA cannot similarly track an SA agreement site as such if it is subsequently listed on the NPL. Specifically, EPA currently tracks SA agreement sites through a single database code identifying only that a site has an SA agreement, and the identifying code is not maintained in the database if the site is later added to the NPL. The agency has not clarified in its guidance when to leave this SA identifying code in place, and when to remove it, even though the EPA IG recommended in a 2007 report that EPA develop specific instructions on when to use the SA designation and update the Superfund Program Implementation Manual (which is updated annually) to incorporate these instructions. According to the IG report, these instructions should specify that the SA code should not be removed even if the site is cleaned up or proposed for the NPL, so that controls over documentation of sites with SA agreements can be maintained. As the EPA IG pointed out, absence of guidance can result in poor quality data on the SA universe. While EPA indicated in 2010 that it would implement this recommendation, the 2012 manual does not include any instructions about maintaining the SA code. 
Because EPA has not implemented the IG’s recommendation, the agency’s tracking of the identity of SA agreement sites in CERCLIS is incomplete. For example, while an EPA website identifies all sites that have or have had SA agreements, three sites that had SA agreements and were later added to the NPL cannot be identified in CERCLIS as having had SA agreements. As a result, not all sites that have had SA agreements are identifiable in CERCLIS, which may hamper EPA’s ability to effectively manage long-term cleanups and track outcomes at SA agreement sites. Furthermore, the standards for specifying what documentation is sufficient to support the Hazard Ranking System score of SA agreement sites are less clear than those for NPL sites. When sites are proposed for listing on the NPL, EPA procedure requires they have a Hazard Ranking System documentation record—a specific document that includes detailed justification for the Hazard Ranking System score. In contrast, both the 2004 and 2012 SA guidance state that EPA should have “adequate documentation” supporting a Hazard Ranking System score of 28.50 or higher but do not define what is meant by “adequate” documentation or provide criteria for assessing adequacy. The guidance documents specify that regions may rely on a draft Hazard Ranking System documentation record or “other adequate documentation,” but do not provide an explanation of what other documentation might be adequate. EPA headquarters officials told us that documentation of a preliminary calculation of the Hazard Ranking System score during the initial assessment phases would qualify as adequate, and said that this has been discussed with regional officials during periodic meetings. EPA officials acknowledged, however, that this interpretation of the guidance has not been included in any written guidance to the regions. As the EPA IG pointed out in its 2007 report, consistent and reliable documentation of Hazard Ranking System scores at SA agreement sites is an internal control to ensure compliance with the SA guidance and approach. Under the federal standards of internal control, agencies are to clearly put in writing (i.e., in management directives, administrative policies, or operating manuals) internal controls, such as this interpretation of the guidance, and have them readily available for examination. Without more specific written guidance, EPA regional officials may not develop adequate documentation of Hazard Ranking System scores at SA agreement sites. In addition to the differences in its tracking, EPA has not reported the agency’s performance on the progress of cleanup at SA agreement sites as it has for NPL sites. EPA reports achievements at NPL sites, including completion of some phases of the cleanup process, as part of the agency’s implementation of provisions under GPRA, which generally aims to hold federal agencies accountable for using resources wisely and achieving program results. Two of the Superfund program’s three GPRA performance measures—sites where human exposure is under control and sites that are ready for their anticipated use—refer only to NPL sites. One additional performance measure tracks the completion of the initial assessment phases, which generally precede EPA’s decision about which cleanup approach to use at a site, including the SA or NPL approach. 
EPA’s Office of Solid Waste and Emergency Response, which manages the Superfund program, reports these performance measures for NPL sites in several annual reports available on EPA’s website. However, EPA does not include in these reports the cleanup milestones reached at SA agreement sites, such as how many SA sites have human exposure under control. The EPA IG recommended in 2007 that EPA track and report the same GPRA performance measures at SA agreement sites as it does at NPL sites. As the IG reported, by measuring and tracking all performance measures at SA agreement sites, EPA could demonstrate the outcomes of the Superfund program’s work and provide an incentive to regions by more thoroughly accounting for their performance. In 2010, EPA indicated that it would implement the IG’s 2007 recommendation to track and report all Superfund GPRA performance measures at SA agreement sites using an annual report. EPA officials noted that the agency has begun tracking Superfund performance measures for SA agreement sites, but they acknowledged that EPA is not reporting these results publicly. Until the agency reports performance information on the progress of cleanup at SA agreement sites as it does for NPL sites, EPA is not providing the public and Congress with a full picture of SA agreement sites. Without such information, Congress lacks complete information on the progress of the Superfund program to inform its legislative actions, including appropriations. Using the SA or the NPL approach can have advantages or disadvantages for the parties involved, including EPA, PRPs, states, and communities. Specifically, using the SA approach generally allows EPA to avoid at least some of the cost and time associated with listing a site on the NPL. For example, NPL listing requires preparation of a Hazard Ranking System documentation record, which is not required for sites with SA agreements. EPA officials estimated each such record costs an average of about $65,000 to prepare. In addition, when EPA decides to propose a site for listing on the NPL, the agency sometimes conducts an expanded site inspection if further information is necessary to document a Hazard Ranking System score. EPA officials estimated this step costs about $92,000 on average. In addition, to list a site on the NPL, EPA has to work through the formal listing process, including issuing notices in the Federal Register with a public comment period. This process takes time to complete, which may affect the progress at the site. In Region 3, EPA officials stated that the volume of comments received on a particular site proposed for the NPL, in addition to the likelihood of litigation from one or more parties if the site were finalized on the NPL, led the region to address the site through the SA approach. Some EPA regions have seen the advantages of using the SA approach more than others. As shown in figure 5, of the 67 SA agreement sites, 57 sites, or 85 percent, are in EPA Regions 4 and 5. Differences in usage of the SA approach among regions relate to a region’s specific circumstances and preferences. According to EPA headquarters officials, Regions 4 and 5 had early experience with SA agreements and may have been more comfortable in starting new ones as a result. These two regions have also listed many sites to the NPL since the SA approach was formalized in 2002. 
Region 4 officials told us that they have found that the SA approach is best suited to sites with one or two PRPs and no questions about the PRPs’ liability or ability to conduct the investigation or cleanup. They said they have also found that it is helpful when a PRP has a financial interest in finishing the cleanup quickly, as in the case of a potential redevelopment project at a site. Other regions have used the SA approach in limited circumstances. For example, officials in Region 9 described one case in which they pursued the SA approach because the state did not want a particular site listed on the NPL. Two regions—Regions 1 and 2—have never used the SA approach. Officials in Region 1 explained that few sites that have willing and capable PRPs and are eligible for the NPL come to the region’s attention because state programs prefer to take on oversight of such sites. Region 2 officials said they did not see a reason to use the SA approach—if a site’s contamination is severe enough, the region will propose the site to the NPL, unless the state is addressing the site. Using the SA approach allows PRPs to avoid the perceived stigma associated with an NPL site, according to EPA officials. Sites with SA agreements have to meet all of the qualifications of NPL sites, and thus may have contamination that is just as severe, but the potential stigma of NPL listing appears to influence PRPs. Officials in 7 of 10 regions mentioned the stigma of an NPL site as a concern for PRPs. Concerns about this stigma may also arise when a company is to be sold and does not want to list an NPL site as part of its liabilities, according to EPA regional officials. Related to this stigma, EPA officials said they believed that avoiding listing on the NPL may help local government officials and PRPs in some cases, such as facilitating a site’s redevelopment or its financing. Previous reports have also pointed to the potential stigma of NPL listing as motivation for pursuing a different cleanup approach. For example, an assessment of the effectiveness of the SA approach in Region 4 (hereafter referred to as the Region 4 study) found that sites using the SA approach may have a higher potential for redevelopment than comparable NPL sites if avoiding this stigma increases PRPs’ financing options and their willingness to redevelop. In addition, some states generally prefer that EPA not list sites on the NPL, according to EPA officials, which makes the SA approach more appealing. According to EPA policy, EPA typically obtains a state’s concurrence before listing a site on the NPL. Officials in all 10 EPA regions mentioned the states’ views as one of the factors they used to determine whether to pursue an NPL listing or other approaches. Moreover, officials in 4 of 10 regions said there were states in their region that were generally reluctant to have EPA list sites on the NPL. For example, Region 9 officials said two of their states generally do not want EPA to list sites on the NPL; specifically, one of these states wanted to avoid the associated stigma of having NPL sites in the state. The SA approach also has advantages and disadvantages for communities. According to an EPA official, it may be easier for communities to obtain technical assistance funds from PRPs at SA agreement sites than to obtain the equivalent funds from EPA at NPL sites. 
This official said obtaining funds from PRPs at SA agreement sites often involved the absence of a “match” requirement as well as fewer paperwork requirements for the communities because the technical assistance plans do not have to follow federal grant requirements. However, under the SA approach, communities have no opportunity for a formal comment process on EPA’s selection of the SA approach itself, as they do under the NPL approach. Specifically, when EPA proposes a site for the NPL in the Federal Register, the public has 60 days to comment on the proposed listing. EPA then responds in writing to significant public comments in conjunction with the final Federal Register listing announcement of the site. No such opportunity exists when EPA decides to enter into an SA agreement at a site, although EPA provides numerous opportunities under the SA approach for communities to comment on the cleanup process. Communities also may have mixed reactions to the SA approach for other reasons as well. According to EPA officials, communities may have concerns about the SA approach and may require outreach from the agency to explain the approach. For example, at one site in Region 5, the region expanded its outreach efforts after some community members protested the use of the SA approach at the site. A regional official explained that some individuals in the community believed the site would not follow the same cleanup process as an NPL site. Some community members may support listing on the NPL over the SA approach to bring increased attention to a site, helping to ensure its cleanup. Other regional officials said other community members may be more open to the SA approach and oppose listing on the NPL for fear of its effect on property values. The Region 4 study confirmed that the SA approach is often considered advantageous by community members and leaders concerned about property values and stigma. However, this report also found that other community members require confirmation that the process will not result in more limited resources or reduced remediation compared to listing on the NPL. For sites with agreements from June 2002 through December 2012, SA agreement sites and similar NPL sites we selected showed mixed results in the time needed to complete negotiations for agreements, specific cleanup activities, and achieving the construction completion milestone (see app. I for more details on our objectives, scope, and methodology and app. III for more information on our results). Specifically, SA agreement and NPL sites in our analysis showed mixed results in the average time to complete negotiations with PRPs and for specific cleanup activities, such as remedial investigation and feasibility studies, remedial designs, and remedial actions. In addition, a lower proportion of SA agreement sites have reached construction completion compared with similar NPL sites. SA agreement sites tend to be in earlier phases of the cleanup process because the SA approach began more recently than the NPL approach. For agreements finalized from June 2002 through December 2012 at sites in our analysis, SA agreement and similar NPL sites showed mixed results in the length of time to complete negotiations, with SA agreement sites taking about as long as similar NPL sites for remedial investigation and feasibility study negotiations and less time for remedial design and remedial action negotiations. 
EPA regional officials confirmed that negotiations can be faster at SA agreement sites because the PRPs are more cooperative. For example, Region 4 officials highlighted one SA agreement site where the PRP pushed for a quicker negotiation process by turning in documents ahead of deadlines, unlike many other PRPs. In another case, Region 5 officials said they negotiated three SA agreements for remedial investigations and feasibility studies covering 19 sites of a similar nature with the same PRP. Region 5 officials noted that these negotiations were particularly smooth and cooperative. Moreover, the Region 4 study also found, based on interviews with PRPs and EPA officials, that the tone of SA negotiations is more productive than at NPL sites. However, given the relatively limited number of negotiations for both NPL and SA agreement sites in our analysis, the differences in the average length of negotiations cannot be attributed entirely to the type of approach used at each site. The SA agreement and similar NPL sites in our analysis showed mixed results in the length of time it took to complete specific cleanup activities, with SA agreement sites taking substantially longer for remedial investigations and feasibility studies on average and about the same time for remedial designs and remedial actions. While SA agreement sites took substantially longer on average than NPL sites to complete remedial investigations and feasibility studies, these differences do not appear to be exclusively attributable to the SA and NPL approaches. For example, several remedial investigations and feasibility studies at SA agreement sites took a long time to complete due to individual circumstances at the site, such as dealing with a proposal to sell on-site materials to a manufacturing company, late participation from PRPs in the process, or coordination with other cleanup efforts. SA agreement sites and NPL sites in our analysis took about the same time on average to complete remedial designs and remedial actions. A lower proportion of SA agreement sites have reached construction completion compared with similar NPL sites in our analysis (see fig. 6). Multiple cleanup activities can occur within a given phase at the same or different operable units at one site. Completion of one cleanup activity, such as a remedial investigation and feasibility study, does not necessarily mean all work in that phase has been completed. Our analysis looks at individual activities within given phases. EPA regional officials are responsible for choosing the appropriate long-term cleanup approach for sites with contamination that makes them eligible for the NPL. To do so, they select from among several approaches, including deferring responsibility for the oversight of site cleanup outside of the Superfund program. Of these sites deferred outside of the Superfund program, EPA has deferred about 1,800 sites through OCA deferrals—more sites than any other approach—but the agency has not issued guidance focused on this long-term cleanup approach. Instead, EPA describes OCA deferrals in the Superfund Program Implementation Manual, which does not clearly define each type of OCA deferral, particularly OCA deferrals to private parties. This has led to inconsistent coding of OCA deferrals in CERCLIS by different regions. Moreover, EPA’s guidance does not specify in detail the documentation regions should have to support their decisions on OCA deferrals or completion of cleanup at these sites. 
As a result, EPA regions collected varying types and amounts of documentation—including, in some cases, no documentation—to support OCA deferrals. EPA officials noted they were currently working on additional guidance for OCA deferrals, but they had not set an issuance date for this guidance. Without clearer guidance on OCA deferrals, EPA does not have reasonable assurance that it can consistently track its OCA deferral sites in CERCLIS or that its regions’ documentation will be appropriate or sufficient to verify that these sites have been deferred or have completed cleanup. In addition, EPA officials could not provide a reliable estimate regarding the number of sites with long-term cleanups under the Superfund program that are being cleaned up through approaches other than the NPL and SA approaches—the “other” Superfund program sites—because there is no consistently applied method for tracking them. While the agency’s estimates of the number of such sites are relatively small, without a method to identify and track such sites, it is difficult for EPA headquarters to determine the extent to which regions use this other approach under the Superfund program, evaluate regions’ use of this approach, or hold regions accountable for using this approach. Furthermore, EPA guidance has made clear since 2002 that the agency should try to make SA agreement sites equivalent to NPL sites in terms of the level of cleanup achieved, among other things. EPA has largely accomplished this through adherence to the Superfund cleanup process and by adding certain provisions to SA agreements to address key differences between the NPL and SA approaches. However, the agency has not clarified to regions in its guidance that the SA approach is the preferred approach for long-term cleanup of sites under the Superfund program not listed on the NPL. Without clear guidance, agreements at such sites may be denied some of the advantages built into the SA agreements to ensure that the cleanups will be comparable to those under the NPL approach. Also, while EPA accurately identifies NPL sites in CERCLIS, the agency cannot do the same for SA agreement sites because it has not clarified in writing when the database code that identifies sites with SA agreements should remain in place and when it should be removed. In addition, EPA’s standards for specifying what documentation is sufficient to support the Hazard Ranking System score at SA agreement sites are less clear than those for NPL sites. Unless EPA improves its tracking of SA agreement sites and clarifies its policies, its ability to effectively track outcomes of the SA approach at these sites and manage long-term cleanups at sites under the Superfund program may be hampered. Finally, while EPA reports performance information for NPL sites under GPRA, it does not report performance information on the progress of cleanup at SA agreement sites in an equivalent manner. Without such information on SA agreement sites, Congress lacks complete information on the progress of the Superfund program to inform its legislative actions, including appropriations. 
To improve the Superfund program’s management of sites with contamination that makes them eligible for the NPL, including management of the SA approach and deferrals of cleanup oversight to other entities, we recommend that the Administrator of EPA take the following four actions: Provide guidance to EPA regions that defines each type of OCA deferral and what constitutes adequate documentation for OCA deferral and completion of cleanup. Develop a method for EPA headquarters to identify and track other sites with long-term cleanups under the Superfund program (i.e., those that are outside of the NPL and SA approaches). Update EPA’s written policies on SA agreement sites, including taking steps such as clarifying whether the SA approach is EPA’s preferred approach for long-term cleanup of sites under the Superfund program and outside of the NPL, specifying what documentation is sufficient to support the Hazard Ranking System score at SA agreement sites, and defining when the database code that identifies sites with SA agreements should remain in place. Report performance information on the progress of cleanup at SA agreement sites in a manner that is equivalent to such reporting for NPL sites. We provided a draft of this report to EPA for review and comment. In written comments, which are included in appendix IV, EPA agreed with the report’s recommendations and stated that it believes the report contains substantial useful information. Regarding the first recommendation, EPA stated that it added more detail on OCA tracking in its fiscal year 2012 Superfund Program Implementation Manual, but it acknowledged that more guidance is needed. Regarding the second recommendation, EPA stated that it agreed with the recommendation without further comment. Regarding the third recommendation, EPA said that it will clarify that the SA approach is generally the agency's preferred enforcement approach for CERCLA non-NPL sites that are “NPL-caliber,” where feasible and appropriate. Finally, regarding the fourth recommendation, EPA stated that it agrees with this recommendation as it pertains to reporting under GPRA and provided further information on how EPA reports measures at SA agreement sites. EPA also provided technical comments on the draft report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of the Environmental Protection Agency, the appropriate congressional committees, and other interested parties. In addition, the report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix provides information on the scope of the work and the methodology used to examine (1) how the Environmental Protection Agency (EPA) addresses the cleanup of sites it has identified as eligible for the National Priorities List (NPL), (2) how the processes for implementing the Superfund Alternative (SA) and NPL approaches compare, and (3) how SA agreement sites compare with similar NPL sites in completing the cleanup process. 
To examine how EPA addresses the cleanup of hazardous waste sites with a level of contamination that makes them eligible for the NPL, we analyzed applicable federal statutes and EPA regulations and guidance to determine the available approaches to address sites that are reported to the Superfund program. We then obtained and analyzed data from EPA's Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS), the Superfund program's database, as of December 2012. Specifically, we analyzed EPA's CERCLIS database to determine how many sites EPA currently classified as undergoing long-term cleanup under each approach, both nationally and by each of EPA's 10 regions. According to EPA officials, sites under each of the long-term cleanup approaches would have a Hazard Ranking System score of at least 28.50; otherwise, the site would have been classified as "no further remedial action planned." Thus, all sites identified as being under a long-term cleanup approach were considered to have contamination making them eligible for the NPL. This analysis involved the review of EPA's non-NPL status code, NPL status code, and SA code. In addition, we conducted semistructured interviews with officials in all 10 EPA regions to understand each region's processes for selecting among long-term cleanup approaches and why regions used the various approaches. We also obtained relevant supporting documentation from these regional officials. In addition, we interviewed EPA headquarters officials about the assessment process and cleanup approaches. Finally, we interviewed a nonprobability, convenience sample of officials from 13 state cleanup programs who were familiar with available cleanup approaches. The convenience sample consisted of representatives from state environmental departments taking part in an Association of State and Territorial Solid Waste Management Officials conference call who agreed to speak with us. Because this was a nonprobability sample, the results of our analysis cannot be generalized to all states; however, these officials provided important information about the cleanup process. To compare the processes for implementing the SA and NPL approaches, including the cleanup process and EPA's oversight, we analyzed available documentation on the two approaches, including guidance and prior reviews. These reviews included an EPA Inspector General (IG) report on the SA approach, as well as several reports on the approach by EPA. We reviewed key findings and recommendations from the IG's report, as well as the evidence provided by EPA to demonstrate its implementation of the report's recommendations. We found the evidence to be sufficient to assess whether EPA had implemented these recommendations. In addition, we interviewed officials in all 10 EPA regions to determine how each region implemented the SA approach and obtained relevant supporting documentation. Finally, we interviewed EPA headquarters officials knowledgeable about the SA approach. 
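To make the classification step more concrete, the sketch below groups a handful of made-up site records by cleanup approach using stand-in codes and the 28.50 eligibility threshold described above. It is a minimal illustration of the logic only: the column names, code values, and records are assumptions, not actual CERCLIS field names or data.

```python
# Minimal sketch of classifying NPL-eligible sites by cleanup approach,
# using hypothetical stand-ins for CERCLIS status codes.
import pandas as pd

sites = pd.DataFrame({
    "site_id":        ["A", "B", "C", "D", "E"],
    "hrs_score":      [50.1, 27.0, 30.0, 70.2, 29.1],   # Hazard Ranking System score
    "npl_status":     ["F", None, None, "F", None],      # "F" = final NPL listing (assumed code)
    "sa_code":        [None, None, "SA", None, None],    # present if an SA agreement exists (assumed)
    "non_npl_status": [None, None, None, None, "OCA"],   # deferral or other approach (assumed)
})

# Only sites scoring at least 28.50 are treated as eligible for the NPL.
eligible = sites[sites["hrs_score"] >= 28.50].copy()

def classify(row):
    if row["npl_status"] == "F":
        return "NPL"
    if row["sa_code"] == "SA":
        return "SA agreement"
    if row["non_npl_status"] == "OCA":
        return "OCA deferral"
    return "Other"

eligible["approach"] = eligible.apply(classify, axis=1)
print(eligible.groupby("approach")["site_id"].count())
```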
To compare how SA agreement sites and similar NPL sites complete the cleanup process, we identified SA agreement sites and constructed a comparison group of 74 NPL sites with agreements between EPA and potentially responsible parties (PRP) similar to those at SA agreement sites as follows: We identified 67 SA agreement sites using the SA code and added to that 3 SA agreement sites with their SA code removed after the site was listed on the NPL, for a total of 70 SA agreement sites; we identified these three sites through our interviews with EPA officials. We then obtained data on the legal actions taken at these sites from EPA officials in the Office of Site Remediation Enforcement, which included all agreements at these sites. Based on discussions with EPA officials and the SA guidance, we isolated agreements at SA agreement sites by selecting: (1) agreements entered into between June 2002 (the date of the issuance of the first SA guidance) and December 2012; (2) administrative orders on consent or consent decrees; and (3) agreements involving a PRP-led combined remedial investigation and feasibility study, remedial design, or remedial action. After excluding four sites with SA codes that had SA agreements that were not relevant to our study, we had 66 SA agreement sites for our analysis. We constructed our comparison group of 74 NPL sites starting with the approximately 1,300 sites on the NPL. Specifically, we identified the 702 sites with (1) a combined remedial investigation and feasibility study, (2) remedial design, or (3) remedial action led by a PRP. We requested data on the legal actions taken at these sites from EPA officials and identified agreements similar to SA agreements based on the date the agreement was entered into, the type of agreement, and whether it included PRP-led long-term cleanup actions. In addition, we dropped any NPL sites from Regions 1 and 2 from the analysis because neither region has used the SA approach. To more precisely align the NPL comparison group with SA agreement sites, we analyzed, for SA agreements, the number of PRPs involved and estimated costs for PRP-led actions. According to EPA officials, SA agreement sites generally tend to have fewer PRPs. Based on this analysis and EPA's comments, we established thresholds for different variables that agreements in our NPL comparison group could not exceed. Specifically, NPL agreements could have: (1) no more than seven PRPs involved and (2) administrative orders on consent with estimated values between $100,000 and $5,000,000 or consent decrees with estimated values between $125,000 and $30,000,000. These ranges covered the vast majority of SA agreements. After we identified the NPL sites with agreements similar to SA agreement sites, we merged the data on the legal actions with cleanup action data for NPL and SA agreement sites. We kept (1) combined remedial investigation and feasibility studies, (2) remedial designs, and (3) remedial actions at sites if the action was explicitly listed as a remedy in an SA agreement or an SA-similar agreement (for NPL sites). We identified negotiations related to cleanup actions of interest by comparing the completion date of the negotiation with the completion date of the agreement in EPA's legal action data. For remedial investigation and feasibility study negotiations, we kept any negotiation with a completion date up to 180 days before the date of an administrative order on consent for that site. 
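As a rough illustration of the screening rules just described, the following sketch filters a small, hypothetical table of NPL agreements by signing date, agreement type, PRP-led status, region, number of PRPs, and estimated value. All field names and example records are assumptions made for illustration; they are not EPA data.

```python
# Sketch of screening NPL agreements for similarity to SA agreements,
# applying the thresholds described above; all field names are assumed.
import pandas as pd

agreements = pd.DataFrame({
    "site_id":   ["N1", "N2", "N3", "N4"],
    "region":    [4, 5, 1, 4],
    "agr_type":  ["AOC", "CD", "AOC", "CD"],   # administrative order on consent / consent decree
    "signed":    pd.to_datetime(["2005-03-01", "2010-07-15", "2003-01-10", "1999-05-20"]),
    "prp_led":   [True, True, True, True],      # PRP-led RI/FS, remedial design, or remedial action
    "num_prps":  [3, 6, 2, 12],
    "est_value": [1_500_000, 8_000_000, 250_000, 40_000_000],
})

in_window = (agreements["signed"] >= "2002-06-01") & (agreements["signed"] <= "2012-12-31")
prp_led   = agreements["prp_led"]
not_r1_r2 = ~agreements["region"].isin([1, 2])   # Regions 1 and 2 never used the SA approach
value_ok  = (
    ((agreements["agr_type"] == "AOC") & agreements["est_value"].between(100_000, 5_000_000)) |
    ((agreements["agr_type"] == "CD") & agreements["est_value"].between(125_000, 30_000_000))
)
few_prps  = agreements["num_prps"] <= 7

comparison_group = agreements[in_window & prp_led & not_r1_r2 & value_ok & few_prps]
print(comparison_group["site_id"].tolist())      # expected: ['N1', 'N2']
```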
For remedial design and remedial action negotiations, we kept any negotiation with a completion date up to 2 years before the date of a consent decree. After keeping these cleanup actions of interest, we computed the durations of specific cleanup activities by calculating the difference in months between the start and completion dates of identified actions included in CERCLIS. We then calculated the mean and median durations for the SA and NPL groups, as well as related ranges. We compared the means and medians of the durations to assess whether reported results are affected by a possible skewed distribution. We decided to report the median because it is less sensitive to extreme values and provides a better estimate of the "average" duration for this analysis. Because only three SA agreement sites had reached the construction completion milestone, we were unable to compare the groups across the entire cleanup process; instead, we compared completion of specific activities, such as remedial designs. The results of our analysis cannot be generalized to all NPL sites because the 74 sites were a subset of all NPL sites selected to be as similar as possible to SA agreement sites based on key characteristics related to cleanup durations, such as having a PRP that agreed to conduct at least some part of the cleanup. The comparison group was created for purposes of assessing whether alternative approaches for addressing the long-term cleanup of hazardous waste sites under the Superfund program can make a difference in cleanup durations and not for making generalizations about the larger universe of all NPL sites. We conducted additional analyses on our SA and NPL groups to determine if there were any unaccounted distributional differences within each group that would materially affect our results. Specifically, we examined the sensitivity of our results to differences in regional distribution because the SA approach has different regional usage patterns than the NPL approach. While 85 percent of SA agreement sites are in Regions 4 and 5, only 34 percent of the similar NPL sites are in Regions 4 and 5. In one analysis, we restricted SA agreement and similar NPL sites to Regions 4 and 5, and the results were generally similar to the analysis using the full set of SA agreement and similar NPL sites. In addition, we examined the sensitivity of our results to differences in the complexity of SA agreement sites and similar NPL sites measured through the distribution of megasites and single operable unit sites in each group. The results for length of negotiations were not sensitive to differences between SA agreement sites and similar NPL sites in the distribution of megasites, though the results for the length of cleanup activities were somewhat sensitive to distributional differences. The results for length of negotiation and cleanup durations were, in general, not sensitive to differences in the distribution of sites with one or more operable units. To assess the reliability of the data from EPA's CERCLIS database used in this report, we analyzed related documentation, examined the data for errors or inconsistencies, and interviewed agency officials about any known data problems and to learn more about their procedures for maintaining the data. Where there were discrepancies in the data, we worked with EPA officials to clarify. For example, we identified certain SA agreement sites that did not appear to have agreements with long-term cleanup actions and reviewed these with EPA officials. 
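The duration arithmetic and summary statistics described above can be sketched as follows. The column names, groups, and dates are hypothetical, and the calendar-month difference shown is one reasonable convention rather than necessarily the exact rule applied in the analysis.

```python
# Sketch of computing cleanup-activity durations in months and summarizing
# them by group (SA vs. NPL); records and column names are illustrative.
import pandas as pd

actions = pd.DataFrame({
    "group":    ["SA", "SA", "NPL", "NPL", "NPL"],
    "activity": ["RI/FS", "RD", "RI/FS", "RD", "RA"],
    "start":    pd.to_datetime(["2004-01-15", "2008-03-01", "2005-06-01", "2009-02-10", "2010-05-01"]),
    "finish":   pd.to_datetime(["2009-07-20", "2010-01-05", "2009-04-15", "2010-11-30", "2013-04-01"]),
})

# Duration as the difference in calendar months between start and completion.
actions["months"] = (
    (actions["finish"].dt.year - actions["start"].dt.year) * 12
    + (actions["finish"].dt.month - actions["start"].dt.month)
)

# Medians are less sensitive than means to a few unusually long activities.
summary = actions.groupby(["group", "activity"])["months"].agg(["mean", "median", "min", "max"])
print(summary)
```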
Miscoded data were corrected, and EPA officials provided explanations for unique circumstances with certain agreements. We determined the data to be sufficiently reliable for calculating durations for completing different cleanup activities, including negotiations, at SA and NPL sites. We conducted this performance audit from November 2011 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 1 and 2 provide a breakdown of cleanup approaches by region. Table 1 shows the number of sites within each region that are being cleaned up under the various cleanup approaches. Table 2 shows each region’s percentage of the total number of sites cleaned up under each approach. In this appendix, we discuss the results of our analysis of the median length of negotiations and the median length of cleanup activities at SA agreement sites and similar NPL sites, which consisted of NPL sites with agreements similar to SA agreements. Appendix I includes more information on our methodology. As shown in table 3, for agreements with PRPs finalized from June 2002 through December 2012, SA agreement sites and similar NPL sites in our analysis showed mixed results in the length of time to complete negotiations, with SA agreement sites taking about as long as similar NPL sites for remedial investigation and feasibility study negotiations and less time for remedial design and remedial action negotiations. Given the relatively limited number of negotiations for both NPL and SA agreement sites in our analysis and the effect of unique sites, the differences in the median length of negotiations cannot be attributed entirely to the type of approach used at each site. Unique conditions at each site have the potential to affect negotiations between EPA and the PRP beyond the cleanup approach selected. As shown in table 4, the SA agreement sites and similar NPL sites in our analysis showed mixed results in the length of time it took to complete specific cleanup activities, with SA agreement sites taking longer for remedial investigations and feasibility studies on average and about the same time for remedial designs and remedial actions on average. Twelve of the 14 remedial investigations and feasibility studies at SA sites took longer than 50 months to complete, which is greater than the median for NPL sites in our analysis, as well as the median of 51 estimated by EPA for PRP-led remedial investigation and feasibility studies that began after June 2002. However, given the relatively small number of cleanup activities for both NPL and SA agreement sites in our analysis and differences at the site level, the differences in the median length of cleanup activities cannot be attributed entirely to the type of approach used at each site. For example, several remedial investigations and feasibility studies at SA sites took a long time to complete due to individual circumstances at the site, such as dealing with a proposal to sell on-site materials to a manufacturing company, late participation from PRPs in the process, or coordination with other cleanup efforts. 
SA agreement sites and NPL sites in our analysis took slightly less than 2 years on average to complete remedial designs and slightly less than 3 years on average to complete remedial actions. In addition to the individual named above, Vincent P. Price, Assistant Director; Elizabeth Beardsley; Eric Charles; Pamela Davidson; Armetha Liles; Cynthia Norris; and Nico Sloss made key contributions to this report. | Under the Superfund program, EPA may address the long-term cleanup of certain hazardous waste sites by placing them on the NPL and overseeing the cleanup. To be eligible for the NPL, a site must be sufficiently contaminated, among other things. EPA regions have discretion to choose among several other approaches to address sites eligible for the NPL. For example, under the Superfund program, EPA regions may enter into agreements with PRPs using the SA approach. EPA may also defer the oversight of cleanup at eligible sites to approaches outside of the Superfund program. GAO was asked to review EPA's implementation of the SA approach and how it compares with the NPL approach. This report examines (1) how EPA addresses the cleanup of sites it has identified as eligible for the NPL, (2) how the processes for implementing the SA and NPL approaches compare, and (3) how SA agreement sites compare with similar NPL sites in completing the cleanup process. GAO reviewed applicable laws, regulations, and guidance; analyzed program data as of December 2012; interviewed EPA officials; and compared SA agreement sites with 74 NPL sites selected based on their similarity to SA agreement sites. The Environmental Protection Agency (EPA) most commonly addresses the cleanup of sites it has identified as eligible for the National Priorities List (NPL) by deferring oversight of the cleanup to approaches outside of the Superfund program. As of December 2012, of the 3,402 sites EPA identified as potentially eligible, EPA has deferred oversight of 1,984 sites to approaches outside the Superfund program, including 1,766 Other Cleanup Activity (OCA) deferrals to states and other entities. However, EPA has not issued guidance for OCA deferrals as it has for the other cleanup approaches. Moreover, EPA's program guidance does not clearly define each type of OCA deferral or specify in detail the documentation EPA regions should have to support their decisions on OCA deferrals. Without clearer guidance on OCA deferrals, EPA cannot be reasonably assured that its regions are consistently tracking these sites or that their documentation will be appropriate or sufficient to verify that these sites have been deferred or have completed cleanup. Under the Superfund program, EPA oversees the cleanup of 1,313 sites on the NPL, 67 sites under the Superfund Alternative (SA) approach, and at least 38 sites under another undefined approach. The processes for implementing the SA and NPL approaches, while similar in many ways, have several differences. EPA has accounted for some of these differences in its SA guidance by listing specific provisions for SA agreements with potentially responsible parties (PRP), such as owners and operators of a site. One such provision helps ensure cleanups are not delayed by a loss of funding if the PRP stops cleaning up the site. However, some EPA regions have entered into agreements with PRPs at sites that officials said were likely eligible for the SA approach without following the SA guidance. Such agreements may not benefit from EPA's provisions for SA agreements. 
EPA headquarters officials said the agency prefers regions to use the SA approach at such sites, but EPA has not stated this preference explicitly in its guidance. In addition, EPA's tracking and reporting of certain aspects of the process under the SA approach differs from that under the NPL approach. As a result, EPA's tracking of SA agreement sites in its Superfund database is incomplete; the standards for documenting the NPL eligibility of SA agreement sites are less clear than those for NPL sites; and EPA is not publicly reporting a full picture of SA agreement sites. Unless EPA makes improvements in these areas, its management of the process at SA agreement sites may be hampered. The SA agreement sites showed mixed results in completing the cleanup process when compared with 74 similar NPL sites GAO analyzed. Specifically, SA agreement and NPL sites in GAO's analysis showed mixed results in the average time to complete negotiations with PRPs and for specific cleanup activities, such as remedial investigation and feasibility studies, remedial designs, and remedial actions. In addition, a lower proportion of SA agreement sites have completed cleanup compared with similar NPL sites. SA agreement sites tend to be in earlier phases of the cleanup process because the SA approach began more recently than the NPL approach. Given the limited number of activities for both NPL and SA agreement sites in GAO's analysis, these differences cannot be attributed entirely to the type of approach used at each site. GAO recommends, among other things, that EPA issue guidance to define and clarify documentation requirements for OCA deferrals and clarify its policies on SA agreement sites. EPA agreed with the report's recommendations. |
Vaccination is the primary method for preventing influenza and its more severe complications. Flu vaccine is produced and administered annually to provide protection against particular influenza strains expected to be prevalent that year. When the match between the vaccine and the circulating viruses is close, vaccination may prevent illness in about 70-90 percent of healthy people aged 64 or younger. It is somewhat less effective for the elderly and those with certain chronic diseases but, according to CDC, it can still prevent secondary complications and reduce the risk for influenza-related hospitalization and death. CDC estimates that during the average flu season, for every 1 million elderly persons who are vaccinated, approximately 1,300 hospitalizations and 900 deaths are prevented. Information on which groups are at highest risk for medical complications associated with influenza and recommendations on who should receive a flu shot are issued by CDC's Advisory Committee on Immunization Practices (ACIP). Because the flu season generally peaks between December and early March, and because immunity takes about 2 weeks to establish, most medical providers administer vaccinations between October and mid-November. CDC's ACIP recommended this period as the best time to receive a flu shot. However, if flu activity peaks in February or March, as it has in 10 of the past 19 years, vaccination in January or later can still be beneficial. Producing the vaccine is a complex process that involves growing viruses in millions of fertilized chicken eggs. This process, which requires several steps, generally takes at least 6 to 8 months between January and August each year. Each year's vaccine is made up of three different strains of influenza viruses, and typically each year, one or two of the strains are changed to better protect against the strains that are likely to be circulating during the next flu season. FDA decides which strains to include and also licenses and regulates the manufacturers that produce the vaccine. Three manufacturers—two in the United States and one in the United Kingdom—produced the vaccine used during the 2000-01 flu season. Much like other pharmaceutical products, flu vaccine is sold to thousands of purchasers by manufacturers, numerous medical supply distributors, and other resellers such as pharmacies. Purchasers then administer flu shots in medical offices, public health clinics, nursing homes and pharmacies, as well as in less traditional settings such as grocery stores and other retail outlets, senior centers, and places of employment. For the 1999-2000 flu season, about 77 million doses of vaccine were distributed nationwide. CDC estimates that about half of the vaccine was administered to people with high-risk conditions and to health care workers, and the balance was administered to healthy people younger than 65 years. Overall, manufacturing problems led to vaccine production and distribution delays of about 6-8 weeks in 2000-01. Although the eventual supply was about the same as the previous year's, the delay limited the amount of vaccine available during October and early November, the period when most people normally receive their flu shot. While the effect of the delay and initial shortage in terms of the number of high-risk persons vaccinated will not be known for some time, other effects can be observed, particularly in terms of the price of the vaccine. 
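Before turning to those effects, the CDC estimate quoted earlier in this section (about 1,300 hospitalizations and 900 deaths prevented for every 1 million elderly persons vaccinated) can be applied with simple arithmetic. The sketch below is a back-of-the-envelope illustration only; it assumes the per-million rates scale linearly with the number vaccinated.

```python
# Back-of-the-envelope use of CDC's per-million estimates quoted above;
# assumes prevented outcomes scale linearly with the number vaccinated.
HOSPITALIZATIONS_PREVENTED_PER_MILLION = 1_300
DEATHS_PREVENTED_PER_MILLION = 900

def prevented(vaccinated_elderly):
    scale = vaccinated_elderly / 1_000_000
    return (scale * HOSPITALIZATIONS_PREVENTED_PER_MILLION,
            scale * DEATHS_PREVENTED_PER_MILLION)

# Example: 10 million elderly persons vaccinated in an average season.
hosp, deaths = prevented(10_000_000)
print(f"~{hosp:,.0f} hospitalizations and ~{deaths:,.0f} deaths prevented")
```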
Providers who decided to purchase vaccine from those distributors who had it available during the October and November period of limited supply and higher demand often found prices that were several times higher than expected. Many providers who decided to wait for their orders placed earlier eventually received them, and at the lower prices they had initially contracted for. By December, as vaccine supply increased and demand dropped, prices declined. For the 2000-01 flu season, manufacturers collectively took about 6-8 weeks longer than normally expected to produce and distribute all of the flu vaccine. This delay meant that the bulk of the vaccine was not ready for market during the period of October and early November that CDC recommended as the best time to receive flu shots. This is also the time when most practitioners are used to administering the vaccine and when most people are used to receiving it. In 1999, more than 70 million doses of vaccine were available by the end of October; in 2000, fewer than 28 million doses were available by that date. Two main factors contributed to the delay. The first was that two manufacturers had unanticipated problems growing one of the two new influenza strains introduced into the vaccine for 2000-01. Because manufacturers must produce a vaccine that includes all three strains selected for the year, delivery was delayed until sufficient quantities of this difficult strain could be produced. The second factor was that two of the four manufacturers that produced vaccine the previous season shut down part of their manufacturing facilities because of FDA concerns about compliance with good manufacturing practices. One manufacturer temporarily closed on its own initiative to make facility improvements and address quality control issues raised during an FDA inspection; the other was ordered by FDA to cease production until certain actions were taken to address a number of concerns, including issues related to safety and quality control. The former reopened its facilities but the other manufacturer, which had been expected to produce 12-14 million doses for the 2000-01 flu season, announced in September 2000 that it would cease production altogether and, as a result, supplied no vaccine for 2000-01. These problems did not affect every manufacturer to the same degree. In particular, the manufacturer that produced the smallest volume of vaccine did not experience production problems or delays in shipping its vaccine. By the end of October, this manufacturer had distributed nearly 85 percent of its vaccine, while the two other manufacturers had shipped only about 40 percent and less than 15 percent, respectively. Purchasers who ordered their vaccine from the manufacturer with no major production problems were far more likely to receive their vaccine on time. For example, the state of Alabama ordered vaccine directly from all three manufacturers before July 2000 at a similar price per dose. As table 1 shows, the state received its shipments at markedly different times, reflecting how soon each manufacturer was able to get its vaccine to market. Purchasers that contracted only with the late-shipping manufacturers were in particular difficulty. For example, health departments and other public entities in 36 states banded together under a group purchasing contract and ordered nearly 2.6 million doses from the manufacturer that ended up having the greatest delays from production difficulties. 
Some of these public entities, which ordered vaccine for high-risk people in nursing homes or clinics, did not receive most of their vaccine until December, according to state health officials. The 2000-01 experience illustrates the fragility of the vaccine supply. Because influenza virus strains take a certain period of time to grow, the process cannot be accelerated to make up for lost time. When manufacturers found that one strain for the vaccine was harder to produce than expected, they adjusted their procedures to achieve acceptable yields, but the strain still took months to produce. Given that only three manufacturers remain, the difficulties associated with vaccine production, and the need to formulate a new vaccine involving one or more new strains each year, the future vaccine supply is uncertain. Problems at one or more manufacturers can significantly upset the traditional fall delivery of influenza vaccine. Because supply was limited during the usual vaccination period, distributors and others who had supplies of the vaccine had the ability—and the economic incentive—to sell their supplies to the highest bidder during this time rather than filling lower-priced orders they had already received. According to distributors and purchasers, a vaccine order's price, quantity, and delivery might not be guaranteed. When no guarantee or meaningful penalty applies, orders can be cancelled or cut and deliveries can be delayed when vaccine is in short supply. Because of the production delays, many purchasers found themselves with little or no vaccine when the peak time came for vaccinations. Many of these purchasers had ordered vaccine months earlier at agreed-upon prices, with delivery scheduled for early fall. While some orders were cancelled outright or cut substantially, many purchasers were told that the vaccine was still being produced and that their full order would be delayed but delivered as soon as possible. This left many purchasers with a choice: they could take a risk and wait for the vaccine they had ordered, or they could try to find vaccine immediately to better ensure that patients were vaccinated before the flu season struck. Most of the physician groups and state health departments that we contacted reported that they waited for delivery of their early orders. For example, of the 53 physician group practices we surveyed that ordered vaccine before the end of June 2000, 34 groups waited for delivery of these original orders. Those who purchased vaccine in the fall—because they did not want to wait for their early orders to be delivered later, had orders canceled or reduced, or just ordered later—found themselves paying much higher prices. The following examples illustrate the higher prices paid to make up for reduced orders or delayed delivery: The state of Hawaii initially ordered 12,000 doses of vaccine from one distributor in June at $2.80 per dose. When the distributor cut the order by one-third, the state purchased vaccine from another distributor in September at a price between $5.00 and $6.00 per dose. One physician practice ordered flu vaccine from a supplier in April 2000 at $2.87 per dose. When it received none of that vaccine by November 1, the practice placed three smaller orders in November with a different supplier at the escalating prices of $8.80, $10.80, and $12.80 per dose. By the first of December, the practice ordered more vaccine from a third supplier at $10.80 per dose. 
The four more expensive orders were delivered immediately, before any vaccine had been received from the original April order. The data we collected from 58 physician group practices around the country provide another indication of how prices spiked during the period of high demand in October and November. Overall, the price paid by these practices averaged $3.71 per dose. However, as table 2 shows, the average price paid for orders placed by these practices in October and November was about $7 per dose, compared with about $3 per dose for advance orders placed in June or before. While some vaccine was available to those willing to pay a higher price in October and November, some purchasers trying to buy vaccine reported that they were unable to find vaccine from any supplier at any price during that time. For example, one large health maintenance organization told us that when delivery of its early order was delayed, it could not find any source with the large number of doses it needed and ended up waiting until November and December for delivery of more than a million doses it had ordered in the spring. Vaccine prices came down as a large quantity of vaccine was delivered in December, after the prime period for flu vaccinations had passed. Vaccine became increasingly available in December and manufacturers and distributors delivered the orders or parts of orders that had been postponed. In addition, recognizing the potential shortfall in production, CDC contracted in September 2000 with one manufacturer to extend production into late December for 9 million additional doses. Providers buying vaccine in December could do so at prices similar to those in place during the spring and summer. Among the physician groups we contacted, none of which ordered under the CDC contract, the price for orders placed in December or later averaged about $3.50 per dose—somewhat above the average price paid through June, but about half of the average price of orders placed in October and November. Although vaccine was plentiful by December, fewer people were seeking flu shots at that time. According to manufacturers and several large distributors, demand for influenza vaccine typically drops by November and it is difficult to sell vaccine after Thanksgiving. Despite efforts by CDC and other public health officials to encourage people to obtain flu shots later in the 2000-01 season, providers and other purchasers still reported a drop in demand for flu shots in December 2000. A reason people did not continue to seek flu shots in December and later may have been that the 2000-01 flu season was unusually light. Data collected by CDC’s surveillance system showed relatively low influenza activity and mortality. While mortality due to influenza and pneumonia— one indicator of the severity of a flu season—had surpassed CDC’s influenza epidemic thresholds every year since 1991, it had not done so by April of the 2000-01 season. Had a flu epidemic hit in the fall or early winter, the demand for influenza vaccine may have increased substantially. As a result of the waning demand, manufacturers and distributors reported having more vaccine than they could sell. Manufacturers reported shipping about 70 million doses, or about 9 percent less than the previous year. More than 7 million additional doses produced under the CDC contract were never shipped at all because of lack of demand. 
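To illustrate how order-level data of the kind gathered from the 58 physician group practices can be summarized into the per-dose averages reported above, the sketch below computes an overall and a per-period dose-weighted average price from a few hypothetical orders. The records and the simple weighting scheme are assumptions for illustration, not GAO's actual survey data or method.

```python
# Sketch of summarizing order-level flu vaccine prices by order period,
# using a dose-weighted average; all records here are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "practice": ["P1", "P2", "P3", "P4", "P5"],
    "period":   ["June or before", "June or before", "Oct-Nov", "Oct-Nov", "Dec or later"],
    "doses":    [500, 1200, 300, 200, 400],
    "price":    [2.87, 3.10, 7.50, 8.80, 3.40],   # dollars per dose
})

orders["cost"] = orders["doses"] * orders["price"]

overall = orders["cost"].sum() / orders["doses"].sum()
by_period = orders.groupby("period").agg(total_cost=("cost", "sum"),
                                          total_doses=("doses", "sum"))
by_period["avg_price"] = (by_period["total_cost"] / by_period["total_doses"]).round(2)

print(f"Overall average price per dose: ${overall:.2f}")
print(by_period["avg_price"])
```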
None of the physician practices that we contacted had ordered from the CDC contract, mainly because they were waiting for earlier orders to arrive or they had already received some or all of their vaccine. In addition, some physicians’ offices, employee health clinics, and other organizations that administered flu shots reported having unused doses in December and later. For example, the state of Oklahoma reported having more than 75,000 unused doses of vaccine. While it is difficult to determine if any of these events will affect the price of vaccine in the future, prices for early orders for the upcoming 2001-02 flu season have increased substantially over prior years’ prices. Physician practices, state public health departments, and other purchasers reported that their suppliers are quoting prices of $4 to $5 per dose, or about 50 to 100 percent higher than the early order prices for the 2000-01 season. Citing expenses associated with expanding the production capacity and the costs of maintaining a modern and compliant facility, one manufacturer notified customers of a significant price increase for 2001- 02. There is no mechanism currently in place to distribute flu vaccine to high- risk individuals before others. In a typical year, there is enough vaccine available in the fall to give a flu shot to anyone who wants one. When the supply was not sufficient in the fall of 2000, focusing distribution on high- risk individuals was difficult because all types of providers served at least some high-risk people. Lacking information to identify which orders should be filled first to serve the population most in need, manufacturers and distributors who did attempt to target higher-risk persons used a variety of approaches to distribute the limited vaccine. According to public health officials and providers, there was confusion in many communities as some providers were able to administer flu shots to anyone requesting one, while at the same time, other providers had no vaccine for even their highest-risk patients. Like other pharmaceutical products, influenza vaccine is distributed largely through multiple channels in the private sector that have evolved to meet the specific needs of different types of purchasers. Those selling and delivering vaccine include the manufacturers themselves, distributors of general medical supplies and pharmaceuticals, and other types of resellers such as pharmacies. According to data from the manufacturers, about half of all flu vaccine is purchased by providers directly from manufacturers and roughly half is purchased through distributors and resellers. As a general practice, manufacturers said they pre-sell almost all of their planned production volume by May or June of each year. Major distributors and other large volume purchasers, including state health departments, can obtain the most favorable prices by ordering directly from manufacturers during this early order period. The distributors and other resellers can then offer smaller purchasers such as physicians’ offices the convenience and flexibility of buying flu vaccine along with their other medical supplies. Most experts we interviewed agreed that when the supply of vaccine is sufficient, reliance on these varied distribution channels allows for the successful delivery of a large volume of influenza vaccine in time for the annual fall vaccination period. Providers of flu vaccine also represent a diverse group. 
The annual influenza vaccine is widely available as a convenience item outside the usual medical settings of physicians’ offices, clinics, and hospitals. Millions of individuals, including those who are not at high risk, receive flu shots where they work or in retail outlets such as drugstores and grocery stores. Some of these providers order their own flu vaccine from a manufacturer or distributor, others participate in different types of purchasing groups, and others contract with organizations such as visiting nurse agencies to come in and administer the vaccine. The widespread availability of flu shots at both traditional medical settings and at convenience locations where people shop, work, and play may contribute to increased immunization rates. HHS survey data show that between 1989 and 1999, influenza immunization rates more than doubled for individuals aged 65 and older (see table 3). During that same period, however, immunization rates increased more than five-fold for the 18-49 year age group, which includes individuals who are likely to be at lower risk and to receive flu shots in nonclinical settings. While access to flu shots in a wide range of settings is an established mass immunization strategy, some physicians and public health officials view it as less than ideal for targeting high-risk individuals. Because of the expected delay or possible shortage of vaccine for the 2000-01 season, CDC and ACIP recommended in July 2000 that mass immunization campaigns be delayed until early to mid-November. CDC issued updated guidelines in October 2000 which stated that vaccination efforts should be focused on persons aged 65 and older, pregnant women, those with chronic health conditions that place them at high risk, and health care workers who care for them. Regarding mass immunization campaigns, these updated guidelines stated that while efforts should be made to increase participation by high-risk persons and their household contacts, other persons should not be turned away. Although some vaccination campaigns open to both high-risk and lower- risk individuals were delayed as recommended by CDC, many private physicians and public health departments raised concerns that they did not have vaccine to serve their high-risk patients at the time these campaigns were underway. The following are a few examples of promotional campaigns held across the nation that created controversy: One radio station sponsored a promotional event where a flu shot and a beer were available at a local restaurant and bar for $10 to whoever wanted one. One grocery store chain offered a discounted flu shot for anyone bringing in three soup can labels. Flu shots were available for purchase at a professional football stadium to all fans attending the game. We interviewed several retail outlets and employers and the companies they contract with to conduct mass immunization clinics. While some reported that they disseminated information on who was at high risk and stressed the need for priority vaccination among high-risk groups, they generally did not screen flu shot recipients for risk. The perspective of these companies was that the burden lies with the individual to determine his or her own level of risk, not with the provider. Moreover, they said that the convenience locations provide an important option for high-risk individuals, because physicians’ offices would have difficulty vaccinating all high-risk individuals during the optimal time period of October through mid-November. 
Other organizations held flu clinics open to lower-risk individuals in the early fall before realizing the extent of the vaccine supply problems. Because there generally has been enough vaccine to meet demand in recent years, there was little practical need for the fragmented distribution process to develop the capability to determine which purchasers might merit priority deliveries based on serving high-risk individuals. When the supply of vaccine was delayed in the fall of 2000, the manufacturers and distributors we interviewed reported that it was difficult to determine which of their purchasers should receive priority vaccine deliveries in response to the ACIP’s July and October 2000 recommendations to vaccinate high-risk groups first. Although some types of providers are more likely than others to serve high-risk individuals, it is likely that all types of providers serve at least some high-risk individuals. CDC and ACIP did not provide guidance about how to implement priority deliveries, and manufacturers and some distributors reported that they often did not have enough information about their customer base to make such decisions. As a result, they reported using various approaches in distributing their vaccine. One manufacturer reported that it initially followed its usual policies of distributing vaccine on the basis of initial order date—that is, orders were filled on a first in, first out basis—and honoring contracts with specific delivery dates. According to the manufacturer, a few contracts in which purchasers paid a premium price for an early delivery date received priority in distribution. However, less than halfway through its season’s distribution, this company notified customers at the end of October that it changed its policy in order to make partial shipments to all purchasers as a way of ensuring more equitable treatment for all. One manufacturer reported that it first shipped vaccine to nursing home customers (where such customers could be identified) and then made partial shipments to other customers. One manufacturer sold all of its vaccine in the United States through one distributor. That distributor, which also sold vaccine from the other manufacturers, told us that it attempted to give priority to orders from physicians and then orders from state and local governments. Other distributors we contacted also used varied approaches to distribute vaccine in 2000. For example, officials from one large medical supply distributor said that after a manufacturer cut its order substantially, the distributor gave priority to the medical practices that ordered early. The distributor reported that it cancelled all orders from resellers and pharmacies, cancelled all orders that came in after June 21, and reduced all orders from medical practices that came in before June 21 by an equal percentage. Another medical supply distributor said it did not sell vaccine to any providers that were not regular customers until it had filled the early orders of its regular customers. Officials from the Health Industry Distributors Association, a national trade association representing medical products distributors, said that distributors are limited in their ability to target certain types of people because they can only target distribution by type of provider, such as physicians’ offices, nursing homes, or hospitals. 
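The contrast between the allocation approaches described above, such as filling orders first in, first out versus making partial shipments to all purchasers, can be shown with a small sketch. The order quantities, the available supply, and the two simple rules below are hypothetical and illustrative only; they are not how any particular manufacturer or distributor actually allocated vaccine.

```python
# Sketch contrasting two ways of allocating a limited vaccine supply:
# first in, first out (FIFO) versus pro-rata partial shipments.
# Orders are (purchaser, doses ordered) in the order received; hypothetical data.
orders = [("clinic A", 40_000), ("nursing home B", 10_000),
          ("retail chain C", 30_000), ("health dept D", 20_000)]
available = 50_000  # doses on hand during the shortage

def fifo(orders, supply):
    """Fill earlier orders completely before later ones."""
    shipped = {}
    for purchaser, qty in orders:
        send = min(qty, supply)
        shipped[purchaser] = send
        supply -= send
    return shipped

def pro_rata(orders, supply):
    """Ship every purchaser the same fraction of its order."""
    total = sum(qty for _, qty in orders)
    fraction = min(1.0, supply / total)
    return {purchaser: int(qty * fraction) for purchaser, qty in orders}

print("FIFO:    ", fifo(orders, available))
print("Pro rata:", pro_rata(orders, available))
```

Under the FIFO rule the earliest purchasers receive their full orders and later purchasers receive nothing, while the pro-rata rule spreads the same limited supply evenly; neither rule, by itself, gives any priority to purchasers serving high-risk patients.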
All of the manufacturers and distributors we talked to said that once they distributed the vaccine it would be up to the purchasers and health care providers to target the available vaccine to high-risk groups. The success of these various approaches to reach high-risk groups was limited by the wide variety of paths the vaccine takes from the manufacturers to the providers who administer the flu shots. For example, although one manufacturer shipped available vaccine to the nursing homes it could identify in its customer base as first priority, this did not ensure that all nursing homes received vaccine for their high-risk patients on a priority basis. State health officials reported that nursing homes often purchase their flu vaccine from local pharmacies or rely on public health officials to provide the vaccine. In those cases, how quickly nursing homes received vaccine for their high-risk residents depended on the practices along the distribution chain—in some cases involving the practices of manufacturers, distributors, pharmacies, and public health providers. Physicians also reported that they did not receive priority, even though nearly two-thirds of the elderly who had flu shots in 1998-99 received them in medical offices. The American Medical Association and other physicians told us that in some communities vaccine was available at retail outlets and other sources before physicians’ offices. The 58 physician group practices we surveyed, which received nearly 90,000 doses from manufacturers, distributors, and other resellers reported receiving their vaccine at about the same time or slightly later than when manufacturers shipped more than 70 million doses (see table 4). Thus as a group these physician practices appeared to experience no priority in vaccine distribution. While HHS has no direct control over how influenza vaccine is purchased and distributed by the private sector and local governments during the annual influenza season, it has several initiatives under way to help mitigate the adverse effects of any future shortages and delays. Success of these various efforts, however, relies on collaboration between the public and private sectors. Completion of HHS’ national plan to respond to an influenza pandemic could help foster this type of collaboration and provide a foundation to deal with vaccine shortages or delays in non- pandemic years. In the meantime, increasing immunization rates against pneumococcal pneumonia, which can follow the flu, may help reduce influenza-related illness and death. In response to the production and distribution problems experienced with flu vaccine for the 2000-01 flu season, HHS has undertaken several initiatives. As shown in table 5, these initiatives include (1) conducting clinical trials on the feasibility of using smaller doses of vaccine for healthy 18- to 49-year-olds, (2) working with public and private sector entities involved in vaccine distribution to explore ways of better targeting vaccine to high-risk groups, (3) recommending state and local health department actions to prepare for a vaccine delay or shortage, and (4) revising guidelines to expand the recommended timing of influenza immunizations. Success of these initiatives relies to a great extent on the willingness of manufacturers, distributors, private physicians, other vaccine providers, and the public to cooperate. 
For example, if manufacturers requested and FDA approved the use of half-doses of vaccine for certain healthy adults while full-doses of vaccine were given to high-risk adults, implementation strategies may have to address provider concerns about any associated administrative burden. And if distribution guidelines are agreed upon and implemented, vaccine sellers may have to sacrifice the additional revenue of selling to those willing to pay higher prices regardless of relative need. The importance of collaboration between the public and private sector to develop and implement initiatives to address flu vaccine shortages at the state and local level was highlighted by state public health officials we interviewed. States where public- and private-sector entities collaborated early to deal with the delay in vaccine shipments reported some success in targeting high-risk people for vaccination. For example: Before the fall 2000 vaccination period, health officials in Utah had partnered with Medicare’s local Peer Review Organization (PRO) and a private managed care organization and others to form an Adult Immunization Coalition. This coalition had already identified the number and location of high-risk people living in the state and worked to target vaccine first to these locations. New Mexico health officials participated in a consortium with public and private providers that purchased about 90 percent of vaccine in the state. After nursing home residents were vaccinated, this consortium implemented a three-tiered vaccination strategy. This strategy first targeted the elderly, people with chronic disease and health care workers. Next it targeted household members or close contacts of the first group. Finally, it targeted vaccine to everyone else. CDC officials acknowledge that outreach and educational efforts are needed to change the behavior of both providers and the public to recognize the benefit of flu shots administered after mid-November. For the 2000-01 flu season, CDC undertook several outreach and educational efforts, including issuing guidelines and notices in its Morbidity and Mortality Weekly Report, posting information on a CDC web site, and conducting a media campaign in selected cities. However, the relative effectiveness of these various efforts remains unknown. In addition, CDC has planned various projects to evaluate the impact of the delay of flu vaccine availability on immunization rates and the vaccination practices of providers for the 2000-01 season. For example, CDC is surveying providers about the risk level of the people they vaccinated, providers’ responses to the delays in obtaining vaccine, and the methods they used to target vaccinations. HHS has been working since 1993 to develop a national response plan that would outline actions to be taken to address vaccine delays or shortages during an influenza pandemic. While such a plan is expected to be used only in cases of public health emergencies, advance preparation by manufacturers, distributors, physicians, and public health officials to respond to a pandemic could provide a foundation to deal with some of the problems experienced during the 2000-01 flu season. For example, while some manufacturers and distributors tried various methods to target vaccine first to people who were at high risk for complications, they were often unable to identify these populations. The development of a methodology to identify and target various population groups under the pandemic plan could be a useful tool in this regard. 
In addition, pandemic planning activities could build collaborative relationships among affected parties that could be useful in dealing with vaccine shortages in non- pandemic years. As we reported in October 2000, HHS has not completed a national pandemic response plan that would, among other things, address how to deal with shortages of vaccine. While HHS has set a completion date of June 2001 for the body of the plan, it has not set specific dates for completing the detailed appendixes needed to implement the plan should vaccine be delayed or in short supply. Another ongoing HHS effort that could mitigate the impact of an influenza vaccine shortage is to increase adult immunization rates against pneumococcal disease, which causes a type of pneumonia that frequently follows influenza. The population most at risk for pneumococcal pneumonia includes the elderly and those with chronic illnesses—the same groups at high-risk for complications or death following infection with influenza. Because pneumococcal vaccine provides immunity for at least 5 to 10 years, it can provide some protection against one of the serious complications associated with influenza if the annual influenza vaccine is unavailable. Although pneumococcal vaccine provides added protection against a major influenza-related illness, widespread use among the high-risk population remains relatively low. HHS has set its goal for 2010 to achieve 90 percent immunization against pneumococcal disease among the elderly and 60 percent among other high-risk adults. Available data show that only 54 percent of the elderly and 13 percent of younger high-risk adults have been vaccinated against pneumococcal disease. For the population 65 years and older, HCFA, which administers the Medicare program, has activities directed toward increasing both pneumococcal and influenza vaccination rates. For example, HCFA has contracted with its 53 PROs to work within communities to raise immunization rates. The extent that state immunization rates for pneumococcal vaccine and influenza vaccine improve over time is a factor that HCFA will consider in evaluating PRO performance. CDC also supports efforts to increase adult immunizations, such as influenza and pneumococcal immunizations, for people aged 65 and older and others with medical conditions placing them at high risk for influenza and pneumococcal pneumonia. In 2001, CDC awarded $159 million for Preventive Health Services Immunization grants to support state infrastructures for childhood and adult immunization. However, because CDC considers activities to support childhood immunization a priority for these grants, only 5 of the 64 grantees targeted more than 10 percent of grant funds to support adult immunization efforts. While HCFA and CDC have taken some steps to coordinate many of their adult immunization activities, including efforts to increase pneumococcal immunization, their performance goals may differ. For example, in their fiscal year 2001 performance plans, HCFA set a target of vaccinating 55 percent of those 65 years and older against pneumococcal disease, while CDC set a more ambitious target of 63 percent. The circumstances that led to the delay and early shortage of flu vaccine during the 2000-01 flu season could repeat themselves in the future. Ensuring an adequate and timely supply of vaccine, already a difficult task given the current manufacturing process, has become even more difficult as the number of manufacturers has decreased. 
Now, a production delay or shortfall experienced by even one of the three remaining manufacturers can significantly impact overall vaccine availability. The effects of production delays in 2000-01 were exacerbated by the expectation of providers and the public that flu shots should be received by Thanksgiving or not at all, even though a flu shot after this time would provide a reasonable level of protection in most years. In the event of a future delay or shortage, determining the most effective means of changing this traditional behavior will be beneficial. The purchase, distribution, and administration of flu vaccine are mainly private-sector responsibilities. Consequently, HHS’ actions to help mitigate any adverse effects of vaccine delays or shortages need to rely to a great extent on collaboration with private-sector participants. By completing its own planning efforts for dealing with these issues during a pandemic, as we previously recommended, HHS would provide a foundation for building collaboration among suppliers and purchasers of flu vaccine that could help improve the vaccine distribution process. The March 2001 meeting with public health officials, vaccine manufacturers, distributors, physicians, and others is a potentially useful first step towards developing voluntary guidelines for distribution in the event of a future delay or shortage, but more work is needed before consensus is achieved. Success is contingent on consensus and continued commitment by all parties. In addition, to maximize results federal and state agencies need to fully coordinate their pneumococcal vaccination efforts to set and achieve common goals. While pneumococcal vaccination is not a substitute for the annual flu shot, it can provide protection against a major complication of influenza if the flu vaccine is not available. In the event that future shortages of influenza vaccine cannot be avoided, coordination among HCFA, CDC, and state programs designed to increase pneumococcal immunizations now may contribute to lowering future hospitalization and death rates due to influenza-related pneumonia. We recommend that the Secretary of HHS take the following actions: To prepare for potential delays or shortages in flu vaccine, instruct the Director of CDC to assess the relative success of its past outreach and education efforts and identify those means that are most effective in changing behavior to meet public health priorities. When appropriate, these means should be used as the primary method to educate flu vaccine providers and the general public well before the start of the traditional fall vaccination period. To improve response to future vaccine delays or shortages, instruct the Director of CDC to continue to take a leadership role in organizing and supporting efforts to bring together all stakeholders to formulate voluntary guidelines for vaccine distribution. Specifically, in formulating guidelines for getting vaccine to high-risk individuals first in times of need, work with stakeholders to pursue the feasibility of steps that showed promise in the 2000-01 flu season. To maximize use of federal resources, instruct the Director of CDC to work to complement HCFA’s ongoing activities to improve pneumococcal immunization rates among the Medicare population and focus CDC’s funded efforts on increasing pneumococcal immunization in the high-risk non-Medicare population. We provided a draft of this report to HHS for review. In its written comments (see app. 
II), HHS identified actions that it had initiated or planned to undertake related to two of our recommendations. For example, HHS stated that CDC had efforts underway to assess the relative success of the outreach and educational efforts for the 2000-01 flu season, and that it was working with stakeholders to try to develop contingency plans for vaccine distribution in the event of future supply problems. Regarding our third recommendation, HHS stated that pneumococcal immunization could be part of a broader plan for the government to reduce the overall impact of influenza in case of vaccine supply problems. HHS also commented that our draft report overstated HHS’ authority to exercise greater control over vaccine purchase and distribution in the event of a public health emergency such as an influenza pandemic. We have revised the report language to better reflect our point, which was not about the extent of HHS’ authority to respond to a pandemic, but rather about using pandemic planning activities to better prepare for vaccine shortages in non-pandemic years as well. HHS also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Honorable Tommy G. Thompson, Secretary of HHS; the Honorable Jeffrey P. Koplan, Director of CDC; the Honorable Bernard A. Schwetz, Acting Principal Deputy Commissioner of FDA; Michael McMullan, Acting Deputy Administrator of HCFA; Martin G. Myers, Director of NVPO; and others who are interested. We will also make copies available to others on request. If you or your staffs have any questions, please contact me at (202) 512-7119. An additional GAO contact and the names of other staff who made major contributions to this report are listed in appendix III. For the 2000-01 flu season, the CDC Advisory Committee on Immunization Practices (ACIP) issued guidance in April 2000 that strongly recommended influenza vaccination for those persons who—because of age or underlying medical condition—are at increased risk for complications of influenza. For the first time, the committee lowered the age for universal vaccination from 65 years to 50 years of age, adding an estimated 28 to 31 million persons to the target population. The reason for this expansion was to increase vaccination rates among persons aged 50-64 with high-risk conditions, since age-based strategies have been more successful than strategies based on medical condition. The committee also recommended that health-care workers and other individuals in close contact with persons in high-risk groups should be vaccinated to decrease the risk of transmitting influenza to persons at high risk. Because of expected delays or possible shortages of influenza vaccine for the 2000-01 flu season, the committee issued adjunct recommendations on July 14, 2000. In addition to recommending that mass immunization campaigns be delayed, these adjunct recommendations said that (1) vaccination of high-risk individuals should proceed with available vaccine, (2) provider-specific contingency plans should be developed for possible vaccine shortages, and (3) vaccine administered after mid-November can still provide substantial benefits. Updated recommendations were issued on October 6, 2000, stating that a shortage had been averted but distribution would be delayed. 
These updated recommendations placed highest priority on those persons aged 65 and older, pregnant women and those persons with chronic health conditions that placed them at high risk, and health care workers who care for them. Table 6 shows the target groups for influenza immunization from these updated recommendations. The update also recommended that mass vaccination campaigns should be scheduled later in the season and that these campaigns should try to enhance coverage among those at greatest risk for complications of influenza and their household contacts. However, the recommendations stated that other persons should not be turned away. The updated recommendations also emphasized that special efforts should be made in December and later to vaccinate persons aged 50-64 and that vaccination efforts for all groups should continue into December and later when vaccine was available. Other major contributors to this report were Lacinda Ayers, George Bogart, Ellen M. Smith, Stan Stenersen, and Kim Yamane.
The Navy, with reported assets totaling $321 billion in fiscal year 2004, would be ranked among the largest corporations in the world if it were a private sector entity. According to the Navy, based upon the reported value of its assets, it would be ranked among the 15 largest corporations on the Fortune 500 list. Additionally, in fiscal year 2004 the Navy reported that its inventory was valued at almost $73 billion and that it held property, plant, and equipment with a reported value of almost $156 billion. Furthermore, the Navy reported for fiscal year 2004 that its operations involved total liabilities of $38 billion, that its operations had a net cost of $130 billion, and that it employed approximately 870,000 military and civilian personnel—including reserve components. The primary mission of the Navy is to control and maintain freedom of the seas, performing an assortment of interrelated and interdependent business functions to support its military mission with service members and civilian personnel in geographically dispersed locations throughout the world. To support its military mission and perform its business functions, the Navy requested for fiscal year 2005 almost $3.5 billion for the operation, maintenance, and modernization of its business systems and related infrastructure—the most of all the DOD components—or about 27 percent of the total $13 billion DOD fiscal year 2005 business systems budget request. Of the 4,150 reported DOD business systems, the Navy holds the largest inventory of business systems—with 2,353 reported systems or 57 percent of DOD’s reported inventory of business systems. The Secretary of Defense recognized that the department’s business operations and systems have not effectively worked together to provide reliable information to make the most effective business decisions. He challenged each military service to transform its business operations to support DOD’s warfighting capabilities and initiated the Business Management Modernization Program (BMMP) in July 2001. Further, the Assistant Secretary of the Navy for Financial Management and Comptroller (Navy Comptroller) testified that transforming the Navy’s business processes, while concurrently supporting the Global War on Terrorism, is a formidable but essential task. He stated that the goal of the transformation is to “establish a culture and sound business processes that produce high-quality financial information for decision making.” One of the primary elements of the Navy’s business transformation strategy is the Navy ERP. The need for business processes and systems transformation to provide management with timely information to make important business decisions is clear. However, none of the military services, including the Navy, have passed the scrutiny of an independent financial audit. Obtaining a clean (unqualified) financial audit opinion is a basic prescription for any well-managed organization, as recognized by the President’s Management Agenda. For fiscal year 2004, the DOD Inspector General issued a disclaimer on the Navy’s financial statements—Navy’s General Fund and Working Capital Fund—citing eight and six material weaknesses, respectively, in internal control and noncompliance with the Federal Financial Management Improvement Act of 1996 (FFMIA). The inability to obtain a clean financial audit opinion is the result of weaknesses in the Navy’s financial management and related business processes and systems. 
Most importantly, the Navy’s pervasive weaknesses have (1) resulted in a lack of reliable information to make sound decisions and report on the status of activities, including accountability of assets, through financial and other reports to the Navy and DOD management and the Congress; (2) hindered its operational efficiency; (3) adversely affected mission performance; and (4) left the Navy and DOD vulnerable to fraud, waste, and abuse, as the following examples illustrate. The Navy’s lack of detailed cost information hinders its ability to monitor programs and analyze the cost of its activities. We reported that the Navy lacked the detailed cost and inventory data needed to assess its needs, evaluate spending patterns, and leverage its telecommunications buying power. As a result, at the sites we reviewed, the Navy paid for telecommunications services it no longer required, paid too much for services it used, and paid for potentially fraudulent or abusive long-distance charges. In one instance, we found that DOD paid over $5,000 in charges for one card that was used to place 189 calls in one 24-hour period from 12 different cities to 12 different countries. Ineffective controls over Navy foreign military sales using blanket purchase orders placed classified and controlled spare parts at risk of being shipped to foreign countries that may not be eligible to receive them. For example, we identified instances in which Navy country managers (1) overrode the system to release classified parts under blanket purchase orders without filing required documentation justifying the release; and (2) substituted classified parts for parts ordered under blanket purchase orders, bypassing the control-edit function of the system designed to check a country’s eligibility to receive the parts. The Naval Inventory Control Point and its repair contractors have not followed DOD and Navy procedures intended to provide the accountability for and visibility of inventory shipped to Navy repair contractors. Specifically, Navy repair contractors are not routinely acknowledging receipt of government-furnished material received from the Navy. A DOD procedure requires repair contractors to acknowledge receipt of government-furnished material that has been shipped to them from the Navy’s supply system. However, Naval Inventory Control Point officials are not requiring repair contractors to acknowledge receipt of these materials. By not requiring repair contractors to acknowledge receipt of government-furnished material, the Naval Inventory Control Point has also departed from the procedure to follow up with the contractor within 45 days when the contractor fails to confirm receipt for an item. Without material receipt notification, the Naval Inventory Control Point cannot be assured that its repair contractors have received the shipped material. This failure to acknowledge receipt of material shipped to repair contractors can potentially impair the Navy’s ability to account for shipments leading to possible fraud, waste, or abuse. A limited Naval Audit Service audit revealed that 53 of 118 erroneous payment transactions, valued at more than $990,000, occurred because Navy certifying officials did not ensure accurate information was submitted to the Defense Finance and Accounting Service (DFAS) prior to authorizing payment. In addition, certifying officials submitted invoices to DFAS authorizing payment more than once for the same transaction. 
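The calling-card abuse noted above, in which one card was used for 189 calls in a 24-hour period to 12 different countries, is the kind of pattern a routine automated control could surface. The sketch below is purely illustrative and does not describe any Navy or DFAS system; the record fields and thresholds are assumptions chosen for the example.

from collections import defaultdict
from datetime import timedelta

def flag_suspicious_cards(call_records, max_calls=50, max_countries=5, window_hours=24):
    """Flag calling cards whose activity within a rolling window looks anomalous.

    call_records: iterable of dicts with assumed fields 'card', 'timestamp'
    (a datetime), and 'destination_country'.
    """
    by_card = defaultdict(list)
    for rec in call_records:
        by_card[rec["card"]].append(rec)

    window = timedelta(hours=window_hours)
    flagged = {}
    for card, recs in by_card.items():
        recs.sort(key=lambda r: r["timestamp"])
        for i, start in enumerate(recs):
            in_window = [r for r in recs[i:] if r["timestamp"] - start["timestamp"] <= window]
            countries = {r["destination_country"] for r in in_window}
            if len(in_window) > max_calls or len(countries) > max_countries:
                flagged[card] = {"calls": len(in_window), "countries": len(countries)}
                break
    return flagged

# A card placing 189 calls to 12 countries in a single day would exceed both illustrative thresholds.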
Brief Overview of Navy ERP
To address the need for business operations reform, in fiscal year 1998 the Navy established an executive committee responsible for creating a “Revolution in Business Affairs” to begin looking at transforming business affairs and identifying areas of opportunity for change. This committee, in turn, set up a number of working groups, including one called the Commercial Business Practices (CBP) Working Group, which consisted of representatives from financial management organizations across the Navy. This working group recommended that the Navy should use ERP as a foundation for change and identified various ERP initiatives that were already being developed or under consideration within the Navy. Ultimately, the Navy approved the continuation of four of these initiatives, using funds from existing resources from each of the sponsors (i.e., commands) to test the feasibility of ERP solutions within the Navy. From 1998 to 2003, four different Navy commands began planning, developing, and implementing four separate ERP pilot programs to address specific business areas. A CBP Executive Steering Group was created in December 1998 to monitor the pilot activities. As the pilots progressed in their development and implementation, the Navy identified issues that had to be addressed at a higher level than the individual pilots, such as the integration between the pilots as well as with other DOD systems, and decided that one program would provide a more enterprisewide solution for the Navy. In August 2002, the Assistant Secretary of the Navy for Research, Development, and Acquisition established a Navy-wide ERP program to “converge” the four ongoing pilots into a single program. This Navy-wide program is expected to replace all four pilots by fiscal year 2008 and to be “fully operational” by fiscal year 2011. The Navy estimates that the ERP will manage about 80 percent of the Navy’s estimated appropriated funds—after excluding appropriated funds for the Marine Corps and military personnel and pay. Based on the Navy’s fiscal years 2006 to 2011 defense planning budget, the Navy ERP will manage approximately $74 billion annually. According to a Navy ERP official, while the Navy ERP would account for the total appropriated amount, once transactions occur at the depots, such as when a work order is prepared for the repair of an airplane part, the respective systems at the depots will execute and maintain the detailed transactions. This accounts for about 2 percent, or approximately $1.6 billion, being executed and maintained in detail by the respective systems at the aviation and shipyard depots—not by the Navy ERP. The remaining 20 percent that the ERP will not manage comprises funds for the Navy Installations Command, field support activity, and others. Each of the Navy’s four ERP pilot projects was managed and funded by different major commands within the Navy. The pilots, costing over $1 billion in total, were limited in scope and were not intended to provide corporate solutions to any of the Navy’s long-standing financial and business management problems. The lack of centralized management oversight and control over all four pilots allowed the pilots to be developed independently. This resulted in four more DOD stovepiped systems that could not operate with each other, even though each carried out many of the same functions and were based on the same ERP commercial off-the-shelf (COTS) software. 
Moreover, due to the lack of high-level departmentwide oversight from the start, the pilots were not required to go through the same review process as other acquisition projects of similar magnitude. Four separate Navy organizations began their ERP pilot programs independently of each other, at different times, and with separate funding. All of the pilots implemented the same ERP COTS software, and each pilot was small in scale—relative to the entire Navy. For example, one of the pilots, SMART, was responsible for managing the inventory items and repair work associated with one type of engine, although the organization that implemented SMART—the Naval Supply Systems Command—managed the inventory for several types of engines. As of September 2004, the Navy estimated that the total investment in these four pilots was approximately $1 billion. Table 1 summarizes each of the pilots, the cognizant Navy organization, the business areas they address, and their reported costs through September 2004. Even after the pilots came under the purview of the CBP Executive Steering Group in December 1998, they continued to be funded and controlled by their respective organizations. We have previously reported that allowing systems to be funded and controlled by component organizations has led to the proliferation of DOD’s business systems. These four pilots are prime examples. While there was an attempt made to coordinate the pilots, ultimately each organization designed its ERP pilot to accommodate its specific business needs. The Navy recognized the need for a working group that would focus on integration issues among the pilots, especially because of the desire to eventually extend the pilot programs beyond the pilot organizations to the entire Navy. In this regard, the Navy established the Horizontal Integration Team in June 1999, consisting of representatives from all of the pilots to address this matter. However, one Navy official described this team as more of a “loose confederation” that had limited authority. As a result, significant resources have been invested that have not and will not result in corporate solutions to any of the Navy’s long-standing business and financial management problems. This is evident as noted in the DOD Inspector General’s audit reports of the Navy’s financial statements discussed above. In addition to the lack of centralized funding and control, each of the pilots configured the software differently, which, according to Navy ERP program officials, caused integration and interoperability problems. While each pilot used the same COTS software package, the software offers a high degree of flexibility in how similar business functions can be processed by providing numerous configuration points. According to the Navy, over 2.4 million configuration points exist within the software. The pilots configured the software differently from each other to accommodate differences in the way they wanted to manage their functional area focus. These differences were allowed even though the pilots performed many of the same types of business functions, such as financial management. These configuration differences include the levels of complexity in workflow activities and the establishment of the organizational structure. For example, the primary work order managed by the NEMAIS pilot is an intricate ship repair job, with numerous tasks and workers at many levels. Other pilots had much simpler work order definitions, such as preparing a budget document or procuring a single part for an engine. 
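To make the configuration problem concrete, the sketch below shows how two pilots might have set the same COTS work order function differently. The setting names and values are hypothetical, not actual NEMAIS or SMART configurations; the point is that even a handful of divergent choices among millions of configuration points leaves the resulting systems unable to work together.

def config_conflicts(config_a, config_b):
    """Return every setting that two implementations of the same COTS transaction configured differently."""
    keys = set(config_a) | set(config_b)
    return {k: (config_a.get(k), config_b.get(k)) for k in sorted(keys) if config_a.get(k) != config_b.get(k)}

# Hypothetical settings, for illustration only.
nemais_work_order = {
    "work_order_levels": 5,              # intricate ship repair jobs with nested tasks
    "approval_workflow": "multi_step",
    "organizational_structure": "shipyard",
    "costing_method": "actual",
}
smart_work_order = {
    "work_order_levels": 1,              # a single part procurement for one engine type
    "approval_workflow": "single_step",
    "organizational_structure": "inventory_control_point",
    "costing_method": "standard",
}

print(config_conflicts(nemais_work_order, smart_work_order))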
Because of the various inconsistencies in the design and implementation, the pilots were stovepiped and could not operate with each other, even though they performed many of the same business functions. Table 2 illustrates the similar business functions that are performed by more than one pilot. By definition, an ERP solution should integrate the financial and business operations of an organization. However, the lack of a coordinated effort among the pilots led to a duplication of efforts and problems in implementing many business functions and resulted in ERP solutions that carry out redundant functions in different ways from one another. The end result of all of the differences was a “system” that could not successfully process transactions associated with the normal Navy practices of moving ships and aircraft between fleets. Another configuration problem occurred because the pilots generally developed custom roles for systems users. Problems arose after the systems began operating. Some roles did not have the correct transactions assigned to enable the users with that role to do their entire job correctly. Further, other roles violated the segregation-of-duties principle due to the inappropriateness of roles assigned to individual users. The pilots experienced other difficulties with respect to controlling the scope and performance schedules due to the lack of disciplined processes, such as requirements management. For example, the pilots did not identify in a disciplined manner the amount of work necessary to achieve the originally specified capabilities—even as the end of testing approached. There were repeated contract cost-growth adjustments, delays in delivery of many planned capabilities, and initial periods of system instability after the systems began operating. All of these problems have been shown as typical of the adverse effects normally associated with projects that have not effectively implemented disciplined processes. The Navy circumvented departmentwide policy by not designating the pilots as major automated information systems acquisition programs. DOD policy in effect at the time stipulated that a system acquisition should be designated as a major program if the estimated cost of the system exceeds $32 million in a single year, $126 million in total program costs, or $378 million in total life-cycle costs, or if deemed of special interest by the DOD Chief Information Officer (CIO). According to the Naval Audit Service, all four of the pilots should have been designated as major programs based on their costs—which were estimated to be about $2.5 billion at the time—and their significance to Navy’s operations. More specifically, at the time of its review, SMART’s total estimated costs for development, implementation, and sustainment were over $1.3 billion—far exceeding the $378 million life-cycle cost threshold. However, because Navy management considered each of its ERP programs to be “pilots,” it did not designate the efforts as major automated information systems acquisitions, thereby limiting departmental oversight. Consistent with the Clinger-Cohen Act of 1996, DOD acquisition guidance requires that certain documentation be prepared at each milestone within the system life cycle. This documentation is intended to provide relevant information for management oversight and for making decisions as to whether the investment of resources is cost beneficial. 
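The designation thresholds described above lend themselves to a simple check. The sketch below encodes the dollar thresholds stated in the then-current DOD policy ($32 million in a single year, $126 million in total program costs, $378 million in total life-cycle costs, or CIO special interest); it is an illustration, not an actual DOD tool.

def is_major_program(single_year_cost, total_program_cost, life_cycle_cost, cio_special_interest=False):
    """Apply the major automated information system designation thresholds (amounts in millions of dollars)."""
    return (single_year_cost > 32
            or total_program_cost > 126
            or life_cycle_cost > 378
            or cio_special_interest)

# SMART's estimated life-cycle cost of over $1.3 billion alone exceeds the $378 million threshold.
print(is_major_program(single_year_cost=0, total_program_cost=0, life_cycle_cost=1300))  # True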
The Naval Audit Service reported that a key missing document that should have been prepared for each of the pilots was a mission needs statement. A mission needs statement was required early on in the acquisition process to describe the projected mission needs of the user in the context of the business need to be met. The mission needs statement should also address interoperability needs. As noted by the Naval Audit Service, the result of not designating the four ERP pilots as major programs was that program managers did not prepare and obtain approval of this required document before proceeding into the next acquisition phase. In addition, the pilots did not undergo mandatory integrated reviews that assess where to spend limited resources departmentwide. The DOD CIO is responsible for overseeing major automated information systems and a program executive office is required to be dedicated to executive management and not have other command responsibilities. However, because the pilots were not designated major programs, the oversight was at the organizational level that funded the pilots (i.e., command level). Navy ERP officials stated that at the beginning of the pilots, investment authority was dispersed throughout the Navy and there was no established overall requirement within the Navy to address systems from a centralized Navy enterprise level. The Navy ERP is now designated a major program under the oversight of the DOD CIO. The problems identified in the failed implementation of the four pilots are indicative of a system program that did not adhere to the disciplined processes. The successful development and implementation of systems is dependent on an organization’s ability to effectively implement best practices, commonly referred to as disciplined processes, which are essential to reduce the risks associated with these projects to acceptable levels. However, the inability to effectively implement the disciplined processes necessary to reduce risks to acceptable levels does not mean that an entity cannot put in place a viable system that is capable of meeting its needs. Nevertheless, history shows that the failure to effectively implement disciplined processes and the necessary metrics to understand the effectiveness of processes implemented increases the risk that a given system will not meet its cost, schedule, and performance objectives. In past reports we have highlighted the impact of not effectively implementing the disciplined processes. These results are consistent with those experienced by the private sector. More specifically: In April 2003, we reported that NASA had not implemented an effective requirements management process and that these requirement management problems adversely affected its testing activities. We also noted that because of the testing inadequacies, significant defects later surfaced in the production system. In May 2004, we reported that NASA’s new financial management system, which was fully deployed in June 2003 as called for in the project schedule, still did not address many of the agency’s most challenging external reporting issues, such as external reporting problems related to property accounting and budgetary accounting. The system continues to be unable to produce reliable financial statements. In May 2004, we reported that the Army’s initial deployments for its Logistics Modernization Program (LMP) did not operate as intended and experienced significant operational difficulties. 
In large part, these operational problems were due to the Army not effectively implementing the disciplined processes that are necessary to manage the development and implementation of the systems in the areas of requirements management and testing. The Army program officials have acknowledged that the problems experienced in the initial deployment of LMP could be attributed to requirements and testing. Subsequently, in June 2005, we reported that the Army still had not put into place effective management control and processes to help ensure that the problems that have been identified since LMP became operational in July 2003 are resolved in an efficient and effective manner. The Army’s inability to effectively implement the disciplined processes provides it with little assurance that (1) system problems experienced during the initial deployment that caused the delay of future deployments have been corrected and (2) LMP is capable of providing the promised system functionality. The failure to resolve these problems will continue to impede operations at Tobyhanna Army Depot, and future deployment locations can expect to experience similar significant disruptions in their operations, as well as having a system that is unable to produce reliable and accurate financial and logistics data. We reported in February 2005 that DOD had not effectively managed important aspects of the requirements for the Defense Integrated Military Human Resources System, which is to be an integrated personnel and pay system standardized across all military components. For example, DOD had not obtained user acceptance of the detailed requirements nor had it ensured that the detailed requirements were complete and understandable. Based on GAO’s review of a random sample of the requirements documentation, about 77 percent of the detailed requirements were difficult to understand. The problems experienced by DOD and other agencies are illustrative of the types of problems that can result when disciplined processes are not properly implemented. The four Navy pilots provide yet another example. As discussed previously, because the pilots were four stovepiped efforts, lacking centralized management and oversight, the Navy had to start over when it decided to proceed with the current ERP effort after investing about $1 billion. Figure 1 shows how organizations that do not effectively implement disciplined processes lose the productive benefits of their efforts as a project continues through its development and implementation cycle. Although undisciplined projects show a great deal of productive work at the beginning of the project, the rework associated with defects begins to consume more and more resources. In response, processes are adopted in the hopes of managing what later turns out to be, in reality, unproductive work. However, generally these processes are “too little, too late,” and rework begins to consume more and more resources because the adequate foundations for building the systems were not done or not done adequately. In essence, experience shows that projects that fail to implement disciplined processes at the beginning are forced to implement them later, when it takes more time and they are less effective. As can be seen in figure 1, a major consumer of project resources in undisciplined efforts is rework (also known as thrashing). Rework occurs when the original work has defects or is no longer needed because of changes in project direction. 
Disciplined organizations focus their efforts on reducing the amount of rework because it is expensive. Studies have shown that fixing a defect during testing is anywhere from 10 to 100 times more expensive than fixing it during the design or requirements phase. To date, Navy ERP management has followed a comprehensive and disciplined requirements management process, as well as leveraged lessons learned from the implementation of the four ERP pilot programs to avoid repeating past mistakes. Assuming that the project continues to effectively implement the processes it has adopted, the planned functionality of the Navy ERP has the potential to address at least some of the weaknesses identified in the Navy’s financial improvement plan. However, the project faces numerous challenges and risks. Since the program is still in a relatively early phase—it will not be fully operational until fiscal year 2011, at a currently estimated cost of $800 million—the project team must be continually vigilant and held accountable for ensuring that the disciplined processes are followed in all phases to help achieve overall success. For example, the project management office will need to ensure that it effectively oversees the challenges and risks associated with developing interfaces with 44 Navy and DOD systems and data conversion—areas that were troublesome in other DOD efforts we have audited. Considering that the project is in a relatively early phase and given DOD’s history of not implementing systems on time and within budget, the projected schedule and cost estimates are subject to, and very likely will, change. Furthermore, a far broader challenge, which lies outside the immediate control of the Navy ERP program office, is that the ERP is proceeding without DOD having clearly defined its business enterprise architecture (BEA). As we have recently reported, DOD’s BEA still lacks many of the key elements of a well-defined architecture. The real value of a BEA is that it provides the necessary content for guiding and constraining system investments in a way that promotes interoperability and minimizes overlap and duplication. Without it, rework will likely be needed to achieve those outcomes. Although the four pilot projects were under the control of different entities and had different functional focuses, a pattern of issues emerged that the Navy recognized as being critical for effective development of future projects. The Navy determined that the pilots would not meet its overall requirements and concluded that the best alternative was to develop a new ERP system—under the leadership of a central program office—and use efforts from the pilots as starting points by performing a review of their functionality and lessons learned, eliminating redundancies, and developing new functionalities that were not addressed by the pilots. The lessons learned from the pilots cover technical, organizational, and managerial issues and reinforce the Navy’s belief that it must effectively implement the processes that are necessary to effectively oversee and manage the ERP efforts. Navy ERP project management recognizes that the failure to do so would, in all likelihood, result in this ERP effort experiencing the same problems as those resulting in the failure of the four earlier pilots. One of the most important lessons learned from the earlier experiences by the Navy ERP project management is the need for following disciplined processes to identify and manage requirements. 
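A rough calculation shows why that 10-to-100-times multiplier dominates project economics. The figures below are assumptions used only to illustrate the arithmetic; the report does not give per-defect costs.

def rework_cost_range(defect_count, cost_to_fix_early, multiplier_low=10, multiplier_high=100):
    """Estimate the cost range of fixing defects in testing rather than at the requirements or design stage."""
    early_cost = defect_count * cost_to_fix_early
    return early_cost, early_cost * multiplier_low, early_cost * multiplier_high

# Assume 200 defects that would each cost $500 to correct during requirements definition.
early, late_low, late_high = rework_cost_range(200, 500)
print(f"Fixed early: ${early:,}; fixed during testing: ${late_low:,} to ${late_high:,}")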
As discussed later in this report, the ERP program is following best practices in managing the system’s requirements. A key part of requirements identification is to have system users involved in the process to ensure that the system will meet their needs. Additionally, the inclusion of system users in the detailed requirement development process creates a sense of ownership in the system, and prepares system users for upcoming changes to the way they conduct their business. Moreover, the experience from the pilots demonstrated that the working-level reviews must be cross functional. For example, the end-to-end process walkthroughs, discussed later, reinforce the overall business effect of a transaction throughout the enterprise, and help to avoid a stovepiped view of an entity’s operations. Another lesson learned is the need to adopt business processes to conform with the types of business practices on which the standard COTS packages are based, along with the associated transaction formats. Just the opposite approach was pursued for the pilots, during which the Navy customized many portions of the COTS software to match the existing business process environment. However, the current Navy ERP management is restraining customization to the core COTS software to allow modifications only where legal or regulatory demands require. Obviously, minimizing the amount of customization reduces the complexity and costs of development. Perhaps more importantly, holding customization to a minimum helps an entity take advantage of two valuable benefits of COTS software. First, COTS software provides a mature, industry-proven “best practices” approach to doing business. The core elements of work-flow management, logistics, financial management, and other components have been optimized for efficiency and standardization in private industry over many years. According to program officials, the Navy ERP will adhere to the fundamental concepts of using a COTS package and thus take advantage of this efficiency benefit by modifying their business practices to match the COTS software rather than vice versa as was done in the four pilots. Having the software dictate processes is a difficult transition for users to accept, and Navy ERP officials recognize the challenge in obtaining buy-in from system users. To meet this challenge, they are getting users involved early in requirements definition, planning for extensive training, and ensuring that senior level leadership emphasize the importance of process change, so the entire chain of command understands and accepts its role in the new environment. In effect, the Navy is taking the adopted COTS process and then presenting it to the users. As a result, the Navy is attempting to limit the amount of customization of the software package. One important consideration in doing this is that if the standard COTS components are adopted, the maintenance burden of upgrades remains with the COTS vendor. Finally, the Navy learned from the pilots that it needed to manage its system integrators better. The ERP officials also found that they could significantly reduce their risk by using the implementation methodology of the COTS vendor rather than the specific approach of a system integrator. Each of the pilots had separate system integrators with their own particular methodology for implementing the COTS software. 
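The customization policy described above, allowing changes to the core COTS software only where legal or regulatory demands require, can be thought of as a simple gate on change requests. The sketch below is a hypothetical illustration of that control; the driver categories and fields are assumptions, not the Navy ERP program's actual review procedure.

ALLOWED_DRIVERS = {"statute", "federal_regulation", "dod_policy"}  # assumed categories

def review_customization_request(request):
    """Approve a change to the delivered COTS code only when a legal or regulatory driver is cited."""
    if request.get("driver") in ALLOWED_DRIVERS and request.get("citation"):
        return "approved"
    return "rejected: adapt the business process to the delivered COTS functionality instead"

print(review_customization_request({"driver": "user_preference", "citation": None}))
print(review_customization_request({"driver": "statute", "citation": "Prompt Payment Act"}))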
According to Navy ERP officials, using the implementation methodology and tool set of the COTS vendor maintains a closer link to the underlying software, and provides more robust requirements management by easily linking requirements from the highest level down to the COTS transaction level. Navy ERP is focused on staying as close as possible to the delivered COTS package, both in its avoidance of customization and its use of tools provided by the COTS vendor. In contrast, with the pilots, the Navy allowed the system integrators more latitude in the development process, relying on their expertise and experience with other ERP efforts to guide the projects. Navy ERP management realized they needed to maintain much better control over the integrators’ work. As a result, the Navy established the Strategy, Architecture, and Standards Group to structure and guide the effort across the Navy. Our review found that the ERP development team has so far followed an effective process for managing its requirements development. Documentation was readily available for us to trace selected requirements from the highest level to the lowest, detailed transaction level. This traceability allows the user to follow the life of the requirement both forward and backward through the documentation, and from origin through implementation. Traceability is also critical to understanding the parentage, interconnections, and dependencies among the individual requirements. This information in turn is critical to understanding the impact when a requirement is changed or deleted. Requirements represent the blueprint that system developers and program managers use to design, develop, test, and implement a system. Improperly defined or incomplete requirements have been commonly identified as a cause of system failure and systems that do not meet their cost, schedule, or performance goals. Without adequately defined requirements that have been properly reviewed and tested, significant risk exists that the system will need extensive and costly changes before it will achieve its intended capability. Because requirements provide the foundation for system testing, specificity and traceability defects in system requirements preclude an entity from implementing a disciplined testing process. That is, requirements must be complete, clear, and well documented to design and implement an effective testing program. Absent this, an organization is taking a significant risk that its testing efforts will not detect significant defects until after the system is placed into production. Industry experience indicates that the sooner a defect is recognized and corrected, the cheaper it is to fix. As shown in figure 2, there is a direct relationship between requirements and testing. Although the actual testing activities occur late in the development cycle, test planning can help disciplined activities reduce requirements-related defects. For example, developing conceptual test cases based on the requirements derived from the concept of operations and functional requirements stages can identify errors, omissions, and ambiguities long before any code is written or a system is configured. Disciplined organizations also recognize that planning testing activities in coordination with the requirements development process has major benefits. As we have previously reported, failure to effectively manage requirements and testing activities has posed operational problems for other system development efforts. 
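One low-cost way to act on that insight, drafting conceptual test cases while requirements are still being written, is sketched below. The ambiguous-term list and identifiers are assumptions for illustration; they are not drawn from the Navy ERP program's actual test planning artifacts.

AMBIGUOUS_TERMS = ("timely", "adequate", "user-friendly", "as appropriate", "etc.")  # assumed list

def draft_conceptual_test(requirement_id, requirement_text):
    """Create a test case skeleton traced to a requirement and flag wording that cannot be verified objectively."""
    text = requirement_text.lower()
    return {
        "test_id": f"TC-{requirement_id}",
        "traces_to": requirement_id,
        "objective": f"Verify: {requirement_text}",
        "ambiguities_to_resolve": [term for term in AMBIGUOUS_TERMS if term in text],
    }

# Hypothetical requirement identifier; the funds-reporting requirement itself is discussed later in this report.
print(draft_conceptual_test("ORD-101", "Provide timely reports of funds expended versus funds allocated."))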
The Navy ERP requirements identification process began with formal agreement among the major stakeholders on the scope of the system, followed by detailed, working-level business needs from user groups and legacy systems. The high-level business or functional requirements identified initially are documented in the Operational Requirements Document (ORD). The ORD incorporates requirements from numerous major DOD framework documents and defines the capabilities that the system must support, including business operation needs such as acquisition, finance, and logistics. In addition, the ORD also identifies the numerous policy directives to which the Navy ERP must conform, such as numerous DOD infrastructure systems, initiatives, and policies. The ORD was distributed to over 150 Navy and DOD reviewers. It went through seven major revisions to incorporate the comments and suggestions provided by the reviewers before being finalized in April 2004. According to Navy ERP program officials, any requested role for the Navy ERP to perform that was not included in the ORD will not be supported. This is a critical decision that reduces the project’s risks since “requirements creep” is another cause of projects that do not meet their cost, schedule, and performance objectives. We selected seven requirements from the ORD that related to specific Navy problem areas, such as financial reporting and asset management, and found that the requirements had the expected attributes, including the necessary detail one would normally expect to find for the requirement being reviewed. For example, a requirement stated that the ERP will provide reports of funds expended versus funds allocated. We found this requirement was described in a low-level requirement document called a Customer Input Template, which included a series of questions that must be addressed. The documentation further detailed the standard reports that were available based on the selection of configuration options. Further, the documentation of the detailed requirements identified the specific COTS screen number that would be used and described the screen settings that would be used when a screen was “activated.” While the ORD specifies the overall capabilities of the system at a high level, more specific, working-level requirements also had to be developed to achieve a usable blueprint for configuration and testing of the system. To develop these lower-level requirements, the Navy ERP project held detailed working sessions where requirements and design specifications were discussed, refined, formalized, and documented. Each high-level requirement was broken down into its corresponding business processes, which in turn drove the selection of transactions (COTS functions) to be used for configuration of the software. For each selected transaction, comprehensive documentation was created to capture the source information that defines how and why a transaction must be configured. This documentation is critical for ensuring accurate configuration of the software, as well as for testing the functionality of the software after configuration. Table 3 describes the kinds of documentation used to maintain these lower-level detailed requirements. Additionally, the Navy ERP program is using a requirements management tool containing a database that links each requirement from the highest to the lowest level and maintains the relationship between the requirements. 
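A minimal sketch of the forward and backward linkage such a tool maintains is shown below. The data structure, identifiers, and screen number are assumptions for illustration, not the program's actual requirements management tool.

from collections import defaultdict

class TraceabilityMatrix:
    """Toy forward/backward links from high-level requirements down to COTS transactions and test cases."""

    def __init__(self):
        self.children = defaultdict(set)   # item -> lower-level items it decomposes into
        self.parents = defaultdict(set)    # reverse links for backward tracing

    def link(self, parent, child):
        self.children[parent].add(child)
        self.parents[child].add(parent)

    def impact_of_change(self, item):
        """Return everything downstream of a changed item, the basis of a change-impact query."""
        affected, stack = set(), [item]
        while stack:
            for child in self.children[stack.pop()]:
                if child not in affected:
                    affected.add(child)
                    stack.append(child)
        return affected

matrix = TraceabilityMatrix()
matrix.link("ORD funds status requirement", "Customer Input Template 042")   # hypothetical identifiers
matrix.link("Customer Input Template 042", "COTS screen 1180")
matrix.link("Customer Input Template 042", "Conceptual test case TC-101")
print(matrix.impact_of_change("ORD funds status requirement"))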
This tool helps to automate the linkage between requirements and helps provide the project staff reasonable assurance that its stated processes have been effectively implemented. This linkage is critical to understanding the scope of any potential change. For example, the users can utilize the tool to (1) determine the number of transactions affected by a proposed change and (2) identify the detailed documentation necessary for understanding how this change will affect each business process. To further ensure that the individual transactions ultimately support the adopted business process, Navy ERP officials conducted master business scenarios or end-to-end process walkthroughs. This end-to-end view of the business process ensures that the business functionality works across the various subsystems of the COTS package. For instance, the requirements for a purchase order could be viewed simply from the vantage point of a logistics person or the acquisition community. However, a purchase order also has financial ramifications, and therefore must be posted to financial records, such as the general ledger. The master business scenarios provide a holistic review of the business process surrounding each transaction. The Navy expects the new ERP project to address a number of the weaknesses cited in the Department of the Navy Financial Improvement Plan—a course of action directed towards achieving better financial management and an unqualified audit opinion for the Department of the Navy annual financial statements. According to ERP officials, the COTS software used for the ERP program will improve the Navy’s current financial controls in the areas of asset visibility, financial reporting, and full cost accounting. However, the currently planned ERP is not intended to provide an all-inclusive end-to-end corporate solution for the Navy. The COTS software offers the potential for real-time asset visibility for the Navy, limited by two factors beyond the project’s scope. First, items in transit fall under the authority of the U.S. Transportation Command (TRANSCOM). Once the Navy hands off an item to TRANSCOM, it does not retain visibility of that asset until it arrives at another Navy location. The second factor is the limited ability for communication with ships at sea. Once the currently planned ERP is fully implemented, it will cover all inventories, including inventory on ships. However, the data for shipboard inventory will be current only as of when the ship leaves port. Those data will typically not be updated until the ship docks in another port and can transmit updated information to the ERP system. This lag time for some ships could be as much as 3 to 4 months. While the ERP has the capability to maintain real-time shipboard inventory, the Navy has yet to decide whether to expand the scope of the ERP and build an interface with the ships, which could be extensive and costly, or install the ERP on the ships. Both options present additional challenges that necessitate thorough analysis of all alternatives before a decision is made. According to the program office, a time frame for making this critical decision has not been established. The COTS software is also intended to provide standardized government and proprietary financial reporting at any level within the defined organization. According to Navy ERP officials, full cost accounting will be facilitated by a software component integrated with the ERP. 
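The reporting lag for ships at sea could be surfaced with a simple data-freshness check like the one sketched below. The ship names, dates, and 30-day threshold are assumptions for illustration only.

from datetime import date

def stale_inventory_snapshots(last_updates, as_of, max_age_days=30):
    """List ships whose most recent transmitted inventory snapshot is older than the chosen threshold."""
    return [
        (ship, (as_of - updated).days)
        for ship, updated in last_updates.items()
        if (as_of - updated).days > max_age_days
    ]

# A deployed ship might not transmit an update for 3 to 4 months after leaving port.
snapshots = {"Ship A": date(2005, 1, 15), "Ship B": date(2005, 4, 20)}
print(stale_inventory_snapshots(snapshots, as_of=date(2005, 5, 1)))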
For example, the Navy expects that this component will provide up-to-date cost information—including labor, materials, and overhead—for its numerous, and often complicated, maintenance jobs. Full cost information is necessary for effective management of production, maintenance, and other activities. According to Navy ERP program officials, when fully operational in fiscal year 2011, the Navy ERP will be used by organizations comprising approximately 80 percent of Navy’s estimated appropriated funds—after excluding the Marine Corps and military pay and personnel. Based on the fiscal year 2006 through 2011 defense planning budget, the Navy ERP will manage approximately $74 billion annually. The organizations that will use Navy ERP include the Naval Air Systems, the Naval Sea Systems, the Naval Supply Systems, the Space and Naval Warfare Systems, and the Navy Facilities Engineering Commands, as well as the Office of Naval Research, the Atlantic and Pacific Fleets, and the Strategic Systems Programs. However, the Navy ERP will not manage in detail all of the 80 percent. About 2 percent, or approximately $1.6 billion, will be executed and maintained in detail by respective financial management systems at the aviation and shipyard depots. For example, when a work order for a repair of an airplane part is prepared, the respective financial management system at the depot will execute and maintain the detailed transactions. The remaining 20 percent that the Navy ERP will not manage comprises the Navy Installations Command, field support activities, and others. Navy ERP officials have indicated that it is the Navy’s intent to further expand the system in the future to include the aviation and shipyard depots, but definite plans have not yet been made. According to Navy ERP officials, the software has the capability to be used at the aviation and shipyard depots, but additional work would be necessary. For example, the desired functionality and related requirements—which, as discussed above, are critical to the success of any project—would have to be defined for the aviation and shipyard depots. While the Navy’s requirements management process is following disciplined processes and comprises one critical aspect of the overall project development and implementation, by itself, it is not sufficient to provide reasonable assurance of the ERP’s success. Going forward, the Navy faces very difficult challenges and risks in the areas of developing and implementing 44 system interfaces with other Navy and DOD systems, and accurately converting data from the existing legacy systems to the ERP. As previously noted, financial management is a high-risk area in the department and has been designated as such since 1995. One of the contributing factors has been DOD’s inability to develop integrated systems. As a result, the Navy is dependent upon the numerous interfaces to help improve the accuracy of its financial management data. Navy ERP program managers have recognized the issues of system interfaces and data conversion in their current list of key risks. They have identified some actions that need to be taken to mitigate the risks; however, they have not yet developed the memorandums of agreement with the owners of the systems with which the Navy ERP will interface. According to the Navy ERP program office, it plans to complete these memorandums of agreement by October 2005. One of the long-standing problems within DOD has been the lack of integrated systems. 
This is evident in the many duplicative, stovepiped business systems among the 4,150 that DOD reported as belonging to its systems environment. Lacking integrated systems, DOD has a difficult time obtaining accurate and reliable information on the results of its business operations and continues to rely on either manual reentry of data into multiple systems, convoluted system interfaces, or both. These system interfaces provide data that are critical to day-to-day operations, such as obligations, disbursements, purchase orders, requisitions, and other procurement activities. Testing the system interfaces in an end-to-end manner is necessary in order for the Navy to have reasonable assurance that the ERP will be capable of providing the intended functionality. The testing process begins with the initial requirements development process. Furthermore, test planning can help disciplined activities reduce requirements-related defects. For example, developing conceptual test cases based on the requirements can identify errors, omissions, and ambiguities long before any code is written or a system is configured. The challenge now before Navy ERP is to be sure its testing scenarios accurately reflect the activities of the “real users,” and the dependencies of external systems. We previously reported that Sears and Wal-Mart, recognized as leading-edge inventory management companies, have automated systems that electronically receive and exchange standard data throughout the entire inventory management process, thereby reducing the need for manual data entry. As a result, information moves through the data systems with automated ordering of inventory from suppliers; receiving and shipping at distribution centers; and receiving, selling, and reordering at retail stores. Unlike DOD, which has a proliferation of nonintegrated systems using nonstandard data, Sears and Wal-Mart require all components and subsidiaries to operate within a standard systems framework that results in an integrated system and does not allow individual systems development. For the first deployment, the Navy has to develop interfaces that permit the ERP to communicate with 44 systems—27 that are Navy specific and 17 systems belonging to other DOD entities. Figure 3 illustrates the numerous required system interfaces. Long-standing problems regarding the lack of integrated systems and use of nonstandard data within DOD pose significant risks for the Navy ERP to successfully interface with these systems. Even if integration is successful, if the information within the 44 systems is not accurate and reliable, the overall information on Navy’s operation provided by the ERP to Navy management and the Congress will not be useful in the decision-making process. While the Navy ERP project office is working to develop agreements with system owners for the interfaces and has been developing the functional specifications for each system, officials acknowledged that, as of May 2005, they are behind schedule in completing the interface agreements due to other tasks. The Navy ERP is dependent on the system owners to achieve their time frames for implementation. For example, the Defense Travel System (DTS) is one of the DOD systems with which the Navy ERP is to interface and exchange data. DTS is currently being implemented, and any problems that result in a DTS schedule slippage will, in turn, affect Navy ERP’s interface testing. 
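Tracking which of the 44 planned interfaces still lack a signed agreement or a completed end-to-end test is itself straightforward; the harder work is the testing. The sketch below is a hypothetical status tracker, not the program office's actual tracking mechanism, and the status records are invented.

def interface_readiness(interfaces):
    """Summarize which planned interfaces still lack a signed agreement or an end-to-end test."""
    missing_agreement = [i["name"] for i in interfaces if not i["agreement_signed"]]
    untested = [i["name"] for i in interfaces if i["agreement_signed"] and not i["end_to_end_tested"]]
    return {"total": len(interfaces), "missing_agreement": missing_agreement, "untested": untested}

# Invented status records; the first deployment requires 44 interfaces (27 Navy, 17 other DOD).
status = [
    {"name": "Defense Travel System", "owner": "DOD", "agreement_signed": False, "end_to_end_tested": False},
    {"name": "Legacy supply system",  "owner": "Navy", "agreement_signed": True,  "end_to_end_tested": False},
]
print(interface_readiness(status))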
We have previously reported that the lack of system interface testing has seriously impaired the operation of other system implementation efforts. For example, in May 2004, we reported that because the system interfaces for the Defense Logistics Agency’s Business Systems Modernization (BSM) program and the Army’s LMP were not properly tested prior to deployment, severe operational problems were experienced. Such problems have led BSM, LMP, and organizations with which they interface—such as DFAS—to perform costly manual reentry of transactions, which can cause additional data integrity problems. For example: BSM’s functional capabilities were adversely affected because a significant number of interfaces were still in development or were being executed manually once the system became operational. Since the design of system interfaces had not been fully developed and tested, BSM experienced problems with receipts being rejected, customer orders being canceled, and vendors not being paid in a timely manner. At one point, DFAS suspended all vendor payments for about 2 months, thereby increasing the risk of late payments to contractors and violations of the Prompt Payment Act. In January 2004, the Army reported that due to an interface failure, LMP had been unable to communicate with the Work Ordering and Reporting Communications System (WORCS) since September 2003. WORCS is the means by which LMP communicates with customers on the status of items that have been sent to the depot for repair and initiates procurement actions for inventory items. The Army has acknowledged that the failure of WORCS has resulted in duplicative shipments and billings and inventory items being delivered to the wrong locations. Additionally, the LMP program office has stated that it has not yet identified the specific cause of the interface failure. The Army is currently entering the information manually, which, as noted above, can cause additional data integrity errors. Besides the challenge of developing the 44 interfaces, the Navy ERP must also develop the means to be compliant with DOD’s efforts to standardize the way that various systems exchange data with each other. As discussed in our July 2004 report, DOD is undertaking a huge and complex task (commonly referred to as the Global Information Grid or GIG) that is intended to integrate virtually all of DOD’s information systems, services, applications, and data into one seamless, reliable, and secure network. The GIG initiative is focused on promoting interoperability throughout DOD by building an Internet-like network for DOD-related operations based on common standards and protocols rather than on trying to establish interoperability after individual systems become operational. DOD envisions that this type of network would help ensure systems can easily and quickly exchange data and change how military operations are planned and executed since much more information would be dynamically available to users. DOD’s plans for realizing the GIG involve building a new core network and information capability and successfully integrating the majority of its weapon systems; command, control, and communications systems; and business systems with the new network. The effort to build the GIG will require DOD to make a substantial investment in a new set of core enterprise programs and initiatives. 
To integrate systems such as the Navy ERP into the GIG, DOD has developed (1) an initial blueprint or architecture for the GIG and (2) new policies, guidance, and standards to guide implementation. According to project officials, the Navy ERP system will be designed to support the GIG. However, they face challenges that can result in significant cost and schedule risks depending on the decisions reached. One challenge is the extent to which other DOD applications with which the Navy ERP must exchange data are compliant with the GIG. While traditional interfaces with systems that are not GIG compliant can be developed, these interfaces may suboptimize the benefits expected from the Navy ERP. The following is one example of the difficulties faced by the Navy ERP project. As mentioned previously, one system that will need to exchange data with the Navy ERP system is DTS. However, the DTS program office and the Navy ERP project office hold different views of how data should be exchanged between the two systems. The travel authorization process exemplifies these differences. DTS requires that funding information and the associated funds be provided to DTS in advance of a travel authorization being processed. In effect, DTS requires that the financial management systems set aside the funds necessary for DTS operations. Once a travel authorization is approved, DTS notifies the appropriate financial management system that an obligation has been incurred. The Navy ERP system, on the other hand, only envisions providing basic funding information to DTS in advance, and would delay providing the actual funds to DTS until they are needed in order to (1) maintain adequate funds control, (2) ensure that the funds under its control are not tied up by other systems, and (3) ensure that the proper accounting data are provided when an entry is made into its system. According to the Software Engineering Institute (SEI), a widely recognized model evaluating a system of systems interoperability is the Levels of Information System Interoperability. This model focuses on the increasing levels of sophistication of system interoperability. According to Navy ERP officials, the GIG and the ERP effort are expected to accomplish the highest level of this model—enterprise-based interoperability. In essence, systems that achieve this level of interoperability can provide multiple users access to complex data simultaneously, data and applications are fully shared and distributed, and data have a common interpretation regardless of format. This is in contrast to traditional interface strategies, such as the one used by DTS. The traditional approach is more aligned with the lowest level of the SEI model. Data exchanged at this level rely on electronic links that result in a simple electronic exchange of data. A broader challenge and risk that is out of the Navy ERP project’s control, but could significantly affect it, is DOD’s development of a BEA. As we recently reported, DOD’s BEA still lacks many of the key elements of a well-defined architecture and no basis exists for evaluating whether the Navy ERP will be aligned with the BEA and whether it would be a corporate solution for DOD in its “To Be” or target environment. An enterprise architecture consists of snapshots of the enterprise’s current environment and its target environment, as well as a capital investment road map for transitioning from the current to the target environment. 
The real value of an enterprise architecture is that it provides the necessary content for guiding and constraining system investments in a way that promotes interoperability and minimizes overlap and duplication. At this time, it is unknown what the target environment will be. Therefore, it is unknown what business processes, data standards, and technological standards the Navy ERP must align to, as well as what legacy systems will be transitioned into the target environment. The Navy ERP project team is cognizant of the BEA development and has attempted to align to prior versions of it. The project team analyzed the BEA requirements and architectural elements to assess Navy ERP’s compliance. The project team mapped the BEA requirements to the Navy ERP functional areas and the BEA operational activities to the Navy ERP’s business processes. The Navy ERP project team recognizes that architectures evolve over time, and analysis and assessments will continue as requirements are further developed and refined. The scope of the BEA and the development approach are being revised. As a result of the new focus, DOD is determining which products from prior releases of the BEA could be salvaged and used. Since the Navy ERP is being developed absent the benefit of an enterprise architecture, there is limited, if any, assurance that the Navy ERP will be compliant with the architecture once it becomes more robust in the future. Given this scenario, it is conceivable that the Navy ERP will be faced with rework in order to be compliant with the architecture, once it is defined, and as noted earlier, rework is expensive. At the extreme, the project could fail as the four pilots did. If rework is needed, the overall cost of the Navy ERP could exceed the Navy’s current estimate of $800 million. The ability of the Navy to effectively address its data conversion challenges will also be critical to the ultimate success of the ERP effort. A Joint Financial Management Improvement Program (JFMIP) white paper on financial system data conversion noted that data conversion (that is, converting data in a legacy system to a new system) was one of the critical tasks necessary to successfully implement a new financial system. The paper further pointed out that data conversion is one of the most frequently underestimated tasks. If data conversion is done right, the new system has a much greater opportunity for success. On the other hand, converting data incorrectly or entering unreliable data from a legacy system can have lengthy and long- term repercussions. The adage “garbage in, garbage out” best describes the adverse impact. Accurately converting data, such as account balances, from the pilots, as well as other systems that the Navy ERP is to replace, will be critical to the success of the Navy ERP. While data conversion is identified in the Navy ERP’s list of key risks, it is too early in the ERP system life cycle for the development of specific testing plans. However, our previous audits have shown that if data conversion is not done properly, it can negatively impact system efficiency. For example, the Army’s LMP data conversion effort has proven to be troublesome and continues to affect business operations. As noted in our recent report, when the Tobyhanna Army Depot converted ending balances from its legacy finance and accounting system—the Standard Depot System (SDS)—to LMP in July 2003, the June 30, 2003, ending account balances in SDS did not reconcile to the beginning account balances in LMP. 
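The balance reconciliation implied by the JFMIP guidance can be sketched as a simple account-by-account comparison of legacy ending balances with the new system’s beginning balances. The account numbers, amounts, and tolerance below are hypothetical assumptions for illustration; they do not represent SDS, LMP, or Navy ERP data.

# Hypothetical conversion reconciliation: legacy ending balances should equal
# the new system's beginning balances, account by account.
legacy_ending = {"1010": 2450000.00, "4610": 1320500.25, "6100": 845775.10}
new_beginning = {"1010": 2450000.00, "4610": 1318500.25, "6100": 845775.10}

def reconcile(legacy, new, tolerance=0.01):
    # Return every account whose converted balance differs beyond the tolerance.
    differences = {}
    for account in sorted(set(legacy) | set(new)):
        delta = new.get(account, 0.0) - legacy.get(account, 0.0)
        if abs(delta) > tolerance:
            differences[account] = delta
    return differences

exceptions = reconcile(legacy_ending, new_beginning)
for account, delta in exceptions.items():
    # Each unexplained difference must be researched and resolved before go-live.
    print(f"account {account} out of balance by {delta:+,.2f}")
if not exceptions:
    print("all converted balances reconcile")

A check of this kind makes unreconciled balances, such as those reported in the SDS-to-LMP conversion, visible before cutover rather than after.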
Accurate account balances are important for producing reliable financial reports. Another example is LMP’s inability to transfer accurate unit-of-issue data—the unit in which the quantity of an item is expressed, such as each, dozen, or gallon—from its legacy system to LMP. This resulted in excess amounts of material being ordered. Similar problems could occur with the Navy ERP if data conversion issues are not adequately addressed. The agreements between the Navy ERP and the other system owners, discussed previously, will be critical to effectively supporting the Navy ERP’s data conversion efforts. Navy officials could take additional actions to improve management oversight of the Navy ERP effort. For example, we found that the Navy does not have a mechanism in place to capture the data that can be used to effectively assess the project management processes. Best business practices indicate that a key facet of project management and oversight is the ability to effectively monitor and evaluate a project’s actual performance, cost, and schedule against what was planned. Performing this critical task requires the accumulation of quantitative data or metrics that can be used to evaluate a project’s performance. This information is necessary to understand the risk being assumed and whether the project will provide the desired functionality. Lacking such data, the ERP program management team can only focus on the project schedule and whether activities have occurred as planned, not whether the activities achieved their objectives. Additionally, although the Navy ERP program has a verification and validation function, it relies on in-house subject matter experts and others who are not independent to provide an assessment of the Navy ERP to DOD and Navy management. The use of an IV&V function is recognized as a best business practice and can help provide reasonable assurance that the system satisfies its intended use and user needs. Further, an independent assessment of the Navy ERP would provide information to DOD and Navy management on the overall status of the project, including the effectiveness of the management processes being utilized and identification of any potential risks that could affect the project with respect to cost, schedule, and performance. Given DOD’s long-standing inability to implement business systems that provide users with the promised capabilities, an independent assessment of the ERP’s performance is warranted. The Navy’s ability to understand the impact of the weaknesses in its processes will be limited because it has not determined the quantitative data or metrics that can be used to assess the effectiveness of its project management processes. This information is necessary to understand the risk being assumed and whether the project will provide the desired functionality. The Navy has yet to establish the metrics that would allow it to fully understand (1) its capability to manage the entire ERP effort; (2) how its process problems will affect the ERP cost, schedule, and performance objectives; and (3) the corrective actions needed to reduce the risks associated with the problems identified. Experience has shown that such an approach leads to rework and thrashing instead of making real progress on the project. SEI has found that metrics identifying important events and trends are invaluable in guiding software organizations to informed decisions. Key SEI findings relating to metrics include the following.
The success of any software organization depends on its ability to make predictions and commitments relative to the products it produces. Effective measurement processes help software groups succeed by enabling them to understand their capabilities so that they can develop achievable plans for producing and delivering products and services. Measurements enable people to detect trends and anticipate problems, thus providing better control of costs, reducing risks, improving quality, and ensuring that business objectives are achieved. The lack of quantitative data to assess a project has been a key concern in other projects we have reviewed. Without such a process, management can only focus on the project schedule and whether activities have occurred as planned, not whether the activities achieved their objectives. Further, such quantitative data can be used to hold the project team accountable for providing the promised capability. Defect-tracking systems are one means of capturing quantitative data that can be used to evaluate project efforts. Although HHS had a system that captured the reported defects, we found that the system was not updated in a timely manner with this critical information. More specifically, one of the users identified a process weakness related to grant accounting as a problem that will affect the deployment of HHS’s system—commonly referred to as a “showstopper.” However, this weakness did not appear in the defect-tracking system until about 1 month later. As a result, during this interval the HHS defect-tracking system did not accurately reflect the potential problems identified by users, and HHS management was unable to determine (1) how well the system was working and (2) the amount of work necessary to correct known defects. Such information is critical when assessing a project’s status. We have also reported that while NASA had a system that captured the defects that have been identified during testing, an analysis was not performed to determine the root causes of reported defects. A critical element in helping to ensure that a project meets its cost, schedule, and performance goals is to ensure that defects are minimized and corrected as early in the process as possible. Understanding the root cause of a defect is critical to evaluating the effectiveness of a process. For example, if a significant number of defects are caused by inadequate requirements definition, then the organization knows that the requirements management process it has adopted is not effectively reducing risks to acceptable levels. Analysis of the root causes of identified defects allows an organization to determine whether the requirements management approach it has adopted sufficiently reduces the risks of the system not meeting cost, schedule, and functionality goals to acceptable levels. Root-cause analysis would also help to quantify the risks inherent in the testing process that has been selected. Further, the Navy has not yet implemented an earned value management system, which is another metric that can be employed to better manage and oversee a system project. Both OMB and DOD require the use of an earned value management system. The earned value management system attempts to compare the value of work accomplished during a given period with the work scheduled for that period. By using the value of completed work as a basis for estimating the cost and time needed to complete the program, management can be alerted to potential problems early in the program. 
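As a minimal sketch of the earned value comparison just described (hypothetical task figures, not Navy ERP contract data), the calculation can be expressed as follows.

# Minimal earned value sketch: compare the value of work performed with what
# was scheduled and with what the work actually cost (here measured in hours).
def earned_value_status(budgeted_hours, percent_complete, actual_hours,
                        planned_percent_complete):
    earned = budgeted_hours * percent_complete            # value of work performed
    planned = budgeted_hours * planned_percent_complete   # value of work scheduled
    return {
        "cost_variance": earned - actual_hours,        # negative means over cost
        "schedule_variance": earned - planned,         # negative means behind schedule
        "cost_performance_index": earned / actual_hours if actual_hours else None,
    }

# A 100-hour task that is 50 percent complete after 50 hours of effort is on
# its estimate; the same progress after 65 hours signals a cost overrun.
print(earned_value_status(100, 0.50, 50, 0.50))
print(earned_value_status(100, 0.50, 65, 0.50))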
For example, if a task is expected to take 100 hours to complete and it is 50 percent complete, the earned value management system would compare the number of hours actually spent to complete the task to the number of hours expected for the amount of work performed. In this example, if the actual hours spent equaled 50 percent of the hours expected, then the earned value would show that the project’s resources were consistent with the estimate. Without an effective earned value management system, the Navy and DOD management have little assurance that they know the status of the various project deliverables in the context of progress and the cost incurred in completing each of the deliverables. In other words, an effective earned value management system would be able to provide quantitative data on the status of a given project deliverable, such as a data conversion program. Based on this information, Navy management would be able to determine whether the progress of the data conversion effort was within the expected parameters for completion. Management could then use this information to determine actions to take to mitigate risk and manage cost and schedule performance. According to Navy ERP officials, they intend to implement the earned value management system as part of the contract for the next phase of the project. The Navy has not established an IV&V function to provide an assessment of the Navy ERP to DOD and Navy management. Best business practices indicate that use of an IV&V function is a viable means to provide management reasonable assurance that the planned system satisfies its planned use and users. An effective IV&V review process would provide independent information to DOD and Navy management on the overall status of the project, including a discussion of any impacts or potential impacts to the project with respect to cost, schedule, and performance. These assessments involve reviewing project documentation, participating in meetings at all levels within the project, and providing periodic reports and recommendations, if deemed warranted, to senior management. The IV&V function should report on every facet of a system project such as: Testing program adequacy. Testing activities would be evaluated to ensure they are properly defined and developed in accordance with industry standard and best practices. Critical-path analysis. A critical path defines the series of tasks that must be finished in time for the entire project to finish on schedule. Each task on the critical path is a critical task. A critical-path analysis helps to identify the impact of various project events, such as delays in project deliverables, and ensures that the impact of such delays is clearly understood by all parties involved with the project. System strategy documents. Numerous system strategy documents that provide the foundation for the system development and operations are critical aspects of an effective system project. These documents are used for guidance in developing documents for articulating the plans and procedures used to implement a system. Examples of such documents include the Life-cycle Test Strategy, Interface Strategy, and Conversion Strategy. The IV&V reports should identify the project management weaknesses that increase the risks associated with the project to senior management so that they can be promptly addressed. 
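The critical-path analysis noted above can likewise be illustrated with a small sketch that finds the longest chain of dependent tasks in a project network; the task names, durations, and dependencies are assumptions invented for this example, not the Navy ERP schedule.

# Hypothetical critical-path sketch: the chain of dependent tasks with the
# greatest total duration determines the earliest possible project finish.
durations = {"design": 6, "configure": 8, "build_interfaces": 10,
             "convert_data": 7, "test": 5}                      # weeks
depends_on = {"design": [], "configure": ["design"],
              "build_interfaces": ["design"],
              "convert_data": ["configure"],
              "test": ["build_interfaces", "convert_data"]}

memo = {}
def earliest_finish(task):
    # Longest cumulative duration from project start through this task.
    if task not in memo:
        start = max((earliest_finish(p) for p in depends_on[task]), default=0)
        memo[task] = start + durations[task]
    return memo[task]

def critical_chain(task):
    # Walk back through the predecessor that drives this task's earliest start.
    preds = depends_on[task]
    if not preds:
        return [task]
    return critical_chain(max(preds, key=earliest_finish)) + [task]

end_task = max(durations, key=earliest_finish)
print("critical path:", " -> ".join(critical_chain(end_task)),
      f"({earliest_finish(end_task)} weeks)")

A slip in any task on that chain delays the entire project, which is why delays in critical-path deliverables deserve particular attention in any independent review.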
The Navy ERP program’s approach to the verification and validation of its project management activities relies on in- house subject matter experts and others who work for the project team’s Quality Assurance leader. The results of these efforts are reported to the project manager. While various approaches can be used to perform this function, such as using the Navy’s approach or hiring a contractor to perform these activities, independence is a key component to successful verification and validation activities. The system developer and project management office may have vested interests and may not be objective in their self-assessments. Accordingly, performing verification and validation activities independently of the development and management functions helps to ensure that verification and validation activities are unbiased and based on objective evidence. The Navy’s adoption of verification and validation processes is a key component of its efforts to implement the disciplined processes necessary to manage this project. However, Navy and DOD management cannot obtain reasonable assurance that the processes have been effectively implemented since the present verification and validation efforts are not conducted by an independent party. In response to the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, DOD has established a hierarchy of investment review boards from across the department to improve the control and accountability over business system investments. The boards are responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for their respective business areas. The various boards are to report to the Defense Business Systems Management Committee (DBSMC), which is ultimately responsible for the review and approval of the department’s investments in its business systems. To help facilitate this oversight responsibility, the reports prepared by the IV&V function should be provided to the appropriate investment review board and the DBSMC to assist them in the decision- making process regarding the continued investment in the Navy ERP. The information in the reports should provide reasonable assurance that an appropriate rate of return is received on the hundreds of millions of dollars that will be invested over the next several years and the Navy ERP provides the promised capabilities. To help ensure that the Navy ERP achieves its cost, schedule, and performance goals, the investment review should employ an early warning system that enables it to take corrective action at the first sign of slippages. Effective project oversight requires having regular reviews of the project’s performance against stated expectations and ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. The lack of management control and oversight and a poorly conceived concept resulted in the Navy largely wasting about $1 billion on four ERP system projects that had only a limited positive impact on the Navy’s ability to produce reliable, useful, and timely information to aid in its day-to-day operations. The Navy recognizes that it must have the appropriate management controls and processes in place to have reasonable assurance that the current effort will be successful. 
While the current requirements management effort is adhering to the disciplined processes, the overall effort is still in the early stages and numerous challenges and significant risks remain, such as validating data conversion efforts and developing numerous system interfaces. Given that the current effort is not scheduled to be complete until 2011 and is currently estimated by the Navy to cost about $800 million, it is incumbent upon Navy and DOD management to provide the vigilant oversight that was lacking in the four pilots. Absent this oversight, the Navy and DOD run a higher risk than necessary of finding, as has been the case with many other DOD business systems efforts, that the system may cost more than anticipated, take longer to develop and implement, and not provide the promised capabilities. In addition, attempting large-scale systems modernization programs without a well-defined architecture to guide and constrain business systems investments, which is the current DOD state, presents the risk of costly rework or even system failure once the enterprise architecture is fully defined. Considering (1) the large investment of time and money essentially wasted on the pilots and (2) the size, complexity, and estimated costs of the current ERP effort, the Navy can ill afford another business system failure. To improve the Navy’s and DOD’s oversight of the Navy ERP effort, we recommend that the Secretary of Defense direct the Secretary of the Navy to require that the Navy ERP Program Management Office (1) develop and implement the quantitative metrics needed to evaluate project performance and risks and use the quantitative metrics to assess progress and compliance with disciplined processes and (2) establish an IV&V function and direct that all IV&V reports be provided to Navy management and to the appropriate DOD investment review board, as well as to the project management office. Furthermore, given the uncertainty of the DOD business enterprise architecture, we recommend that the Secretary of Defense direct the DBSMC to institute semiannual reviews of the Navy ERP to ensure that the project continues to follow the disciplined processes and meets its intended cost, schedule, and performance goals. Particular attention should be directed toward system testing, data conversion, and development of the numerous system interfaces with the other Navy and DOD systems. We received written comments on a draft of this report from the Deputy Under Secretary of Defense (Financial Management) and the Deputy Under Secretary of Defense (Business Transformation), which are reprinted in appendix II. While DOD generally concurred with our recommendations, it took exception to our characterization that the pilots were failures and a waste of $1 billion. Regarding the recommendations, DOD agreed that it should develop and implement quantitative metrics that can be used to evaluate the Navy ERP and noted that it intends to have such metrics developed by December 2005. The department also agreed that the Navy ERP program management office should establish an IV&V function and noted that the IV&V team will report directly to the program manager. We reiterate the need for the IV&V function to be completely independent of the project. As noted in the report, performing IV&V activities independently of the development and management functions helps to ensure that the results are unbiased and based on objective evidence.
Further, rather than having the IV&V reports provided directly to the appropriate DOD investment review boards as we recommended, DOD stated that the Navy management and/or the project management office shall inform the Office of the Under Secretary of Defense for Business Transformation of any significant IV&V results. We reiterate our support for the recommendation that the IV&V reports be provided to the appropriate investment review board so that it can determine whether any of the IV&V results are significant. Again, by providing the reports directly to the appropriate investment review board, we believe there would be added assurances that the results were objective and that the managers who will be responsible for authorizing future investments in the Navy ERP will have the information needed to make the most informed decision. With regard to the reviews by the DBSMC, DOD partially agreed. Rather than semiannual reviews by the DBSMC as we recommended, the department noted that the components (e.g., the Navy) would provide briefings on their overall efforts, initiatives, and systems during meetings with the DBSMC. Given the significance of the Navy ERP, in terms of dollars and its importance to the overall transformation of the department’s business operations, and the failure of the four ERP pilots, we continue to support more proactive semiannual reviews by the DBSMC. As noted in the report, the Navy’s initial estimate is that the ERP will cost at least $800 million, and given the department’s past difficulties in effectively developing and implementing business systems, substantive reviews by individuals outside of the program office that are focused just on the Navy ERP by the highest levels of management within the department are warranted. Further, we are concerned that the briefings contemplated to the DBSMC may not necessarily discuss the Navy ERP, nor provide the necessary detailed discussions to offer the requisite level of confidence and assurance that the project continues to follow disciplined processes with particular attention to numerous challenges, such as system interfaces and system testing. In commenting on the report, the department depicted the pilots in a much more positive light than we believe is merited. DOD pointed out that it viewed the pilots as successful, exceeding initial expectations, and forming the foundation upon which to build a Navy enterprise solution, and took exception to our characterization that the pilots were failures and largely a waste of $1 billion. As discussed in the report, the four pilots were narrow in scope, and were never intended to be a corporate solution for resolving any of the Navy’s long-standing financial and business management problems. We characterized the pilots as failures because the department spent $1 billion on systems that did not result in marked improvement in the Navy’s day-to-day operations. While there may have been marginal improvements, it is difficult to ascertain the sustained, long-term benefits that will be derived by the American taxpayers for the $1 billion. Additionally, the pilots present an excellent case study as to why the centralization of the business systems funding would be an appropriate course of action for the department, as we have previously recommended. Each Navy command was allowed to develop an independent solution that focused on its own parochial interest. 
There was no consideration as to how the separate efforts fit within an overall departmental framework, or, for that matter, even a Navy framework. As noted in table 2, the pilots performed many of the same functions and used the same software, but yet were not interoperable because of the various inconsistencies in the design and implementation. Because the department followed the status quo, the pilots, at best, provided the department with four more stovepiped systems that perform duplicate functions. Such investments are one reason why the department reported in February 2005 that it had 4,150 business systems. Further, in its comments the department noted one of the benefits of the pilots was that they “proved that the Navy could exploit commercial ERP tools without significant customization.” Based upon our review and during discussions with the program office, just the opposite occurred in the pilots. Many portions of the pilots’ COTS software were customized to accommodate the existing business processes, which negated the advantages of procuring a COTS package. Additionally, the department noted that one of the pilots—SMART, on which, as noted in our report, the Navy spent approximately $346 million through September 30, 2004—has already been retired. We continue to question the overall benefit that the Navy and the department derived from these four pilots and the $1 billion it spent. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its issuance date. At that time, we will send copies to the Chairmen and Ranking Minority Members, Senate Committee on Armed Services; Senate Committee on Homeland Security and Governmental Affairs; Subcommittee on Defense, Senate Committee on Appropriations; House Committee on Armed Services; House Committee on Government Reform; and Subcommittee on Defense, House Committee on Appropriations. We are also sending copies to the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology and Logistics); the Under Secretary of Defense (Personnel and Readiness); the Assistant Secretary of Defense (Networks and Information Integration); and the Director, Office of Management and Budget. Copies of this report will be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov or Keith A. Rhodes at (202) 512- 6412 or rhodesk@gao.gov. Key contributors to this report are listed in appendix IV. Contact points for the Offices of Congressional Relations and Public Affairs are shown on the last page of the report. To obtain a historical perspective on the planning and costs of the Navy’s four Enterprise Resource Planning (ERP) pilot projects, and the decision to merge them into one program, we reviewed the Department of Defense’s (DOD) budget justification materials and other background information on the four pilot projects. We also reviewed Naval Audit Service reports on the pilots. In addition, we interviewed Navy ERP program management and DOD Chief Information Officer (CIO) officials and obtained informational briefings on the pilots. 
To determine if the Navy has identified lessons learned from the pilots, how they are being used, and the challenges that remain, we reviewed program documentation and interviewed Navy ERP program officials. Program documentation that we reviewed included concept of operations documentation, requirements documents, the testing strategy, and the test plan. In order to determine whether the stated requirements management processes were effectively implemented, we performed an in-depth review and analysis of seven requirements that relate to the Navy’s problem areas, such as financial reporting and asset management, and traced them through the various requirements documents. These requirements were selected in a manner that ensured that the requirements selected were included in the Navy’s Financial Improvement Plan. Our approach to validating the effectiveness of the requirements management process relied on a selection of seven requirements from different functional areas. From the finance area, we selected the requirement to provide reports of funds expended versus funds allocated. From the intermediate-level maintenance management area, we selected the requirement related to direct cost per job and forecasting accuracy. From the procurement area, we selected the requirement to enable monitoring and management of cost versus plan. In the plant supply functions area, we reviewed the requirement related to total material visibility and access of material held by the activity and the enterprise. From the wholesale supply functions area, we selected the requirements of in-transit losses/in-transit write-offs and total material visibility and access of material held by the activity and the enterprise. Additionally, we reviewed the requirement that the ERP be compliant with federal mandates and requirements and the U.S. Standard General Ledger. In order to provide reasonable assurance that our test results for the selected requirements reflected the same processes used to document all requirements, we did not notify the project office of the specific requirements we had chosen until the tests were conducted. Accordingly, the project office had to be able to respond to a large number of potential requests rather than prepare for the selected requirements in advance. Additionally, we obtained the list of systems the Navy ERP will interface with and interviewed selected officials responsible for these systems to determine what activities the Navy ERP program office is working with them on and what challenges remain. To determine if there are additional business practices that could be used to improve management oversight of the Navy ERP, we reviewed industry standards and best practices from the Institute of Electrical and Electronics Engineers, the Software Engineering Institute, the Joint Financial Management Improvement Program, GAO executive guides, and prior GAO reports. Given that the Navy ERP effort is still in the early stages of development, we did not evaluate all best practices. Rather, we concentrated on those that could have an immediate impact in improving management’s oversight. We interviewed Navy ERP program officials and requested program documentation to determine if the Navy ERP had addressed or had plans for addressing these industry standards and best practices. We did not verify the accuracy and completeness of the cost information provided by DOD for the four pilots or the Navy ERP effort. We conducted our work from August 2004 through June 2005 in accordance with U.S. 
generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received written comments on a draft of the report from the Deputy Under Secretary of Defense (Financial Management) and the Deputy Under Secretary of Defense (Business Transformation), which are reprinted in appendix II.

(Appendix listing of systems: Configuration Data Manager’s Database – Open Architecture; Common Rates Computation System/Common Allowance Development System; Department of the Navy Industrial Budget Information System; Integrated Technical Item Management & Procurement; Maintenance and Ship Work Planning; and Naval Aviation Logistic Command Management Information System (2 different versions).)

In addition to the contacts above, Darby Smith, Assistant Director; J. Christopher Martin, Senior Level Technologist; Francine DelVecchio; Kristi Karls; Jason Kelly; Mai Nguyen; and Philip Reiff made key contributions to this report.

The Department of Defense’s (DOD) difficulty in implementing business systems that are efficient and effective continues despite the billions of dollars that it invests each year. For a decade now--since 1995--we have designated DOD’s business systems modernization as “high-risk.” GAO was asked to (1) provide a historical perspective on the planning and costs of the Navy’s four Enterprise Resource Planning (ERP) pilot projects, and the decision to merge them into one program; (2) determine if the Navy has identified lessons from the pilots, how the lessons are being used, and challenges that remain; and (3) determine if there are additional best business practices that could be used to improve management oversight of the Navy ERP. The Navy invested approximately $1 billion in four ERP pilots without marked improvement in its day-to-day operations. The planning for the pilots started in 1998, with implementation beginning in fiscal year 2000. The four pilots were limited in scope and were not intended to be corporate solutions for any of the Navy’s long-standing financial and business management problems. Furthermore, because of the various inconsistencies in the design and implementation of the pilots, they were not interoperable, even though they performed many of the same business functions. In short, the efforts were failures and $1 billion was largely wasted. Because the pilots would not meet its overall requirements, the Navy decided to start over and develop a new ERP system, under the leadership of a central program office. Using the lessons learned from the pilots, the current Navy ERP program office has so far been committed to the disciplined processes necessary to manage this effort. GAO found that, unlike other systems projects it has reviewed at DOD and other agencies, Navy ERP management is following an effective process for identifying and documenting requirements. The strong emphasis on requirements management, which was lacking in the previous efforts, is critical since requirements represent the essential blueprint that system developers and program managers use to design, develop, test, and implement a system and are key factors in projects that are considered successful. While the Navy ERP has the potential to address some of the Navy’s financial management weaknesses, as currently planned, it will not provide an all-inclusive end-to-end corporate solution for the Navy. For example, the current scope of the ERP does not include the activities of the aviation and shipyard depots.
Further, there are still significant challenges and risks ahead as the project moves forward, such as developing and implementing 44 system interfaces with other Navy and DOD systems and converting data from legacy systems into the ERP system. The project is in its early phases, with a current estimated completion date of 2011 at an estimated cost of $800 million. These estimates are subject to, and very likely will, change. Broader challenges, such as alignment with DOD's business enterprise architecture, which is not fully defined, also present a significant risk. Given DOD's past inability to implement business systems that provide the promised capability, continued close management oversight--by the Navy and DOD--will be critical. In this regard, the Navy does not have in place the structure to capture quantitative data that can be used to assess the effectiveness of the overall effort. Also, the Navy has not established an independent verification and validation (IV&V) function. Rather, the Navy is using in-house subject matter experts and others within the project. Industry best practices indicate that the IV&V activity should be independent of the project and report directly to agency management in order to provide added assurance that reported results on the project's status are unbiased. |
The Palestinian territories, comprising the West Bank and Gaza, cover 2,402 square miles and have a combined population of over 4 million people. (See fig. 1.) Both the PA and Israel administer areas within the West Bank. The U.S. government, along with other countries, has provided intermittent security assistance to the Palestinians since 1993. In 1993, the Oslo Accord called for limited Palestinian self-rule and security responsibilities in the West Bank and Gaza. The subsequent 1995 Interim Agreement on the West Bank and the Gaza Strip divided the West Bank into three zones and allotted civil and security responsibilities, to varying degrees, to the Israeli government and the PA. The Government of Israel allowed the PA to establish some security forces and coordinated with the PA on the establishment of limited self-rule in the West Bank and Gaza. The United States provided some non-lethal equipment and a small amount of funding for salaries to help the newly created PA security forces improve their professionalism and combat terrorism. Other countries provided the PA with security assistance focused on training and equipping the security forces. The outbreak of the second intifada (insurrection), which State reports resulted in the death of more than 3,000 Palestinians and about 1,000 Israelis between 2000 and 2004, disrupted security assistance efforts. In response to the intifada, the Israeli security forces reoccupied much of the West Bank previously ceded to PA control, set up hundreds of checkpoints and roadblocks throughout the territory, erected a wall separating Israel and some Palestinian territory from the rest of the West Bank, and destroyed much of the Palestinian security infrastructure. As a result, Israeli-Palestinian security cooperation ceased and other governments curtailed or halted their security assistance to the PA. Amid the violence, efforts to negotiate a Middle East peace agreement began in 2000 at Camp David and continued until 2003. Security assistance efforts did not resume until after the PA, Israel, United States, United Nations, European Union, and Russia agreed in 2003 to implement the Roadmap for Peace, a U.S.-proposed performance-based strategy, which calls for an independent Palestinian state coexisting peacefully with the State of Israel and, among other things, provides a plan for establishing the security preconditions necessary to create an independent Palestinian state. The Roadmap, among other things, obligates the PA and Israel to undertake specific actions to improve security as part of the ongoing Middle East peace process. In particular, the Roadmap obligates the PA to perform the following actions: Issue an unequivocal statement reiterating Israel’s right to exist in peace and security and calling for an immediate and unconditional ceasefire to end armed activity and all acts of violence against Israelis anywhere. All official Palestinian institutions end incitement against Israel. Have its rebuilt and refocused security apparatus begin sustained, targeted, and effective operations aimed at confronting all those engaged in terror and dismantlement of terrorist capabilities and infrastructure. This includes commencing confiscation of illegal weapons and consolidation of security authority, free of association with terror and corruption. Consolidate all Palestinian security organizations into three services reporting to an empowered Interior Minister. 
In return, the Roadmap obligates the Israelis to perform the following actions: Issue an unequivocal statement affirming its commitment to the two-state vision of an independent, viable, sovereign Palestinian state living in peace and security alongside Israel, as expressed by President Bush, and calling for an immediate end to violence against Palestinians everywhere. All official Israeli institutions are also to end incitement against Palestinians. Take no actions undermining trust, including deportations, attacks on civilians; confiscation and/or demolition of Palestinian homes and property, as a punitive measure or to facilitate Israeli construction; destruction of Palestinian institutions and infrastructure; and other measures specified in the Tenet work plan; and Progressively withdraw the Israeli Defense Forces from areas occupied since September 28, 2000, and the two sides restore the status quo that existed prior to September 28, 2000, as comprehensive security performance moves forward. Palestinian security forces to progressively redeploy to areas vacated by the Israeli Defense Forces. To help the PA and Israel meet their Roadmap obligations and pave the way for a two-state solution, the Secretary of State created the office of the USSC in 2005. The USSC, which operated with no project funding until mid-2007, initially focused on providing advice and guidance to the PASF on its reform efforts while also coordinating the programs of several other security donors. In addition, USSC officials coordinated and consulted with Israeli and Palestinian authorities in connection with the PA’s assumption of responsibility for security in Gaza following Israel’s August 2005 withdrawal. In January 2006, the Palestinian people elected a Hamas majority to the Palestinian Legislative Council. Following the results of the January 2006 election and the subsequent formation of a Fatah-Hamas unity government in 2007, the Quartet on the Middle East announced it would continue to provide support and assistance to the Hamas-led government only if the government would agree to nonviolence, recognize the State of Israel, and respect previous Israeli-Palestinian peace agreements. Hamas never accepted these conditions. U.S. direct assistance to the Palestinians was reduced and restructured, with the focus shifting to providing humanitarian and project assistance indirectly through international and non- governmental organizations. During this time, USSC focused on coordinating international assistance aimed at improving Gaza’s economy and helped coordinate the efforts of Israel, Egypt, and the PA to regulate and control the key Gaza border crossings. USSC also coordinated with Britain and Canada to provide training assistance to the PA’s Presidential Guard, a security organization under the control of the PA president with responsibility for protecting PA officials and facilities and manning the border crossings. In June 2007, Hamas forcibly took control of the Gaza Strip. This led the PA President to issue an emergency decree suspending the operation of the PA government and appointing a new government, without Hamas participation, to administer the affairs of the West Bank during the state of emergency, under a politically independent Prime Minister. As a result, the United States decided to re-engage with the PA directly and increased the amount of U.S. assistance aimed at improving the economic and security climate in the West Bank and increasing the capacity of the PA. 
As described by USSC and State officials, the USSC’s current mission is to (1) facilitate PA-Israeli cooperation and allay Israeli fears about the nature and capabilities of the PASF; (2) lead and coordinate international assistance for the PASF provided by the United States and other international donors to eliminate duplication of effort; and (3) help the PA rightsize, reform, and professionalize its security sector by advising the PA and by training and equipping the PASF to meet the Palestinians’ obligations outlined in the Roadmap. The head of the USSC, a lieutenant general in the U.S. Army, also serves as the deputy for security issues to the U.S. Special Envoy for Middle East Peace. The office of the USSC has a core staff of approximately 45 personnel as of March 2010. Headquartered in Jerusalem, the USSC includes up to 16 U.S. military personnel and several U.S. civilians. About 17 military staff provided to USSC by Canada operate in the West Bank, and two or more British military personnel support the USSC at the PA Ministry of Interior in Ramallah. USSC also maintains staff at the U.S. Embassy in Tel Aviv as liaisons to the Government of Israel. State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) maintains staff in Jerusalem to implement INL program funds, most of which underwrite USSC activities. About 28 INL-directed DynCorp International contractors assist with USSC training programs in the West Bank and Jordan. Other INL staff and contractors manage equipment warehouse operations in Jerusalem and oversee construction projects in the West Bank under this program. Under the current president and prime minister, the PA formalized plans to reorganize and rebuild ministries and security forces in the West Bank with donor assistance between 2008 and 2010. The PA has consolidated a 23,000-strong security force under Presidential and Interior Ministry control, as called for in the Roadmap Agreement. As shown in figure 2, the PASF is comprised of uniformed services, civilian organizations, and intelligence offices. The U.S. government, through USSC and INL, has allocated over $160 million in funding for the training of certain units of the PASF’s uniformed services, primarily the National Security Forces (NSF), since 2007. USSC has also helped provide State-funded vehicles and nonlethal individual and unit equipment to both the NSF and Presidential Guard, totaling about $89 million. In addition, State has allocated approximately $99 million toward the renovation or construction of numerous PASF installations. Finally, USSC and INL have undertaken activities to increase the PA’s capacity, including building the Ministry of Interior’s capacity to plan and oversee the PASF and coordinate international donor assistance. State has allocated $22 million in funding for these programs since 2007. (See table 1.) State also has requested a total of $150 million in additional International Narcotics Control and Law Enforcement (INCLE) funding for security assistance to the PA for fiscal year 2011, including $56 million for training activities, $33 million for equipping, $53 million for infrastructure activities, and $3 million for strategic capacity building activities. Since 2007, State, primarily through USSC, has allocated more than $160 million to support training of PASF units in Jordan and the West Bank. USSC has focused its training programs mainly on the NSF and, to a lesser extent, the Presidential Guard.
The main component of the USSC training-related activities is battalion-level basic law enforcement and security training conducted at the Jordanian International Police Training Center outside Amman, Jordan. As of January 2010, the Jordanian International Police Training Center had trained four NSF battalions and one Presidential Guard battalion totaling about 2,500 personnel. The trained units include both existing units (Presidential Guard 3rd and NSF 2nd special battalions) and newly recruited battalions (NSF 3rd and 4th special battalions). This training consists of 19 weeks of basic training for all members of a battalion, which usually comprises approximately 500 troops. USSC officials told us that they currently plan to train a total of 10 NSF battalions at the Jordan center. State Department officials reported that U.S. security assistance allocated to training from 2007 through 2010 covers the training of 7 of the proposed 10 battalions as shown in figure 2. This would allow one trained NSF special battalion to be deployed in each PA governorate in the West Bank (except in the municipality of Jerusalem, where the PA does not have security control) and one battalion to serve as a reserve for use as needed in any governorate. The basic training includes a mix of classroom and practical exercises focused on the broad areas of firearms operations, crowd control, close quarters operations, patrolling, detainee operations, and checkpoint operations. This training is also designed to help the PA transform and professionalize its security forces by producing well-trained, capable graduates able to perform security-related duties supporting the Palestinian Civil Police or other duties as directed by the PA. According to U.S. officials, the training is structured to train by battalion to foster unity of command and build camaraderie among the troops. Although USSC and INL designed the syllabus for this training in consultation with the PA, instructors from Jordan’s Public Security Directorate conduct the training under the supervision of INL contractors. According to State and USSC officials, the United States fully vets all troops participating in USSC-sponsored training to ensure that no U.S. assistance is provided to or through individuals or entities that advocate, plan, sponsor, engage in, or have engaged in, terrorist activities. In addition, the PA, Israel, and Jordan also vet participants. Prior to the commencement of each battalion’s basic training course, the program trainers conduct three concurrent 4-week preliminary training courses for the battalion’s officers, noncommissioned officers, and drivers. These preliminary courses, intended to provide personnel with the fundamental skills needed during the battalion training, focus on leadership skills for the officers and noncommissioned officers and advanced driving skills for the drivers. The Jordanian International Police Training Center also offers four concurrent 4-week specialized training courses for battalion personnel following their completion of the basic training course. The USSC also supports, and INL funds, specialized courses in the West Bank to train and assist members of the NSF special battalions and some other PASF organizations in areas such as leadership, human rights, media awareness, equipment maintenance, and food service operations. Some of the courses continue specialized training for selected members of the NSF battalions that received basic training at the Jordanian International Police Training Center.
However, other courses—including a senior leadership course and an intermediate leadership course—are open to all branches of the PA security services. The senior leadership course, first offered in 2008, is a 2-month course for about 40 commanding officers from all branches of the PASF; as of February 2010, USSC had offered the course several times. International trainers taught the initial sessions, and a team composed of PA and international instructors conducted the most recent senior leadership course. The intermediate leadership course is a new class for middle-ranking and noncommissioned officers that adapts principles taught in the senior leaders’ course. Altogether, USSC has conducted or supported 24 different specialized courses for PASF personnel in the West Bank between mid-2008 and March 2010, and plans to continue sessions of many of these courses while offering at least two new courses by the end of 2010. Appendix II provides details on current and planned USSC courses. Other, smaller U.S. programs train Presidential Guard and Civil Defense troops. In 2008, State’s Bureau of Diplomatic Security provided limited training exclusively to the Presidential Guard through its Anti-Terrorism Assistance program. This training focused on police tactical unit operations, leadership development at the middle and senior levels, investigative skills, and crisis response capabilities to enhance the operational effectiveness of the Presidential Guard. Finally, USSC and INL plan to support limited civil defense training for the Palestinian Civil Defense corps at Jordan’s regional civil defense training academy. Since 2007, State has allocated approximately $89 million to provide nonlethal equipment to 7 NSF battalions and the Presidential Guard. State plans to equip 10 NSF special battalions, as shown in figure 3. USSC is working to ensure that these security forces are properly equipped while garrisoned in their operations camps and while operating throughout the West Bank. USSC intends to accomplish this by providing an initial issuance of nonlethal equipment to the battalions that have received basic training at the Jordanian International Police Training Center. As of March 2010, the USSC had provided the 3rd Presidential Guard battalion with an initial issuance of equipment, and had provided partial issuances of equipment to the 1st, 2nd, 3rd, and 4th NSF special battalions. State also reported that it has submitted the 5th NSF special battalion’s equipment package list to the Israeli government for approval. USSC and INL, in consultation with the PA, developed the lists of equipment provided to each battalion, which the Government of Israel must also approve. The initial issuance of nonlethal individual and unit equipment for each NSF special battalion includes uniforms with protective gear and operational equipment, including riot shields, batons, and handcuffs, as well as computers, tents, basic first aid kits, and support vehicles. (For a list of the specific equipment provided to an NSF battalion see app. III.) USSC also provided the Presidential Guard battalion with a similar initial issuance of nonlethal individual and unit equipment, adapted for its mission-specific needs. The USSC and INL also plan to provide search-and-rescue vehicles to the PA civil defense forces. Because all U.S.-provided equipment is subject to end-use monitoring, INL officials and documents note that State maintains the right to examine the property and inspect the records that govern its use.
In addition, the United States provided the PA with equipment and training to implement and maintain an inventory system to record and track all U.S. equipment deliveries and disbursements. Since 2007, State has allocated approximately $99 million to renovate or construct PASF installations. The main focus of USSC and INL infrastructure activities is to fund and oversee construction of operations camps for 9 of the 10 NSF battalions trained at the Jordanian International Police Training Center. U.S. security assistance allocated to infrastructure from 2007 through 2010 covers the renovation or construction of six of the proposed nine camps (see fig. 4). The operations camps will serve as garrison facilities for the battalions as well as function as bases for conducting operations in the West Bank. USSC and INL are also funding and overseeing the construction or renovation of training and ministry facilities in the West Bank. Figure 5 shows the six planned and six ongoing or completed infrastructure projects, including the two NSF operations camps, as of March 2010. The six ongoing or completed infrastructure projects include NSF operations camps, training facilities, and Ministry of Interior facilities, and account for about $41 million of the total allocated for infrastructure. They are in varying stages of completion; however, INL officials expect that all ongoing infrastructure projects will be completed by early 2011. Jericho NSF Operations Camp. This operations camp is to serve as the garrison for the 2nd Special Battalion. The camp is to accommodate approximately 750 personnel and provide workspaces, basic vehicle maintenance facilities, parking for approximately 145 squad vehicles and 40 large vehicles, clinical facilities, tactical communications facilities, separate officer berthing and accommodation spaces, a logistic warehouse facility, and K-9 animal housing spaces. State allocated about $11.3 million to this project, which USSC and INL expect to complete by mid-2010. (See figs. 6 and 7.) Jenin NSF Operations Camp. This operations camp will consist of two barracks buildings that will accommodate approximately 576 troops, one officers' accommodations building that will house over 100 officers, an operational center, a mess hall, and a gym. State allocated $11 million to this renovation, which USSC and INL officials expect to complete by the end of 2010. Hebron NSF and Special Police Force (SPF) Building. In this joint NSF and Special Police Force building, the police occupy the ground floor and the NSF the first floor. The goal is to make this building habitable by units from the NSF special battalions deployed to Hebron and make it usable for its intended security functions, including the provision of a safe and secure operating environment that can be shared with other PA security services. State allocated $170,000 for this renovation, which USSC and INL expect to complete in mid-2010. Nuweimah Training Center. The current training facility in Nuweimah is being refurbished and expanded with funding from INL to serve as an NSF training facility. The facility is to include accommodations for approximately 2,000 troops and 24 classrooms for approximately 1,500 students. The PA's initial plan was to renovate two NSF basic training facilities in Jericho—Nuweimah and Alami. However, according to an INL official, the PA and USSC decided not to renovate the Alami site, owing to difficulties in securing needed land titles, and instead to shift all funds to Nuweimah.
State allocated about $8 million to this project, which USSC and INL expect to complete by early 2011. Presidential Guard Training College (Jericho). The PA intends to use the college to house and train 500 law, order, and security personnel at any given time. This facility has classroom space and accommodations for 250 personnel, as well as dining and support facilities for 500 personnel. State has allocated about $9 million to construct this facility. According to an INL official, original work on the facility was carried out by the UN Office for Project Services, under INL supervision. This facility is currently fully operational, and USSC and INL expect construction to be complete by mid-2010. Ministry of Interior's Strategic Planning Directorate renovation (Ramallah). USSC is renovating space in the Ministry of Interior to provide additional office space and a training hub for the Strategic Planning Directorate. The renovation is to add 90 spaces for new staff and two large classrooms, a meeting room, and a security room. State allocated $1.1 million to this renovation, which was completed in February 2010. To spend the remaining $58 million in infrastructure funding, USSC and INL proposed additional projects, including building NSF operations camps in Hebron, Bethlehem, Ramallah, Tulkarm, and Tubas, as well as a civil defense center in or near Ramallah. However, one U.S. official told us that, because of difficulties in obtaining suitable land and other delays, the USSC is reviewing other options for the NSF operations camps, including constructing temporary operations camps until a permanent site can be identified or renovating existing joint security force facilities to allow them to be used to garrison NSF special battalions. As of March 2010, preliminary design work had begun on a temporary operations camp near Tubas. Since 2007, State has allocated approximately $22 million for capacity-building activities, focused mainly on creating the Ministry of Interior's Strategic Planning Directorate. The Minister of Interior oversees all the security forces reporting to the PA prime minister. According to an INL document, the directorate conducts strategic planning to support security decision making at the executive and ministry level to help the PA establish law and order and facilitate other longer-term security-sector reforms. The directorate is staffed by individuals with expertise in strategic planning, logistics, and other areas. According to USSC officials, when Gaza fell to Hamas in mid-2007 and the PA President issued presidential decrees declaring a state of emergency, suspending the current government, and forming a new, more moderate government, the Ministry of Interior lost its entire staff, leaving the newly appointed minister the task of building an entirely new ministry. INL-funded activities include technical assistance to the Strategic Planning Directorate, in particular the funding and assignment of six international technical advisors to work within the directorate, as well as training for Ministry and Strategic Planning Directorate staff. According to State documents and officials, as of April 2010, after 2 years of service, the contracts of all six of these advisors had expired, and, at the request of the Minister of Interior, State did not renew them. According to a State official, the Minister of Interior stated in January 2010 that this effort had been concluded to the Ministry's satisfaction, so there are no plans to replace these advisors.
He noted that State has offered to make technical assistance available on an ad hoc basis and at the request of the Minister, and, along with other international donors, plans to continue to fund other training and equipping efforts at the Ministry in fiscal year 2011. In addition to forming the Strategic Planning Directorate, USSC and INL have undertaken other programs to increase the PA's capacity. Examples include the following: USSC and INL are providing assistance in building the PA's capacity to coordinate international security assistance. As part of this effort, USSC serves as a technical advisor to a security sector aid-coordinating body co-chaired by the Interior Minister and the government of the United Kingdom. USSC and INL are supporting a Canadian-funded effort to develop PASF capacity at the governorate level through the creation of Joint Operations Centers, which are intended to give PASF area commanders in each governorate the command and communications facilities necessary to conduct integrated security operations. In support of the PA justice sector, INL launched a $1.5 million small-scale justice-sector assistance project in Jenin. The program provides technical assistance, training, and modest amounts of equipment to improve the capacity of the police to conduct criminal investigations and help the public prosecutor's office manage its caseload. A U.S. official reported that this program, if successful, could be replicated in other governorates. U.S. and international officials have observed improved security conditions in some areas of the West Bank since the PA began deploying units trained and equipped with USSC assistance, although they acknowledge these improvements may not be directly or wholly attributable to USSC programs. However, State and USSC have not assessed how their programs contribute to the achievement of the PA's Roadmap obligations because they have not developed clear and measurable outcome-based performance indicators and targets linking their program activities to stated U.S. program objectives. Numerous U.S., PA, Israeli, and other government officials stated that both the PA and the Government of Israel are satisfied with the impact USSC-trained and -equipped PASF battalions have had on improving the security conditions in the West Bank. PA and U.S. officials cited these improvements as examples of how U.S. security assistance is aiding PA progress toward attaining its security obligations under the Roadmap, including having its rebuilt and refocused security apparatus begin sustained, targeted, and effective operations to dismantle terrorist capabilities and infrastructure. These improved conditions include the following. Better PASF capacity to control potentially violent situations. According to U.S., international, PA, and Government of Israel officials, USSC-supported and -trained PASF units contributed successfully to restoring security and conducted counterterrorism operations in Jenin, Hebron, Bethlehem, and other areas between 2008 and 2009. Several U.S. and international officials also noted that the lack of spontaneous or organized violence in PASF-controlled areas in response to the December 2008 through January 2009 Israeli incursion into Gaza was an indicator of the PASF's growing capacity to anticipate and handle large-scale demonstrations and limit potential violence. Fewer Israeli government checkpoints. Several U.S.
officials suggested that the USSC also could point to some indicators as measures of the growing effectiveness of USSC-supported security forces, including the decline in the number of key manned Israeli security checkpoints in the West Bank. However, the officials stated they could not independently verify the validity or accuracy of the reported declines, nor would they directly attribute these outcomes to USSC activities. Revived economic activity. According to PA and U.S. officials and documents, the subsequent revival of private investment in Jenin, Hebron, Bethlehem, and other areas where USSC-trained and -equipped PASF battalions were deployed is another indicator that USSC assistance has influenced the security situation, although a senior PA official noted that PA fiscal policies may have also contributed to this revival. Improved public attitudes toward security forces. In addition, both State and PA officials noted that Palestinian polling suggests people's views of the PASF have improved, and a State report cited such a poll as indicating the West Bank populace's growing understanding of and confidence in its security forces. Although State and USSC report on PASF program outputs such as the number of personnel vetted, trained, and equipped, USSC has not defined or established outcome-based performance measures to assess the progress, impacts, and estimated costs of achieving USSC objectives. For example, USSC documents and officials note that USSC objectives include helping the PA create right-sized, professional security forces in support of its Roadmap obligations but do not specify measurable outcome-based program performance indicators. USSC and State officials attributed the lack of clear and measurable outcome-based performance indicators and their associated targets for their programs to three factors—(1) changing force requirements, (2) the early stages of PA planning and its limited capacity to rebuild and sustain its security forces, and (3) lack of detailed guidance from State about USSC program objectives, time frames, and reporting requirements. First, the PA's planned force requirements have undergone several revisions. According to a U.S. official, and as U.S. and PA documents demonstrate, the planned size and composition of the NSF have changed from seven special battalions (five in the West Bank, two in Gaza) in early 2007 to five special battalions solely in the West Bank after the Hamas takeover of Gaza in June 2007. According to State officials and documents, the PA increased the number of battalions for the West Bank from five to seven by mid-2008, although the projected total number of personnel remained at 3,500 as each battalion was reduced in size from 700 to 500 personnel, in part to create smaller units better suited for the urban environment in which they would operate. In 2009, the PA raised the projected size of the NSF to 10 special battalions, according to USSC officials. Some State officials and documents also noted that the PASF has not clarified the role of the Presidential Guard and that some of its units had assumed gendarmerie tasks beyond its original mandate, which may overlap with NSF responsibilities. The revised and unclear requirements reflect the fact that the parties to the Roadmap agreement—the PA and Israel—have not agreed on common measures to assess progress in meeting their Roadmap security obligations, according to USSC officials.
For example, a September 2008 USSC report noted that the Government of Israel and the PA have not developed "effects-based metrics" needed to define a successful PASF security or counterterrorism effort under the Roadmap. State officials stated that the Government of Israel prefers not to establish objectives or measures that might limit its flexibility to conduct security operations within the West Bank. Second, the PA's plans and capacity to reform, rebuild, and sustain its security forces are still in a relatively early stage of development. As a result, State and USSC officials said it is difficult to set outcome-based targets to measure the progress or outcomes of their programs. For example, the PA's capacity to direct its own transformation was lacking until recently. According to a senior PA official, the Minister of Interior did not consolidate within the ministry the authority to request, accept, and coordinate all foreign donor security assistance until August 2009. Third, USSC officials said that State did not give USSC a "blueprint" for attaining defined and measurable objectives for its programs within a set period of time, or for estimating the amount and type of resources needed to achieve such USSC goals as aiding in the transformation of the PA security sector and the creation of a professional, right-sized security force. According to State, this stemmed from the absence of a requirements-based budget allocation process for USSC programs until 2008. Since then, however, State officials said they have required USSC and INL to provide performance indicators, beginning with the fiscal year 2009 Jerusalem mission strategic plan. Furthermore, a senior USSC official said USSC had little incentive to emphasize or develop performance targets because State had shown little interest in tracking performance in the past; in fact, regular monthly reports from USSC to State on its activities resumed only in November 2009 after a hiatus of more than a year. Despite these factors, deriving indicators to measure and manage performance against an agency's results-oriented goals has been identified by GAO as a good management practice because such indicators help provide objective and useful performance information for decision makers faced with limited resources and competing priorities. GAO has previously reported that although agency managers often encounter difficulties in setting outcome-oriented goals and collecting useful data on expected results, it is difficult to design effective strategies or measure the impact of programs without them. State and USSC officials noted that USSC was developing a campaign plan for release in mid-2010 to help the Palestinians implement their own revised security strategy, which had still not been released as of March 2010, and expected the plan to incorporate performance indicators to the extent possible. According to U.S. military doctrine, effective planning cannot occur without a clear understanding of the desired end state and the conditions that must exist in order to end the operation. Moreover, a campaign plan should provide an estimate of the time and forces required to reach the conditions for mission success or termination. Determining when conditions are met requires "measures of effectiveness," such as outcome-based performance measures.
GAO has reported on the importance of outcome-based performance indicators as a key characteristic of effective national security strategy planning and a necessary component of developing and executing campaign plans based on these strategies (see list of related GAO products at end of report). Although the fiscal year 2010 Jerusalem mission strategic plan identifies performance indicators for U.S. security assistance programs, the targets to measure progress toward achieving these indicators focus on program outputs rather than program outcomes. For example, the plan identified the performance indicator "building Palestinian security capabilities" to assess progress toward achieving State's broader goal of reforming Palestinian security forces to improve law and order and reduce terrorism. However, this indicator is measured based on output targets such as "completing the training and equipping of at least one PG and one NSF battalion" in fiscal year 2008 rather than on outcomes such as reduced terrorism as measured by, for example, changes in the number of terrorist-related incidents or changes in crime rates. Moreover, neither the performance plan nor USSC documents establish measurable outcome targets for assessing progress toward such stated USSC objectives as creating a "right-sized, professional" security force or helping the PA transform its security sector. Nor do these plans and documents contain information on expected time frames or estimated total costs for achieving these goals. State and USSC officials acknowledged that it would be useful to describe the impacts of U.S. security assistance on such outcomes as reductions in the number of Israeli security checkpoints in the West Bank. Similarly, they acknowledged it would be useful to tailor some survey questions to establish baselines and assess over time the extent to which polling data suggesting growing Palestinian confidence in their security can be attributed to the conduct and actions of USSC-trained PASF personnel, but noted the difficulty in separating the impact of U.S. security assistance from the impact of such external factors as Israeli political and security actions. In March 2010, State and USSC officials said that they had tasked an officer to clarify how USSC activities achieve State objectives and to improve reporting on USSC performance. Logistical constraints on personnel movement and access, equipment delivery, and acquisition of land for infrastructure projects challenge the implementation of U.S. security assistance programs. In addition, State, USSC, and international officials and documents note that programs to develop the capacity of the civil police and the justice sector are not proceeding at the same pace as U.S. security sector reform programs. Logistical constraints largely outside U.S. control, involving personnel movement and access, equipment approval and delivery, and land acquisition, challenge the implementation of U.S. security assistance in the West Bank and Gaza. State travel restrictions and Israeli Defense Force security checkpoints limit the movement and access of U.S. personnel into and within the West Bank. State restricts U.S. government personnel travel into and within the West Bank and requires that they travel in armored vehicles with security teams when traveling to State-designated high-threat areas. However, such restrictions do not apply to personnel from other countries supporting the USSC, such as the United Kingdom and Canada, according to State officials.
As a result, according to U.S. officials, USSC relies on non-U.S. personnel to visit Palestinian security leaders on a daily basis, gauge local conditions, and conduct training in the West Bank. Israeli security checkpoints at the West Bank border also limit the movement and access of U.S. government personnel traveling into and out of the West Bank, according to U.S. officials. For example, on more than one occasion, U.S. government delegations, including staff from USSC and State, were prevented from entering or exiting the West Bank, according to USSC and State officials. PA officials also face movement and access difficulties crossing at Israeli checkpoints when traveling into and out of the West Bank, which hampers the ability of USSC to meet with PA officials outside of the West Bank. While a process exists for equipment approval and delivery as shown in figure 8, U.S. officials said problems affecting the approval and delivery of equipment have hampered USSC's ability to equip the PASF in a timely manner. USSC officials noted that without a significant effort at a higher political level to streamline this process, delays will frequently occur with little recourse available to USSC. Delays in equipment approval and delivery have occurred throughout this process, for example: Delays in approval. An absence of agreed-upon terms for the approval of equipment requests, such as equipment specifications or a set time frame to make approval decisions, has resulted in delayed Israeli approval of shipments of USSC and other donor equipment, according to U.S. and other donor officials. For example, State and EU donor officials told us that the Government of Israel has not agreed to specific procedures for pre-approval of equipment orders as it prefers to continue to approve or deny each equipment request on a case-by-case basis. According to an Israeli official, each equipment request must be reviewed on its own merits, as specifications can change. For example, an Israeli official stated that although the Government of Israel had approved procurement of a shipment of raincoats, it did not guarantee the approval of future shipments of raincoats of comparable types and quantities. In addition, because the parties have not agreed on time frames for submitting or approving equipment requests, significant differences between the amounts of time needed to approve various items have constrained USSC's ability to estimate equipment delivery dates. For instance, some vehicles ordered for the Presidential Guard and NSF 2nd Special Battalion in March 2008 were delivered in June 2008 while others were not delivered until January 2009. On the other hand, USSC officials said in March 2010 that the time needed to complete the approval process has declined from approximately 3 months to 2 weeks for items that had been approved in prior shipments by the Israeli government. Nevertheless, these officials note there is no guarantee that previously approved items will continue to be approved by the Government of Israel. For example, they noted that as of November 2009 INL had paid at least $176,000 to store a $2.3 million shipment of approximately 1,400 radios and associated gear that the Israeli government had approved for delivery but that was impounded by Israeli customs upon arrival in port in early 2009 after the Government of Israel revoked this approval. Delays in delivery.
Delays have occurred at Israeli customs and during shipment into the West Bank, constraining USSC’s mission of properly equipping the NSF and Presidential Guard, according to officials we spoke with. USSC officials stated that shipping items to Israel takes about 1 month by sea and as little as 1 day by air from the United States. However, USSC officials also stated that while the time needed for Israeli customs to approve shipments averages about 2 months, approval can take up to a year or more for items that require modification by the Israelis in order to pass Israeli customs inspections. For example, USSC officials noted that vehicles and trailers were held up as Israeli customs required modifications to their lights, brakes, and other specifications before they would release them. According to an Israeli official, the PA also contributes to the equipment delays at customs by not following the shipment instructions in the approved requests. For example, in one case equipment was shipped with other types of goods destined for the West Bank and the quantity of equipment was lower than the approved amount. In another case, the shipper consolidated a shipment of PASF items with items for other customers. The increasing number of equipment deliveries in 2009 has also added to the delays in clearing customs, according to the Israeli government official. Additional unexpected delays in delivery have occurred when Israeli customs inspectors have not released equipment shipments they have approved, according to USSC and INL officials. These officials told us that security inspections required at Israeli border crossings and checkpoints in the West Bank have also delayed delivery. USSC and INL officials noted that they have taken steps to improve their ability to deliver equipment on time, including: developing standardized NSF battalion equipment packages to minimize Israeli opportunities to question equipment specifications; requiring the contracted freight forwarder in Virginia to check every item against shipment manifests prior to shipment; and making greater use of airfreight delivery. In addition, USSC officials said they have shortened the lead times needed to procure and ship equipment over time by pre-ordering items previously approved by the Israelis to be included in later shipments. Moreover, USSC, INL, and PA officials and staff have found it difficult to check on the status of the shipment or to hold parties accountable for delays, according to USSC officials. These officials stated that conflict appears to exist between various Israeli government departments related to the equipping process, which periodically results in unexplained delays in equipment release or approval. According to USSC, it is unclear with whom USSC or PA officials should speak to seek redress for unexplained equipment delays; as a result, delays are often elevated to high level U.S. and Israeli officials, who then negotiate a resolution. These delays have hampered USSC’s ability to equip the trained NSF and Presidential Guard battalions in a timely manner. While the USSC planned to deliver equipment to these battalions upon their graduation from JIPTC, some of these battalions have operated for several months after graduation without all of their needed equipment. 
For example, INL ordered equipment for the PG 3rd battalion and NSF 2nd special battalion in December 2007 to be distributed at the time of their May 2008 graduation; however, USSC and INL officials noted that although these two battalions had received all of their vehicles as of March 2010, they had yet to receive 14 percent of their equipment. As of March 2010, the 3rd and 4th NSF special battalions that had completed training in Jordan prior to December 2009 had received over 90 percent of their vehicles but only 44 to 50 percent of their other equipment, according to State officials. These officials said that the 1st NSF special battalion, which graduated in January 2010, had not received any of the purchased vehicles and had received only 2 percent of its other equipment as of March 2010. INL officials stated that these equipment shortfalls had not significantly affected the ability of the NSF special battalions to operate once they were deployed back to the West Bank. However, they acknowledged that these units had been deployed to the field lacking critical items, such as helmets, armored vests, and communications gear, that had been proposed, and in some cases procured, by the USSC and INL but had not been approved for delivery by the Government of Israel. The completion of U.S.-funded infrastructure projects has been delayed by constraints on acquiring land in the West Bank that are largely outside of U.S. control, according to USSC, INL, and PA officials. Israel must approve the location of all proposed facilities and does not set formal standards by which locations are approved, according to USSC officials. These officials also said the PA is largely limited to building in Oslo Area A, which is solely under PA authority but comprises less than 20 percent of the West Bank's territory. USSC officials also stated that it is difficult to determine whether a proposed installation site includes land solely in Area A. In addition, the Government of Israel requires that proposed installation sites not be near Israeli settlements or access routes. After the Government of Israel approves an installation site, USSC officials stated they face a lengthy Palestinian process for establishing ownership rights and obtaining legal title to the land. These officials noted that conflicting land and property claims on a site also create a challenge to acquiring land for PA infrastructure projects. Figure 9 depicts areas where the PA usually can acquire land for security installations in the West Bank (Area A) and areas where it cannot (Areas B and C). Because of these restrictions and delays, USSC and INL officials said efforts to develop NSF operations camps beyond the two already under construction remained stalled as of March 2010. Owing to delays in acquiring suitable land for permanent camps and the need to house newly trained battalions, USSC and INL officials said they have built temporary operations camps. Similarly, because of delays in land acquisition, USSC and INL cancelled plans to renovate an NSF facility in Alami and redirected the funding to the Nuweimah facility, according to an INL official. Originally designed to house 700 trainees, Nuweimah will be expanded to house 2,000 trainees upon completion in fiscal year 2011. To work around restricted property in certain urban areas, the USSC and INL are planning to construct or renovate multistory buildings within urban-based security compounds known as muquata'as.
This effort is underway or being planned for compounds in Tulkarm and other urban locations. As a result, some U.S.-funded PASF centers are holding more troops than originally planned and facilities are being built in a way that allows them to be expanded if needed, according to USSC and INL officials. These land acquisition problems constrain USSC's goal of providing housing for each of the NSF special battalions upon their completion of training in Jordan. U.S. and international officials noted that PA civil police and justice sector reforms are not proceeding at the same pace as U.S. security sector reforms. Palestinian Civil Police (PCP) capacity is limited. According to EUPOL COPPS advisors, the PCP lags behind all other Palestinian security forces in funding for infrastructure and equipment. Although infrastructure exists, such as joint operation centers and prison facilities, about 60 police stations need to be constructed or rebuilt and existing facilities need to be refurbished, according to a USSC document. The PCP also has difficulties obtaining certain types of equipment, such as fingerprinting equipment, radios, and personal protective equipment, according to EUPOL COPPS. Also, the judicial police—charged with serving court orders, protecting judges and judicial facilities, and transporting prisoners—lack vehicles and operating capabilities outside of Ramallah, according to a USSC report. According to a U.S. official, a decision was made in 2008 to increase the size of the NSF in the West Bank from 5 to 10 NSF special battalions in part to compensate for the lack of PCP capacity. According to State and international officials, the NSF and the PCP do not coordinate programs to a large extent. Although the NSF receives training on operating with the PCP at JIPTC and the PCP is trained on operating with the NSF at many different levels, including at USSC-sponsored courses, coordination between the NSF and the PCP needs to be strengthened, according to U.S. and other officials. While an INL official said the PA is making efforts to increase coordination between these two security forces through the Interior Ministry's Strategic Planning Directorate, the PCP's lack of communication equipment, such as radios, limits coordination. A EUPOL COPPS official told us that it is difficult for the PCP to obtain radios and frequencies, and such a lack of communication equipment constrains the building of a sophisticated and well-equipped civil police force. Justice sector capacity is limited. The PA justice sector still lacks sufficient infrastructure, organization, and updated laws, according to PA, U.S., and international documents and officials. Justice-sector infrastructure, such as facilities and courts in each governorate, requires upgrades and improvements by 2011, according to USSC. U.S. and foreign officials told us that to improve justice-sector organization, the PA needs to more clearly define the roles of its government agencies. According to a USSC report, cooperation between the elements of the criminal justice system—the courts, police, and prosecutors—is poor. In addition, USSC reported that the physical separation of government agencies within and between governorates results in poor coordination. USSC further reported that the lack of clarity and consistency in PA laws and the lack of a working legislature also undermine PA civil police and justice-sector capacity.
USSC reported that the PASF, including the civil police, are constrained in their ability to conduct security operations and to detain persons who present a security threat, and that the justice sector is constrained in convicting such persons, because Palestinian laws on some related issues are vague and sometimes contradictory. However, such laws cannot be reviewed and updated until after a new Palestinian Legislative Council is installed. Moreover, a State document noted that the civil police and justice-sector capacity limitations have become a matter of greater concern as it has become apparent that other donors are not providing the civil policing, justice-sector, and other pledged assistance necessary to keep pace with the progress the U.S. security assistance programs are achieving. According to U.S. and PA officials and documents, sustaining the progress they have made with U.S. assistance in the security sector may be difficult unless the lack of capacity in the civil police and justice sectors with which the USSC-supported security forces must operate is addressed. To help address some of these gaps, State officials said State had recently reinforced its role in facilitating coordination among U.S. agencies and international donors on justice-sector issues. Examples of U.S. and international justice-sector reform programs include the INL's Justice Sector Assistance Project, USAID's Rule of Law and Justice Enforcement Program, EUPOL COPPS' civil police and rule of law program, and the Canadian International Development Agency's "Sharaka" Program, as shown in table 3. In fiscal years 2007 through 2010, State allocated approximately $392 million in USSC assistance to support U.S. strategic goals and Roadmap objectives in the Middle East, and has requested $150 million more for fiscal year 2011. Most of this assistance has supported training and equipping new PASF battalions deployed in the West Bank. Although U.S. and international officials said that U.S. security assistance has helped the PA improve security conditions in some areas of the West Bank and is progressing faster than PA civil police and justice sector reforms, State and USSC have not established clear and measurable outcome-based performance indicators to assess and report on the progress of their security assistance. As such, it is difficult for State and the USSC to gauge whether their security assistance programs are helping the PA achieve its Roadmap obligations to undertake security sector transformation and create a right-sized, professional security force. Establishing and tracking outcome-based performance measures in the proposed USSC campaign plan would help inform decisions about the costs, progress, and impact of U.S. security assistance to the PA, particularly given that this assistance is progressing faster than the PA civil police and justice-sector reforms. As State develops the USSC campaign plan for providing security assistance to the PA, we recommend that the Secretary of State establish outcome-based indicators and track them over time. State should define specific program objectives and identify appropriate outcome-based indicators that would demonstrate progress toward achieving those objectives and would enable it to, among other things, weigh the progress made in developing the security forces, civil police, Ministry of Interior, and justice sectors. State provided written comments on a draft of this report, which are reprinted in appendix IV.
State partially concurred with our recommendation that the Secretary of State establish outcome-based indicators and track them over time. State commented that it recognizes the need for such indicators and has tried to develop ones that are meaningful at this stage of the program's development. For example, State mentioned that it has included broad performance measures in the Mission Strategic Plan. INL has also factored performance measures into all of its funding obligating documents. State, however, accepts our point that these measures should be more performance-based. Now that trained and equipped security force units are in place, State anticipates developing meaningful security-related baseline data for measuring the progress of U.S.-sponsored trained units. State further commented that it has already started to do this with the Jenin justice project, whereby the PA will be able to generate comparative data on the number, speed, and success of the cases it prosecutes. In addition, State commented that INL is in the process of crafting a new Letter of Agreement with the Palestinian Authority. This letter is to contain project goals, objectives, and milestones that reflect the program's recent and anticipated future growth in size and complexity. State cautioned that, as we reported, several factors outside of State's control influence progress toward the most meaningful performance-based indicators. State noted that while security assistance provided by the United States can strengthen the capabilities of the Palestinian security forces to operate increasingly in certain areas, the Palestinian Authority will only be able to do so if it and the Government of Israel agree on the direction and pace of this deployment. Ultimately, State added, such an agreement depends on a range of political, economic, and social factors that encompass more than just the enhanced law enforcement and security capabilities U.S. assistance gives the PA security forces. State also provided technical comments that we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of State. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or any of your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. David Gootnick Director, International Affairs and Trade. To describe the nature and extent of U.S. security assistance to the Palestinian Authority since 2007, we reviewed relevant documents and interviewed officials from the Departments of State (State) and Defense (DOD), the Office of the United States Security Coordinator (USSC), and the U.S. Agency for International Development (USAID) in Washington, D.C., in the West Bank, at the U.S. Consulate in Jerusalem, and the U.S. Embassies in Tel Aviv and Amman, Jordan. We also met with PA, Israeli, and Jordanian government and security officials as well as recognized experts in Israeli-Palestinian affairs.
We reviewed State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) budget justifications for fiscal years 2008 through 2010 to determine the levels of International Narcotics and Law Enforcement (INCLE) funding allocated to USSC and INL security assistance programs in the West Bank. We determined that the INCLE funding allocation data was sufficiently reliable for our purposes. To describe the nature and extent of the training programs, we reviewed INL, USSC, and contractor status report documentation and conducted site visits to observe U.S.-sponsored training at the Jordanian International Police Training Center. We reviewed examples of training reports, student surveys, and after action reports used by USSC contractors to review the performance of their trainees during and after every National Security Forces (NSF) training session. To describe the status of USSC programs to equip the NSF, we reviewed equipment delivery lists, contractor statements of work, equipment delivery work orders, and summaries of equipment end use monitoring reports. We interviewed INL, USSC, Palestinian Authority Security Forces (PASF), and Jordanian officials about the status of equipment deliveries. To describe the status of the construction of PASF installations, we reviewed the August 2007 “Framework Agreement” signed between the Secretary of State and the PA Prime Minister as well as INL contract summary data and progress reports; visited construction projects in and around the city of Jericho; and interviewed PA and INL contract staff about project objectives, plans, and funding issues. We assessed the reliability of the data on the battalions trained and equipped by the USSC, and on the infrastructure construction data provided by INL. We did not assess the reliability of the data on the current size and structure of the PASF because we are presenting them for background purposes only. To assess State’s efforts to measure the effectiveness of its security assistance programs, we examined whether its approach identified and applied measurable performance indicators necessary to gauge results—as called for in a number of GAO products listed at the end of this report. These reports state that developing and applying outcome-based performance indicators are (1) a management best practice; (2) one of the key characteristics of effective national security strategy planning, particularly when developing counterterrorism strategies; and (3) a necessary component of developing and executing campaign plans based on these strategies. We also reviewed other GAO reports assessing the extent to which other U.S. assistance projects develop and apply results- based performance indicators. We reviewed the strategic plans for State’s Bureau of Near Eastern Affairs and the mission performance and the mission strategic plans for the U.S. Consulate in Jerusalem for fiscal years 2009 through 2011, as well as the four monthly activity reports the USSC has produced between November 2009 and March 2010. We examined United Nations Office for Coordination of Humanitarian Affairs data and an Israeli Ministry of Defense report for changes in the number of Israeli roadblocks within the West Bank from 2007 to 2009. To determine factors that may affect the implementation of U.S. security assistance to the Palestinian Authority, we analyzed reports, conference presentations, and U.S. government sponsored studies to identify issues that affect U.S. programs. 
We conducted interviews with State, INL, and USSC officials in Washington, D.C., and in the field. We also met with Israeli, PA, Jordanian, and other international officials during our fieldwork in Israel, the West Bank, Jerusalem, and Jordan. To assess logistical constraints, we reviewed relevant UN and State documents on access and movement; INL and USSC documents on the logistics of providing equipment to the PA; and met with USSC officials to discuss challenges in acquiring land for U.S.-funded infrastructure in the West Bank. To illustrate the U.S.-funded equipment approval and delivery process, we developed a schematic representation and identified points at which the process may experience problems, based on discussions with U.S., Israeli, and PA officials. We consulted with INL and USSC officials and incorporated their comments into our representation of the equipment approval and delivery process. To assess the capacity of the PA police and justice sector and its impact on U.S. security assistance, we reviewed documents from and met with USSC, Israeli, PA, and international officials. To examine the pace of U.S. and international assistance to the PA civil police and justice sector, we also reviewed State documents and met with current PA police and justice sector donors, including USAID, INL, and EUPOL COPPS. We conducted our work from July 2009 to May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 4 shows the 24 different specialized courses the USSC has conducted or supported for PASF personnel in the West Bank between mid-2008 and March 2010, and the two additional courses it planned to offer as of March 2010. Table 5 shows the type and quantity of equipment approved for the NSF 3rd and 4th battalions. USSC procures an initial issuance of equipment for battalions trained with U.S. funds at the Jordanian International Police Training Center, adjusting the issuance slightly for each battalion. An average battalion consists of 500 troops. David Gootnick (202) 512-3149 or Gootnickd@gao.gov. Cheryl Goodman, Assistant Director; B. Patrick Hickey; Michael Maslowski; Jillena Roberts; Martin De Alteriis; Mary Moutsos; Reid L. Lowe; and Joseph P. Carney made key contributions to this report. Etana Finkler provided technical support. The following GAO reports discuss how outcome-based performance indicators can be developed and applied as a management best practice: Human Trafficking: Monitoring and Evaluation of International Projects Are Limited, but Experts Suggest Improvements. GAO-07-1034 (Washington, D.C.: July 26, 2007). Security Assistance: State and DOD Need to Assess How the Foreign Military Financing Program for Egypt Achieves U.S. Foreign Policy and Security Goals. GAO-06-437 (Washington, D.C.: April 2006). Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15 (Washington, D.C.: Oct. 21, 2005). Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38 (Washington, D.C.: Mar. 10, 2004).
The following GAO reports describe outcome-based performance indicators as one of the characteristics of effective national security strategy planning: Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals. GAO-06-788 (Washington, D.C.: July 11, 2006). Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T (Washington, D.C.: Feb. 3, 2004). The following reports describe outcome-based performance indicators as a necessary component of campaign planning and execution: Combating Terrorism: The United States Lacks Comprehensive Plan to Destroy the Terrorist Threat and Close the Safe Haven in Pakistan's Federally Administered Tribal Areas. GAO-08-622 (Washington, D.C.: Apr. 17, 2008). Securing, Stabilizing, and Rebuilding Iraq Progress Report: Some Gains Made, Updated Strategy Needed. GAO-08-837 (Washington, D.C.: June 23, 2008).

The 2003 Roadmap for Peace process sponsored by the United States and other nations obligates the Palestinian Authority (PA) and the Government of Israel to undertake security efforts as a necessary precursor for achieving the long-standing objective of establishing a Palestinian state as part of the two-state solution for peace in the Middle East. In 2005, the Department of State (State) created the office of the United States Security Coordinator (USSC) to help the parties meet these obligations. GAO was asked to (1) describe the nature and extent of U.S. security assistance to the PA since 2007; (2) assess State's efforts to measure the effectiveness of its security assistance; and (3) describe factors that may affect the implementation of U.S. security assistance programs. GAO analyzed documents; interviewed officials and regional experts; and conducted fieldwork in Jerusalem, the West Bank, Israel, and Jordan. State has allocated about $392 million to train and equip the PA security forces, oversee construction of related infrastructure projects, and develop the capacity of the PA during fiscal years 2007 through 2010. Of this total, State has allocated (1) more than $160 million to help fund and support training, primarily for the PA's National Security Force (NSF); (2) approximately $89 million to provide nonlethal equipment; (3) about $99 million to renovate or construct several PA installations, including two of the operations camps it plans to provide; and (4) about $22 million to build the capacity of the Interior Ministry and its Strategic Planning Directorate. State also requested $150 million for its programs for fiscal year 2011. Although U.S. and international officials said that U.S. security assistance programs for the PA have helped to improve security conditions in some West Bank areas, State and USSC have not established clear and measurable outcome-based performance indicators to assess progress. Thus, it is difficult to determine how the programs support the achievement of security-related Roadmap obligations. U.S. officials attributed the lack of agreement on such performance indicators to a number of factors, including the relatively early stage of PA plans and capacity for reforming, rebuilding, and sustaining its security forces. Developing outcome-based indicators to measure and manage performance against program goals has been identified by GAO as a good management practice. Such indicators would help USSC provide objective and useful performance information for decision makers.
State and USSC officials noted that they plan to incorporate performance indicators in a USSC campaign plan to be released in mid-2010. The implementation of the U.S. security assistance programs faces logistical constraints largely outside of U.S. control, and these implementation efforts outpace international efforts to develop the limited capacity of the PA police and justice sector. Logistical constraints include restrictions on the movement of USSC personnel in the West Bank, lack of a process to ensure approval and timely delivery of equipment, and difficulties in acquiring suitable land for infrastructure projects. State, the U.S. Agency for International Development (USAID), and other international donors have been assisting with PA civil police and justice-sector reforms, although these efforts are not proceeding at the same pace as the security assistance programs.
We provided the group of schools using the antitrust exemption, the Secretary of Education, and the Attorney General with a copy of our draft report for review and comments. The group of schools using the exemption reviewed a draft of this report and stated it was a careful and objective report, but raised concerns about the data used in our econometric analysis and the report's tone and premise. We believe that the data we used were reliable to support our conclusions. The group of schools using the exemption also provided technical comments, which we incorporated where appropriate. The group's written comments appear in appendix IV. The Department of Education reviewed the report and did not have any comments. The Department of Justice provided technical comments, which we incorporated where appropriate. In the early 1990s, the U.S. Department of Justice (Justice) sued nine universities and colleges, alleging that their practice of collectively making financial aid decisions for students accepted to more than one of their schools restrained trade in violation of the Sherman Act. By consulting about aid policies and aid decisions, through what was known as the Overlap group, the schools made certain that students who were accepted to more than one Overlap school would be expected to contribute the same amount toward their education. Thus, according to Justice, the schools were "fixing the prices" students would be expected to pay. All but one school, Massachusetts Institute of Technology (MIT), settled with Justice out of court, ending the activities of the Overlap group. The District Court ruled that MIT's joint student aid decisions in the Overlap group violated the Sherman Act. On appeal, the Third Circuit Court of Appeals agreed with the District Court that the challenged practices were commercial activity subject to the antitrust laws. However, it reversed the judgment and directed the District Court to more fully consider the procompetitive and noneconomic justifications advanced by MIT during the court proceedings and whether social benefits attributable to the practices could have been achieved by means less restrictive of competition. In recognition of the importance of financial aid in achieving the government's goal of educational access, but also mindful of the importance of antitrust laws in ensuring the benefits of competition, the Congress passed a temporary antitrust exemption. In 1994, Congress extended the exemption and specified the four collective activities in which schools that admit students on a need-blind basis could engage. The exemption was extended most recently in 2001, and is set to expire in 2008. For many students, financial aid is necessary in order to enroll in and complete a postsecondary education. In school year 2004-2005, about $113 billion in grant, loan, and work-study aid was awarded to students from a variety of federal, state, and institutional sources. Need analysis methodologies are used to determine the amount of money a family is expected to contribute toward the cost of college, and schools use this information in determining how much need-based financial aid they will award. For the purposes of awarding federal aid, expected family contribution (EFC) is defined in the Higher Education Act of 1965, as amended, as the household financial resources reported on the Free Application for Federal Student Aid, minus certain expenses and allowances. The student's EFC is then compared to the cost of attendance to determine if the student has financial need. (see fig.
1) While the federal methodology is used to determine a student’s eligibility for federal aid, some institutions use this methodology to award their own institutional aid. Others prefer a methodology developed by the College Board (called the institutional methodology) or their own methodology. Schools that use the institutional methodology require students to complete the College Scholarship Service/Financial Aid PROFILE application and the College Board calculates how much they and their families will be expected to contribute toward their education. Schools that use these alternative methodologies feel they better reflect a family’s ability to pay for college because they consider many more factors of each family’s financial situation than the federal methodology. For example, the institutional methodology includes home and farm equity when calculating a family’s ability to pay for college, while the federal methodology excludes them. See table 1 below for a comparison of the federal methodology to the institutional methodology. Twenty-eight schools formed a group under the antitrust exemption and engaged in one of the four activities allowable under the exemption. School officials believed that the one activity—development of a common methodology for assessing financial need—would help reduce variation in amounts students were expected to pay when accepted to multiple schools and allow students to base their decision on which school to attend on factors other than cost. In developing the common methodology, called the consensus approach, schools modified an existing need analysis methodology and reached agreement on how to treat each element of the methodology. While the schools reached agreement on a methodology, implementation of the methodology among the schools varied. Twenty-eight schools, all of which have need-blind admission policies as required under the law, formed the 568 Presidents’ Group in 1998 with the intent to engage in activities allowed by the antitrust exemption. Members of the group are all private 4-year schools that have highly selective admissions policies. One member school dropped out of the group because the school no longer admitted students on a need-blind basis. (See table 2 below for a list of current and former member schools.) Membership is open to colleges and universities that have need blind admissions policies in accordance with the law. Member schools must (1) sign a certificate of compliance confirming the institution’s need-blind admissions policy and (2) submit a signed memorandum of understanding that indicates willingness to participate in the group and adhere to its guidelines. Additionally, members share in paying the group’s expenses. In addition to the group’s 28 members, 6 schools attended meetings of the group to observe and listen to discussions, but have not become members. In order to attend meetings, observer schools were required to provide a certificate of compliance stating that they had a need-blind admission policy. Observer schools explained that their participation was based on a desire to be aware of what similar schools were thinking in terms of need analysis methodology, as well as have an opportunity to participate in these discussions. Despite these benefits, observer schools said they preferred not to join as members because they did not wish to agree to a common approach to need analysis or they did not want to lose institutional independence. 
Other institutions with need-blind admissions reported that, although eligible to participate in activities allowed by the exemption, they were not interested or not aware of the group formed to use the antitrust exemption. Some told us that they did not understand how students would benefit from the schools’ participation in such activities. Others cited limited funding to make changes to their need analysis methodology and concerns that they would lose the ability to award merit aid to students. Of the four activities allowed under the antitrust exemption, the 28 schools engaged in only one—development of the consensus approach for need analysis. With respect to the other three activities allowed under the exemption, the schools either chose to not engage in the activities or piloted them on a limited basis. For example, three schools in the group attempted to share student-level financial aid data through a third party. However, they reported that because the effort was too burdensome and yielded little useful information, they chose not to continue. The group also expressed little need or interest in creating another common aid application form as such a form already existed. Schools also decided to leave open the option to award aid on a non-need basis. According to the officials representing the 28 schools, the main purpose of the group was to discuss ways to make the financial aid system more understandable to students and their families and commit to developing a common methodology for assessing a family’s ability to pay for college, which they called the consensus approach. Developing an agreed upon common approach to need analysis, according to school officials, might help decrease variation in what families were expected to pay when accepted to multiple schools, allowing students to base their decision on what school to attend on factors other than cost. School officials also believed that agreeing to a common need analysis methodology would produce expected family contributions that were reasonable and fair for families and allow schools to better target need-based aid. The group did not address the composition of a student’s financial aid package; specifically, what combination of grants, loans, or work-study a student would receive. In developing the consensus approach for need analysis, the schools modified elements already in the College Board’s institutional methodology, but member schools agreed to treat these elements the same when calculating a student’s EFC. Some of the modifications that the group made to College Board’s institutional methodology were later incorporated into the institutional methodology. The consensus approach and the institutional methodology similarly treat income from the non- custodial parent, and both account for the number of siblings in college in the same manner when calculating a student’s expected family contribution. However, there are differences in how each methodology treats a family’s home equity and a student’s assets. For example, the institutional methodology uses a family’s entire home equity in its assessment of assets available to pay for college, while the consensus approach limits the amount of home equity that can be included. According to one financial aid officer at a member school, including the full amount of a family’s home equity was unfair to many parents because in some areas of the country the real estate market had risen so rapidly that equity gains inflated a family’s assets. 
Officials representing some member schools stated that adjustments to home equity would likely affect middle and upper income families more than lower income families, who are less likely to own a home. Table 3 below further illustrates the differences and similarities between the consensus approach and the institutional methodology. In addition, under the consensus approach schools agreed to a common calendar for collecting data from families. Members continue to maintain the ability to exercise professional judgment in assessing a family's ability to pay when there are unique or extenuating financial circumstances. Twenty-five of 28 schools implemented the consensus approach; 3 did not. While 13 schools implemented all the elements of the consensus approach, the remaining schools varied in how they implemented the methodology. As shown in table 4 below, seven schools chose not to use the consensus approach method for accounting for family loan debt, home equity, and family and student assets. The 25 schools that implemented the consensus approach did so between 2002 and 2005. Member schools reported that they preferred to use the consensus approach as opposed to other available need analysis methodologies because it was more consistent and fairer than alternative methodologies. Moreover, according to institution officials, they believed the new methodology had not reduced price competition and had resulted in the average student receiving more financial aid. In some cases, if using the consensus approach lowered a student's EFC, the institution would then allocate more money for financial aid than it would have if it had used a different need analysis methodology. For some schools the consensus approach was not that different from the methodology their institution already had in place, but other schools said that fully implementing the consensus approach cost their school more money. Among schools that partially implemented the consensus approach, many explained they did not fully implement the new methodology because it would have been too costly. The cost to attend the schools participating under the exemption rose over the past 5 years by over 10 percent, while costs at all other private schools rose at about half that rate. At the same time, the percentage of students receiving institutional aid increased and institutions increased the amount of such aid they provided students, although at a slower rate than cost increases. During the past 5 years, the cost of attendance—tuition, fees, room, and board—at schools using the exemption increased by approximately 13 percent, from $38,319 in school year 2000-2001 to $43,164 in school year 2004-2005, a faster rate than other schools. For example, at other private 4-year schools there was a 7 percent increase in these costs, from $25,204 to $27,071. Additionally, as figure 2 illustrates, among a set of schools that were comparable to the schools using the exemption, costs increased by 9 percent, from $40,238 to $43,939, over that same time period. Over the same time period, the percentage of students who received any form of institutional grant aid at schools using the exemption increased by 3 percentage points, from 37 to 40 percent, as illustrated by figure 3. Among students receiving institutional grant aid, the percentage of students receiving need-based grant aid increased from 34 to 36 percent from 2000 to 2006. The percentage of students receiving non-need-based grant aid also increased slightly, from 2 to 4 percent.
Non-need-based aid is awarded based on a student’s academic or athletic achievement and includes fellowships, stipends, or scholarships. The majority of schools using the exemption did not offer any non-need-based institutional grant aid in school year 2005-2006. However, in 2005-2006 some schools did, allocating non-need-based grant aid to between 16 to 54 percent of their students. As the cost of attendance and percentage of students receiving institutional aid rose, participating institutions increased the amount of such aid they provided students, although the percentage increases in aid were smaller. As shown in figure 4, the average need-based grant aid award across the schools using the exemption increased from $18,925 to $20,059, or 6 percent. The average amount of non-need-based grant aid awards dropped slightly from $12,760 in 2000-01 to $12,520 in 2005-06, or 2 percent. Overall, the average total institutional grant aid awarded to students, which included both need and non-need-based aid, increased from $18,675 in 2000-01 to $19,901 in 2005-06, or 7 percent. There was virtually no difference in the amounts students and their families were expected to pay between schools using the exemption and similar schools not using the exemption. Average EFC was $27,166 for students accepted at schools using the exemption, and $27,395 for those accepted at comparable schools not using the exemption in school year 2005-2006. Moreover, the variation in the EFC for a student who was accepted to several schools using the exemption was similar to the variation in EFC that same student received from schools not using the exemption. The variation in EFCs for these students was about $6,000 at both sets of schools. Because the number of such students was small, we also analyzed variation in EFCs for students who were accepted only at schools using the exemption and compared it to the variation for students who were only accepted at comparable schools not using the exemption. We found slightly greater variation among EFCs for students who were accepted at schools using the exemption; however, because we could not control for student characteristics, factors external to the exemption could explain this result, such as differences in a family’s income or assets. Although officials from schools using the exemption expected that students accepted at several of those schools would experience less variation in the amounts they were expected to pay, none of our analyses confirmed this. The lack of consistency in EFCs among schools using the exemption may be explained by the varied implementation of the consensus approach. As previously mentioned, not all schools using the consensus approach chose to adopt all the elements of the methodology. For example, seven schools chose not to use the consensus approach to home equity, which uses a percentage of the home equity in calculating the EFC. Using another method for assessing a family’s home equity could significantly affect a student’s EFC. For instance, we estimated that a family residing in Maryland with an income of $120,000 and $350,000 in home equity would have an EFC of $58,243 if a school chose not to implement the home equity option in the consensus approach. Under the consensus approach, the amount of home equity included in asset calculations would be capped and only $38,000 of the home’s equity would be included in the calculation of EFC. The same family would then have an EFC of $42,449 if the school chose to implement the option. 
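To show the direction of the home equity effect described above, the sketch below assumes a stylized marginal assessment rate of 5 percent on assessable assets; this is not the actual consensus approach or institutional methodology formula, but it roughly reproduces the gap between the two EFCs reported for the Maryland family.

```python
# Illustrative sketch only: the real need analysis formulas are more involved.
# A stylized 5 percent marginal assessment rate on assets is assumed here.

ASSET_RATE = 0.05        # assumed share of assessable assets counted toward the EFC
home_equity = 350_000    # the family's full home equity
equity_cap = 38_000      # home equity counted under the consensus approach cap

full_contribution = ASSET_RATE * home_equity     # about $17,500
capped_contribution = ASSET_RATE * equity_cap    # about $1,900

# Capping home equity lowers the asset portion of the EFC by roughly $15,600
# for this family, close to the roughly $15,800 difference reported above.
print(full_contribution - capped_contribution)   # 15600.0
```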
Based on our econometric analysis, schools' use of the consensus approach did not have a significant impact on affordability, nor did it cause significant changes in the likelihood of student enrollment at schools using the consensus approach compared to schools that were not using the consensus approach. As shown in table 5, while we found that the consensus approach resulted in higher need-based grant aid awards for some student groups (middle income, Asian students, and Hispanic students) compared to similar students at schools that were not using the consensus approach, this increase was likely offset by decreases in non-need-based grant aid, such as academic or athletic scholarships. Thus, total grant aid awarded was not affected by the consensus approach because the increase in need-based aid was likely offset by decreases in non-need-based grant aid. A different effect was found when low-income students at schools using the consensus approach were compared to their counterparts at schools not using the consensus approach. As shown in table 5, low-income students at schools using the consensus approach received, on average, a significantly higher amount of total aid—about $12,121, which includes both grants and loans. However, the amount of grant aid that these students received did not significantly change, suggesting that they likely received more aid in the form of loans, which would need to be repaid, or work-study. Our analysis of the effects of the consensus approach on various racial groups showed no effect on affordability for these groups compared to their counterparts at schools not using the consensus approach. While Asian, white, and Hispanic students received more need-based grant aid compared to their counterparts at schools not using the consensus approach, their overall grant aid awards did not change. Finally, as shown in table 5, there were no statistically significant effects of the consensus approach on student enrollment compared to the enrollment of students at schools not using the consensus approach. In particular, the consensus approach did not significantly increase the likelihood of enrollment of low-income or minority students or any other student group. Our econometric analysis has some limitations that could have affected our findings. For example, we could not include all the schools using the consensus approach in our analysis because there were no data available for some of them. However, there were enough similarities (in terms of “best college” ranking, endowment, tuition and fees, and percentage of tenured faculty) between the included and excluded participating schools to allow for a meaningful analysis. (See table 6 for a list of schools included in our analysis.) Moreover, the data for our post-consensus approach period were collected in 2003-2004—the first or second year that some schools were using the consensus approach. Because we have data for only one year after implementation, it is possible that some eventual effects of the consensus approach may not be captured. The effects of using the consensus approach could be gradual, rather than immediate, and therefore may not be captured until later years. By providing an exemption to antitrust laws enabling schools to collaborate on financial aid policies, the Congress hoped that schools would better target aid, making college more affordable for low-income and other underrepresented groups. The exemption has not yet yielded these outcomes.
Nor did our analysis find an increase in prices that some feared would result from increased collaboration among schools. Initial implementation of the approach has been varied; some schools have not fully implemented the need analysis methodology, and many schools are still in the initial years of implementation. As is often the case with new approaches, it may be too soon to fully assess the outcomes from this collaboration. We provided the group of schools using the antitrust exemption, the Secretary of Education, and the Attorney General with a copy of our draft report for review and comments. The group of schools using the exemption provided written comments, which appear in appendix IV. In general, the group stated that our study was a careful and objective report, but raised some concerns about the data used in our econometric analysis and the report's tone and premise. Specifically, they raised concerns about the selection of treatment and control schools for our econometric analysis. As we noted in the report, we selected schools for the treatment and control groups based, in part, on the availability of student-level data in the NPSAS. Some schools that used the consensus approach were not included because there were no data available for them. However, we believe there were enough similarities between the included and excluded schools to allow for a meaningful analysis. The group also stated that a number of conclusions were based on a very small number of observations. In appendix II, we acknowledge the small sample size of the data could make the estimates less precise, especially for some of the subgroups of students we considered. However, we performed checks to ensure that our estimates were reliable and believe that we can draw conclusions from our analysis. With respect to the tone and premise of the report, the group raised concerns about using low-income students as “a yardstick for judging the success of the Consensus Approach.” When passing the exemption, Congress hoped that it would further the government's goal of promoting equal access to educational opportunities for students. Need-based grant aid is one way to make college more affordable for the neediest students to help them access a postsecondary education. The group also highlighted several positive outcomes from their collaboration, including a more transparent aid system and more engagement by college presidents in aid-related discussions, topics which our study was not designed to address. The group provided technical comments, which we incorporated where appropriate. Education reviewed the report and did not have any comments. The Department of Justice provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Education, the Attorney General, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix VI. We compared variation in expected family contributions (EFCs) between students who were admitted both to schools using the exemption and to comparable schools that did not use it.
We collected data on student EFCs from 27 of the 28 schools using the exemption and 55 schools that had similar selectivity and rankings as schools using the exemption. The data included the student's EFC calculated by the schools as of April 1, 2006, based on their need analysis methodology. We determined that these data would most likely reflect the school's first EFC determination for a student and thus would be best for comparison purposes. We then matched students across both sets of schools to identify students accepted to more than one school (which we call cross-admits). Our sample consisted of data for the following three types of cross-admit students:
1. Students accepted to several schools using the exemption and several schools that were not (type 1 students);
2. Students accepted to only schools using the exemption (type 2 students); and
3. Students accepted to only schools not using the exemption (type 3 students).
Data from the type 1 sample provided the most suitable data for our analysis because it controlled for student characteristics. However, because this sample was relatively small, we used the other samples to supplement the analysis. Once the cross-admits were identified, the EFCs for each student were used to evaluate the mean and median as measures of location and the standard deviation and range as measures of variation. Given the potential scale factor, the variation measures were standardized. The standard deviation was standardized by dividing it by the mean, and the range was standardized by dividing it by the median. The two resulting variation measures were the coefficient of variation (V1) and its robust counterpart (V2), respectively. These two measures of variation were estimated for every student. The estimates were grouped for both sets of schools. We labeled schools using the exemption as “568 schools” and comparable schools that were not as “non-568 schools.” Table 7 reports various estimates averaged over students in each group. The table generally shows similar group averages for the mean, standard deviation, median, and range that were used to compute V1 and V2. The values reported are the averages for all the students in each group. There are fewer observations for the 568 schools than for the non-568 schools, except for type 1 students, where the number of observations was equal because the students were in both groups of colleges. In addition, we imposed the following three conditions: First, for the coefficient of variation V1, we excluded all observations where the standard deviations were zero. The zero standard deviations are excluded because some of the non-568 schools that use only the federal methodology to calculate EFCs report the same EFCs for a student and are likely to bias the results. None of the observations with zero standard deviations that we excluded involved a 568 school. Second, for the coefficient of variation V2, we excluded all observations where the medians were zero because we could not construct this measure, which is obtained by dividing the range by the median. And, third, for the coefficient of variation V2, we excluded observations where the standardized variation exceeded 3, based on the observed distributions of the data. The test results were similar when none of those conditions were imposed.
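A minimal sketch of the standardized variation measures just defined, and of the two-sample Kolmogorov-Smirnov comparison discussed next, is shown below; the per-student EFC offers and the pooled V1 values are invented for illustration.

```python
# Sketch of the two standardized variation measures and the two-sample
# Kolmogorov-Smirnov comparison; all numbers are hypothetical.
import statistics
from scipy.stats import ks_2samp

def variation_measures(efcs):
    """Return (V1, V2): standard deviation / mean and range / median of one student's EFCs."""
    v1 = statistics.stdev(efcs) / statistics.mean(efcs)
    v2 = (max(efcs) - min(efcs)) / statistics.median(efcs)
    return v1, v2

print(variation_measures([24_500, 27_000, 30_200]))  # one cross-admitted student

# Pooling the per-student V1 values for each group of schools, the test asks
# whether the two empirical distributions could be the same.
v1_568 = [0.08, 0.12, 0.05, 0.20, 0.11]
v1_non568 = [0.07, 0.15, 0.04, 0.18, 0.10]
print(ks_2samp(v1_568, v1_non568))
```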
Denoting the estimates of V1 for the two groups by V1(568) and V1(non-568), the empirical distribution of V1(568) was then compared with the empirical distribution of V1(non-568) to test whether the two had identical distributions (that is, whether the EFCs for 568 schools were similar in variation to those for non-568 schools). A similar comparison was made using the robust measures V2(568) and V2(non-568). To more closely examine the difference between the variations in EFCs of cross-admit students for 568 and non-568 schools, we performed the Kolmogorov-Smirnov test. The test examines whether the distributions of the variation measures were the same. The same analysis was done for the V2 measures. The test was reported for both samples, consisting of type 1 students and all students. The results reported in table 8 suggest that there was no difference in EFC variations across the two groups, using type 1 students. The results using all students, however, are inconclusive for the V1 estimate, but suggest that non-568 schools have smaller EFC variation for the V2 estimate. The results based on the type 1 sample are more useful as a stand-alone descriptive finding, because this sample controls for student characteristics. The finding based on the combined data requires further analysis to control for student characteristics that we were unable to perform due to data limitations. We used the KSMIRNOV command in Stata to perform the tests. Notes to table 8: Coefficient of variation 2 (V2) equals range divided by median. “All” means students with multiple offers from 568 schools as well as offers from non-568 schools (type 1), students with multiple offers from only 568 schools (type 2), and students with multiple offers from only non-568 schools (type 3). The p-values are for the Kolmogorov-Smirnov tests of equality of distribution functions. All tests are interpreted using the 5 percent or lower level of significance. N1 is the sample size for coefficient of variation 1 (V1), and N2 is the sample size for coefficient of variation 2 (V2). To estimate the effects of schools' implementation of the consensus approach to need analysis on affordability (measured by price) and enrollment of freshman students, we developed econometric models. This appendix provides information on theories of the exemption's effects on student financial aid, the data sources for our analyses and selection of control schools, specifications of econometric models and estimation methodology, our econometric results, and limitations of our analysis. Two theories exist about the effects of the consensus approach on student financial aid. It is important to note that the award of grant aid represents a discount from the nominal “list price,” which lowers the price students actually pay for college. So, any decision to limit grant aid would be an agreement to limit discounts to the list price, and thus may raise the price some students would pay. It is also important to note that schools admit only a limited number of students. One of the theories suggests that allowing schools a limited degree of collaboration could reduce the variation in financial need determination for an individual student and reduce price competition among colleges vying for the same students. While the reduced competition would imply lower financial aid (hence higher prices) for some students, schools could thus devote more financial aid resources to providing access to other students, especially disadvantaged students.
This “social benefit theory” assumes that under these conditions disadvantaged students would receive more grant aid and, as a result, pay less for school. Also, an implicit assumption of this theory is that the exemption would essentially result in redistribution of financial aid without necessarily changing the amount of financial aid resources available. Moreover, because costs to students and their families would change for some students, enrollment of such students would be affected. An opposing theory is that the exemption will allow schools to coordinate on prices and reduce competition. This “anti-competitive theory” essentially views coordination by the group as restraining competition. Specifically, under this theory, allowing an exemption would result in less grant aid and higher prices on average, especially for students that schools competed over by offering discounts on the list price. As a result, the amount of financial aid available to some students would likely decrease. If prices are higher on average, it could cause a decrease in enrollment, particularly of disadvantaged students since they would be less able to afford the higher prices. Our analyses allowed us to test these two theories with the data available. To construct our model, we used data from: National Postsecondary Student Aid Study (NPSAS): These data, available at the student level, served as the primary source for our study because we were interested in student outcomes of the exemption. Data were published every 4 years during the period relevant to our study; hence, we have data for academic years 1995-1996, 1999-2000, and 2003-2004. The data contained student-level information for all freshmen enrollees in the database, including enrollment in school, cost of attendance, financial aid, Scholastic Aptitude Test (SAT) scores, household income, and race. The number of freshmen in the database for our study was 1,626 in 1995-1996, 272 in 1999-2000, and 842 in 2003-2004. Integrated Postsecondary Education Data System (IPEDS): These data, available at the school level, included tuition and fees, faculty characteristics, and student enrollment for 1995-1996 and 2003-2004; no data were published for 1999-2000. However, some of the data for 1999-2000 were reported in the subsequent publications. We were able to construct some data for 1999-2000 through linear interpolation of the data for 1998-1999 and 2000-2001, or using the data for either year depending on availability; we believed this was reasonable because data for these institutions did not vary much over time. National Association of College and University Business Officers (NACUBO): This source provided data on school endowment from 1992 through 2004. GAO Survey: The survey collected data on the activities of the schools using the higher education antitrust exemption, including when schools implemented the consensus approach methodology. Determining the effects of the exemption required both a treatment group (schools using the exemption) and a control group (a comparable set of schools that did not use the exemption). To find a comparable set of schools, we used data on school rankings based on their selectivity for 1994 through 2004 from U.S. News and World Report (USNWR). We selected control schools similar to schools using the antitrust exemption that had comparable student selectivity and quality of education using the “best schools” rankings information in the USNWR.
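The interpolation step for the missing 1999-2000 IPEDS values can be illustrated with a minimal sketch; the tuition figures below are hypothetical.

```python
# Sketch of the interpolation described above for missing 1999-2000 values,
# using a hypothetical tuition series for one school.

def interpolate_1999_2000(value_1998_1999: float, value_2000_2001: float) -> float:
    """Linear interpolation midway between the adjacent school years."""
    return (value_1998_1999 + value_2000_2001) / 2.0

print(interpolate_1999_2000(23_400, 25_000))  # 24200.0
```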
The combined control and treatment schools were matched to school-level data from IPEDS, and student-level data from NPSAS. We selected the control schools based on their ranks in the years prior to the implementation of the consensus approach—1995-1996 and 1999-2000—and after the implementation of the consensus approach—2003-2004. The USNWR published its “best schools” rankings annually in August or September. Thus, the 2004 publication reflected the selectivity of the schools during 2003-2004. However, because publications in prior years—2002 and 2003—provided relevant information to students who enrolled in 2003-2004, we considered the rankings published from 2002 through 2004 as important input into decisions made by students and the schools for 2003-2004. Similarly, the publications from 1994 through 1996 were used to determine the selectivity of the schools in 1995-1996, and the publications from 1998 to 2000 were used to determine school selectivity for 1999-2000. The USNWR published separate rankings for liberal arts schools and national universities. The schools using or affiliated with the exemption consisted of 28 current members, two former members, and six observers. These 36 schools comprised the treatment schools used initially to select the comparable control schools. All 36 treatment schools were private; 13 were liberal arts schools and 23 were national universities. To ensure there were enough control schools for the treatment schools, we initially selected all the schools ranked in tier 1 (and tier 2 when available) in the USNWR rankings for each of the two types of institutions—liberal arts schools and national universities. This resulted in 250 schools, including all 36 treatment schools, for nine selected years (1994 to 1996, 1998 to 2000, and 2002 to 2004). All the treatment schools were ranked in each of the nine years (except for one school that was not ranked in 2002). The initial list of 250 schools was refined further to ensure a proper match in selectivity between the treatments and controls. Although we were interested in obtaining an adequate number of control schools to match the treatment schools, we refined the selection process to ensure they were comparable using the following conditions. First, we limited the selection of all the schools (controls and treatments) to those that were ranked in tier 1. This reduced the sample of schools from 250 to 106 schools, comprising all 36 treatment schools and 70 control schools. Second, the list of 106 schools was used to match school-level data from the IPEDS in each of the three academic years. Third, these data were then matched with the IPEDS data for each of the three academic years to student-level data from NPSAS. From the NPSAS, we selected data for cohorts who entered their freshmen year in each of the three academic years. Fourth, since we used a difference-in-difference methodology for the analysis, we wanted data for each school in at least two of the three academic years—one in the pre-treatment and one in the post-treatment period. We therefore initially constructed four samples of schools, depending on whether there were matches between all three academic years, or between any two of the three academic years. This resulted in 30 schools with data in all three academic years 1995-1996, 1999-2000, and 2003-2004 (referred to as sample 1). 
There were 34 schools with data in 1995-1996 and 2003-2004 (sample 2); 35 schools with data in 1999-2000 and 2003-2004 (sample 3); and 37 schools matched between 1995-1996 and 1999-2000 (sample 4). Finally, we limited the selection to private schools because all of the treatment schools are private. We did this because the governance of private schools generally differed from state-controlled public schools and these differences were likely to affect affordability and enrollment at a school. We also determined the academic year(s) data that would be used to represent the period before and the period after the implementation of the consensus approach. Since we had data for only 1995-1996, 1999-2000, and 2003-2004, and given that the consensus approach was implemented in 2003-2004 (or in the prior year by some schools), we selected 1995-1996 as the pre-consensus approach period and 2003-2004 as the post-consensus approach period. Although the 1999-2000 data were relatively current for the pre-consensus approach period, it is possible that the 1999-2000 data may offer neither strong pre- nor post-consensus approach information since the period was very close to the formation of the 568 Presidents' Group in 1998. Furthermore, the institutional methodology, which is a foundation for the consensus approach and used by some of the control schools in 2003-2004, was revised in 1999. We therefore investigated whether it was appropriate to include 1999-2000 in the pre-consensus approach period or in the post-consensus approach period. We also investigated in which group (control or treatment) the schools that only attended the 568 Presidents' Group meetings, but had not become members of the group or implemented the consensus approach, belonged. Using the Chow test for pooling data, we determined that 1999-2000 should be excluded from the pre-consensus approach period as well as from the post-consensus approach period. We also determined that schools that only attended the 568 Presidents' Group meetings could not be regarded as control schools or treatment schools in analyzing the effects of the consensus approach. Therefore, the treatment schools consisted of the group members that implemented the consensus approach, and the control schools consisted of the schools that were not members of the 568 Group and did not attend its meetings. Based on the analysis above, we used the data in sample 2, which excluded data collected in 1999-2000, for our baseline model analysis; the period before the consensus approach is 1995-1996 and the period after is 2003-2004; the control schools that did not use the consensus approach (non-CA schools) are Brandeis University, Bryn Mawr College, New York University, Princeton University, Tulane University, University of Rochester, and Washington University at St. Louis, and the treatment schools that used the consensus approach (CA schools) are Cornell University, Duke University, Georgetown University, University of Notre Dame, Vanderbilt University, Wake Forest University, and Yale University. The complete list of the schools is in table 9. We developed models for analyzing the effects of the implementation of the consensus approach (CA) on affordability and enrollment of incoming freshmen. We used a difference-in-difference approach to identify the effects of implementation of the consensus approach. This approach controlled for two potential sources of changes in school practices that were independent of the consensus approach.
First, this approach enabled us to control for variation in the actions of schools over time that were independent of the consensus approach. Having control schools that never implemented the consensus approach allowed us to isolate the effects of the exemption and permitted us to estimate changes over time that were independent of the consensus approach implementation. Second, while we had a control group of schools that did not use the consensus approach, but were otherwise very similar to treatment schools, it is possible that schools using the consensus approach differed in ways that would make them more likely to implement practices that are different from those of other schools. The difference-in-difference approach controlled for this possibility by including data on schools using the consensus approach both before and after its adoption. Controlling then for time effects independent of the consensus approach as well as practices by these schools before adoption, the effect of the use of the consensus approach could be estimated. Compared to the schools that did not use the consensus approach, we expected that the implementation of the consensus approach would have a significantly greater impact on the schools using the consensus approach because its use has potential implications for affordability and enrollment of students in these schools. The basic tenets of financial need analysis are that parents and students should contribute to the student’s education according to their ability to pay. The CA schools used the consensus approach for its need analysis methodology and to determine the expected family contribution (EFC) for each student based on that methodology. Conversely, the non-CA schools primarily used a need analysis methodology called the institutional methodology (IM). The difference between the cost of attendance (COA) and the EFC determines whether a student has financial need. If so, the school then develops a financial aid package of grants, loans, and work study from various sources. The actual amount that students and families pay depends on how much of the aid received is grant aid. Therefore, the implementation of the consensus approach was expected to affect the price paid and the financial aid received by students, and by implication, their enrollment into schools. The study examined the effects of the implementation of the consensus approach on two key variables: affordability (measured by price) and enrollment of freshman. We also estimated other equations to provide further insights on affordability— tuition, total grant aid, need-based grant aid, and total aid. All the dependent variables were measured at the student level, except tuition. Also, all monetary values were adjusted for inflation using the consumer price index (CPI) in 2005 prices. The dependent variables were defined as follows: Price (PRICEijt): Price, in dollars, actually paid by freshman i who enrolled in school j in an academic year t. The variable was measured as the cost of attendance less total grant aid. The cost of attendance consisted of tuition and fees, on-campus room and board, books and supplies, and other expenses such as transportation. Total grant aid consisted of institutional and non-institutional grant aid; it excluded self-help aid (loans and work study). 
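To make the price variable concrete, the sketch below shows the net price construction just described, with hypothetical dollar amounts and an assumed CPI adjustment factor to 2005 dollars.

```python
# Sketch of the price (net price) variable described above, with hypothetical
# dollar amounts and an assumed CPI adjustment factor to 2005 dollars.

def net_price(cost_of_attendance: float, total_grant_aid: float,
              cpi_factor_to_2005: float = 1.0) -> float:
    """Cost of attendance less all grant aid (loans and work-study excluded),
    expressed in 2005 dollars."""
    return (cost_of_attendance - total_grant_aid) * cpi_factor_to_2005

# A 1995-1996 freshman with a $28,000 cost of attendance and $9,000 in grants,
# using an illustrative adjustment factor of 1.27 to express the result in 2005 dollars:
print(net_price(28_000, 9_000, 1.27))  # 24130.0
```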
The other dependent variables that we estimated to help provide more insights into the results for affordability were: Tuition (TUITIONijt): The amount of tuition and fees, in dollars, charged by school j to freshman i who enrolled in an academic year t. Total grant aid (AIDTGRTijt): The amount of total grant aid received, in dollars, by a freshman i who enrolled in school j in an academic year t. The counterpart to grant aid was self-help aid. Need-based grant aid (AIDNDTGRTijt): The amount of need-based grant aid received, in dollars, by freshman i who enrolled in school j in an academic year t. The counterpart to need-based aid was non-need-based aid, which consisted mainly of merit aid. Total aid package (AIDTOTAMTijt): The amount of total aid received, in dollars, by freshman i who enrolled in school j in an academic year t. The total aid consisted of total grants (from the school, the various levels of government—federal, state—and other sources) and self-help (includes loans and work-study). Student enrollment (ENRCAijt): An indicator variable for student enrollment into a CA school. It equals one if a freshman i enrolled in an academic year t in school j that was a school using, or that later used, the consensus approach, and zero otherwise. Thus, at t=0 (1995-1996), a school was designated as a CA school if it implemented the consensus approach in period t=1 (2003-2004). Students who enrolled in a non-CA school were assigned a value of zero. In other words, ENRCA takes a value of one for every student enrolled in a CA school in any time period (1995-1996 or 2003-2004), and zero otherwise. Several variables could potentially affect each of the dependent variables identified above. The explanatory variables we used were based on economic reasoning, previous studies, and data availability. All the equations used were in quasi reduced-form specifications. The key explanatory variable of interest was the exercise of the exemption through the implementation of the consensus approach by the 568 Group of schools. We were also interested in the effects of the implementation of the consensus approach on affordability and enrollment of disadvantaged students. In order to isolate the relationships between the consensus approach implementation and each of the dependent variables, we controlled for the potential effects of other explanatory variables. The following is a complete list of all the explanatory variables we used: The exemption was captured by the implementation of the consensus approach by a school (EMCAjt). EMCA equals one if school j had implemented the consensus approach by academic year t, where t is 2003-2004, and zero otherwise. This variable was constructed from the GAO survey of the CA and non-CA schools. We used other explanatory variables in our equations, in addition to the exemption indicator for the implementation of the consensus approach. These variables included school-level characteristics, school specific fixed-effects, time specific fixed-effects, and student-level characteristics. The school-level variables generally captured the resources schools had available for financial aid, or the preferences of the students. The variables used were: ENDOWSTUjt: The interaction between the 3-year average endowment per student and the 3-year average percentage rate of return on endowment per student at school j for an academic year t. The inclusion of the rate of returns from endowments helped minimize the possibility that developments in financial markets could bias the results, especially if the average endowment per student differed across the two groups of schools.
RANKAVGjt: The average “best schools” rank of school j for an academic year t. Although we used this variable to select the control schools that were comparable in selectivity to the treatment schools before matching the data to the NPSAS data, this variable was included, due to data limitations, to control for the possibility that the two groups of schools used in the sample may differ in selectivity. ENROLUGjt: The 3-year average growth rate (in decimals) of undergraduate enrollment at school j for an academic year t. TENUREDjt: The percentage (in decimals) of total faculty at school j that was tenured in an academic year t. These variables captured differences over time that did not vary across the schools, such as increases in national income that could increase affordability of schools. This was an indicator variable for the academic years (time): AY1995: Equals one for the academic year 1995-1996, and zero otherwise AY2003: Equals one for the academic year 2003-2004, and zero otherwise. All the student-level variables or attributes generally varied across students (i), across schools (j), and across time (t). The student characteristics indicated the preferences of the students for a school as well as the decisions of the schools regarding the students they admitted. The variables used were: FINAIDijt: Equals one if a freshman i who enrolled in school j in an academic year t applied for financial aid, and zero otherwise. RACE: Equals one if a freshman i who enrolled in school j in an academic year t is: Asian—ASIANijt, and zero otherwise. Black—BLACKijt, and zero otherwise. Hispanic—HISPANICijt, and zero otherwise. White—WHITEijt, and zero otherwise. Foreigner—FOREIGNijt, and zero otherwise. None of the above—OTHERijt, and zero otherwise. INCOME: Equals one for a freshman i who enrolled in school j in an academic year t has household income in the following quintiles: INCLOijt: Below or equal to the 20th percentile, and zero otherwise. These were low-income students, and the median income for the group was $13,731 in 2005 dollars. INCLOMDijt: Above the 20th and below or equal to the 40th percentile, and zero otherwise. These were lower-middle income students, and the median income for the group was $40,498 in 2005 dollars. INCMDijt: Above the 40th and below or equal to the 60th percentile, and zero otherwise. These were middle-income students, and the median income for the group was $59,739 in 2005 dollars. INCUPMDijt: Above the 60th and below or equal to the 80th percentile, and zero otherwise. These were upper-middle income students, and the median income for the group was $88,090 in 2005 dollars. INCHIijt: Above the 80th percentile, and zero otherwise. These were high-income students, and the median income for the group was $145,912 in 2005 dollars. Since we included minority students (Asian, black, and Hispanic students) as well as lower income groups (low income and lower-middle income students) to measure needy students, the minority variables likely captured nonincome effects. EFCijt: Expected family contribution for a freshman i who enrolled in school j in an academic year t. Although this variable captured the income of the students, it also reflected other factors that affect financial aid, such as the number of siblings in college. SCORESATijt: The combined scholastic aptitude test (SAT) scores for math and verbal of freshman i who enrolled in school j in an academic year t. 
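As one illustration of how the student-level income indicators above could be built, the sketch below uses pandas with a hypothetical income column; the quintile labels follow the report's variable names, but the data are invented.

```python
# Sketch of constructing the income-quintile indicators (INCLO ... INCHI)
# from a hypothetical column of 2005-dollar household incomes.
import pandas as pd

students = pd.DataFrame({"income": [13_000, 41_000, 60_000, 88_000, 150_000, 72_000]})

labels = ["INCLO", "INCLOMD", "INCMD", "INCUPMD", "INCHI"]
students["quintile"] = pd.qcut(students["income"], q=5, labels=labels)

# One 0/1 indicator per quintile, mirroring the report's income variables.
students = pd.concat([students, pd.get_dummies(students["quintile"])], axis=1)
print(students.head())
```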
Tables 10 and 11 show summary statistics for the variables listed above for treatment and control schools in sample 2 (as listed in table 9). In general, the values of the variables were similar between the two groups of schools. Table 12 shows summary statistics on price and financial aid before and after the implementation of the consensus approach in 2003-04 at the CA and non-CA schools in sample 2. Similarly, table 13 shows the summary statistics by income and racial groups. It is important to note that the summary information on the observed differences before and after the implementation of the consensus approach for the CA and non-CA schools is heuristic and does not conclusively determine the potential effects of the implementation of the consensus approach. It is also important to note that, for any given variable, it is possible that there are factors other than implementing the consensus approach that are responsible for the observed differences, including differences between CA and non-CA schools' student populations or differences in the characteristics of the schools, or both. For instance, the price paid by middle-income students increased more in CA than in non-CA schools. While this may reflect the effect of the consensus approach, it is possible that other factors are responsible for the differences. For example, the racial composition of middle-income students might also be different between the two groups, or there may be systematic differences in endowment growth between the CA and non-CA schools that affect financial aid to middle-income students. Thus, to assess the effect of the consensus approach, it is necessary to study the effects of the consensus approach while controlling simultaneously for all factors that influence price and aid policies. Our econometric analysis is based on panel data, which pooled cross-sectional and time series data. The cross-sectional data were based on freshmen who enrolled in CA schools and non-CA schools, and the time series data were for academic years 1995-1996 and 2003-2004. Where feasible, we used panel-data estimation appropriate for cross-sectional and time series data. Also, we used fixed-effects estimation instead of random-effects estimation because the observations were not randomly chosen and there were likely to be unobserved school-specific effects. The reported estimates were based on the fixed-effects estimators, using probability weights, and the standard errors were robust. Price, Tuition, and Financial Aid Equations: Let Yijt be the dependent variable for freshman i's outcomes at the chosen school j in academic year t, where the main outcome variable studied is affordability represented by price (PRICEijt). The regression equations were specified generally as follows:

Yijt = α + δ·EMCAjt + β'Ijt + η'(EMCAjt × Ijt) + ρ'Sijt + γ'(EMCAjt × Sijt) + ψt + θj + εijt (1)

where I and S are vectors of school (institution)-level and student-level variables, and EMCA represents the consensus approach implementation; ψ (time specific fixed-effects) and θ (school specific fixed-effects) are scalar parameters, and α and ε are the constant and the random error terms, respectively. There are interactions between EMCA and the school-level variables and between EMCA and the student-level variables. We were primarily interested in the total effects of the implementation of the consensus approach on affordability, as well as the effects that were specific to particular groups of students, such as low-income and minority students, and students who applied for financial aid.
Using equation 1, the total effect of the consensus approach implementation on price was δˆ + ηˆ'I + γˆ'S, where I and S here denote the averages of the school-level and student-level variables taken over the observations for the CA schools during the period of the consensus approach implementation (2003-2004). This measures the effect of the consensus approach implementation on CA schools, relative to non-CA schools, controlling for time invariant differences in schools and other variations over time that are common to both groups. The coefficient δˆ measures the unconditional effect of the consensus approach implementation on price, while ηˆ and γˆ measure the conditional effects of the consensus approach implementation on price through the school-level variables and student-level variables, respectively. The expression for the total effects of the consensus approach implementation can be evaluated for particular groups of students by averaging I and S over that particular subset of students. For example, the effects of the consensus approach implementation on prices paid by low-income (INCLO) students can be estimated by δˆ + ηˆ'I + γˆ'S, where the school-level and student-level variables are averaged over the low-income students. More specifically, the second term is the coefficient estimate of each school-level variable multiplied by the school-level variable averaged over the subset of low-income (INCLO) students attending CA schools after the consensus approach implementation; similarly, the average is taken for the third term, which is for the student-level variables. Alternatively, we can use equation 1 to illustrate the effects of the consensus approach implementation for particular groups. Consider a simple example in which there are two student characteristics: Fijt is an indicator variable equal to one if the student is a financial aid applicant and zero otherwise, and Aijt is an indicator equal to one if the student is black, and zero otherwise. Then, using equation 1, the equation for this example is:

PRICEijt = α + δ·EMCAjt + β'Ijt + η'(EMCAjt × Ijt) + ρF·Fijt + ρA·Aijt + γF·(EMCAjt × Fijt) + γA·(EMCAjt × Aijt) + ψt + θj + εijt (1.1)

Now consider a white student who is a financial aid applicant in school j at time t. The predicted price for a white student if j is a CA school is:

PRICEijt = αˆ + δˆ + βˆ'Ijt + ηˆ'Ijt + ρˆF + γˆF + ψˆt + θˆj (1.2)

and the predicted price if j is not a CA school is:

PRICEijt = αˆ + βˆ'Ijt + ρˆF + ψˆt + θˆj (1.3)

The effect of the consensus approach implementation for a financial aid applicant at school j is then the difference between equations 1.2 and 1.3, which is:

δˆ + ηˆ'Ijt + γˆF (1.4)

The coefficient δˆ measures the effect of adopting the consensus approach that is invariant across school and student type, the term ηˆ'Ijt captures the differential effect of adopting the consensus approach for a school with characteristics Ijt, and the third term, γˆF, captures the differential effect of adopting the consensus approach for a white student who is a financial aid applicant. Repeating the exercise above for a black student who is a financial aid applicant, the predicted effect of adopting the consensus approach would be:

δˆ + ηˆ'Ijt + γˆF + γˆA (1.5)

The first three terms in equation 1.5 are the same as in equation 1.4, while the fourth term captures the differential effect of the consensus approach implementation for a black student. In this example, then, the estimated effect of the consensus approach implementation on financial aid students would be the weighted average of the terms in equations 1.4 or 1.5, with weights corresponding to the proportions of white and black financial-aid students across all schools j that adopted the consensus approach at time t, respectively.
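A minimal sketch of how equation 1 and the total-effect calculation could be estimated appears below; the variable names follow the report, but the simulated data, weights, and reduced set of controls are invented for illustration and are not the report's actual estimation code.

```python
# Sketch of a probability-weighted regression with EMCA interactions, school and
# year fixed effects, and robust standard errors; data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "EMCA": rng.integers(0, 2, n),        # CA school in the post period
    "ENDOWSTU": rng.normal(250, 80, n),   # endowment per student ($ thousands, invented)
    "INCLO": rng.integers(0, 2, n),       # low-income indicator
    "AY2003": rng.integers(0, 2, n),      # time fixed effect
    "school": rng.integers(0, 14, n),     # school fixed effects
    "weight": rng.uniform(0.5, 1.5, n),   # probability weights
})
df["PRICE"] = (30_000 - 20 * df["ENDOWSTU"] - 4_000 * df["INCLO"]
               + 500 * df["EMCA"] + rng.normal(0, 2_000, n))

model = smf.wls(
    "PRICE ~ EMCA * (ENDOWSTU + INCLO) + AY2003 + C(school)",
    data=df, weights=df["weight"],
).fit(cov_type="HC1")

# Total effect for low-income students at CA schools in the post period:
# delta-hat + eta-hat * mean(ENDOWSTU) + gamma-hat (evaluated at INCLO = 1).
sub = df[(df["EMCA"] == 1) & (df["INCLO"] == 1)]
total_effect = (model.params["EMCA"]
                + model.params["EMCA:ENDOWSTU"] * sub["ENDOWSTU"].mean()
                + model.params["EMCA:INCLO"])
print(total_effect)
```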
Another estimate of the consensus approach's effect on a particular group is the estimated differential effect on a group, given by γˆ, holding everything else constant. For example, one can ask how a low-income student as compared to a high-income student would be affected by the consensus approach implementation, assuming all other characteristics of the student and the student's school are held constant. This estimated effect is simply given by the element of the vector γˆ that corresponds to INCLO. This differs from the total effect of the consensus approach implementation discussed above by taking as given the consensus approach implementation, and by abstracting from the likelihood that low-income students will have other characteristics and attend different schools than non-low-income students. We will also discuss the coefficient ρˆ, which captures the value of the dependent variable for the particular group in both CA and non-CA schools before the consensus approach implementation, where necessary. The total effect of the exemption on price as well as its specific effects on particular groups will depend on which theory of the exemption is supported by the data. In particular, we expect price to be lower for disadvantaged students if the social benefit theory is valid; on the other hand, price will increase if the anti-competitive theory is valid. Similarly, the effects of the student-level variables would depend on the theories of the effects of the exemption. For the effects of the school-level variables, ENDOWSTU should be negative because with more resources there is less need to raise tuition and there will be more funds for grant aid. RANKAVG should be negative because as the quality of the school decreases, tuition as well as grant aid should decrease. ENROLUG would be negative if higher growth in student enrollment perhaps means more revenues and less need to raise tuition. On the other hand, if students' education is on net subsidized by other sources of school income, then ENROLUG would be positive as increased enrollment increases the costs to the school of providing education. And TENURED should be positive if more tenured faculty implies higher quality. We estimated equation 1 for price, as well as for tuition and the financial aid variables, using probability-weighted regression and robust standard errors, as well as the fixed-effects estimator for panel data. See the regression estimates for price and tuition in table 14, and those for the financial aid variables in table 15. The regression models for the price, tuition, and financial aid variables are all highly significant using the F-values of the models. See tables 14 and 15. Furthermore, the school-level variables generally have the expected effects. In particular, for the price equation, a student enrolled in a school with an endowment per student (ENDOWSTU) of $250,000 paid a price about $5,000 lower. Also, a student paid about $464 less for a school with a unit drop in its selectivity (RANKAVG). Although the effect is not significant, the positive sign for ENROLUG suggests that an increase in enrollment growth may result in a higher price paid, implying that education is net subsidized and increases in enrollment increase the cost of providing education; and vice versa. Finally, a student enrolled in a school with a 10 percent higher share of tenured faculty (TENURED) paid about $3,310 more.
As discussed earlier, the effects of the student-level variables depend on which theory of the effects of the higher education exemption is relevant. Enrollment Equation: The likelihood that freshman i enrolled in a CA school j in academic year t was specified as a probit model:

Pr(ENRCAijt = 1) = Φ(α + ρ'Sijt + λ·AY2003t + γ'(AY2003t × Sijt)) (2)

where Φ is the standard normal cumulative probability distribution function. Similar to equation 1, equation 2 includes student characteristics (with coefficients ρ), time fixed-effects captured by AY2003, and the interaction of the time variable AY2003 with student characteristics (with coefficients γ). The time specific fixed-effect for AY2003 captures any shift, which is constant across students, toward or away from the CA schools, after the consensus approach implementation, while the interaction terms between AY2003 and the student characteristics capture shifts toward or away from the CA schools by students with specific characteristics. The marginal effect of each explanatory variable on the probability of enrollment is computed using φ(·), the standard normal probability density function. It should be noted that if AY2003 affects the probability of enrollment in CA schools, it would provide valuable suggestive evidence about the potential impact of the consensus approach implementation. However, it would not establish that the consensus approach implementation caused the shift. This is because it is possible that such effects might be due to changes in other factors at CA schools versus non-CA schools (e.g., more rapid endowment growth in the latter than the former). The effect of the consensus approach implementation is the change in the probability of enrollment in CA schools relative to non-CA schools as a result of the consensus approach implementation. The overall effect of the CA implementation, as well as the effects of the consensus approach implementation on particular groups of students, such as low-income students and those who applied for financial aid, can be obtained similar to the discussion above for the price. In particular, estimates of how the consensus approach implementation affected the probabilities of enrollment of low-income and minority students, and those who applied for financial aid, can be obtained similar to the discussion for the price. Similar to the discussion for the price equation, the effects of the exemption and the student-level variables on enrollment into CA schools will depend on which theory of the exemption is valid. In particular, the social benefit theory would imply an increased likelihood of enrollment into CA schools, especially of low-income students, because prices will be lower, while the opposite would occur under the anti-competitive theory because average price will be higher. We estimated equation 2 for student enrollment using the probit estimation, with probability weights and robust standard errors. The regression estimates are in table 14. The regression model for enrollment in table 14 is significant using the chi-square of the model. As indicated earlier, we expect the estimation results will enable us to determine if the likelihood of enrollment into schools implementing the consensus approach by various student groups is more consistent with the social benefit theory or the anti-competitive theory of the effects of the higher education exemption. The results of estimating equations 1 and 2 for the total effects of the CA implementation on affordability and enrollment are summarized in table 16, based on the regression results in tables 14 and 15. The results for price and enrollment in table 16 contain the key findings of the entire study, with the other variables (tuition and financial aid) providing information that supplements the findings for price.
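A minimal sketch of estimating the enrollment probit in equation 2 appears below; the simulated data and reduced set of student characteristics are invented for illustration and are not the report's actual data or code.

```python
# Sketch of the enrollment probit; ENRCA, AY2003, and the student characteristics
# follow the report's notation, but the data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "AY2003": rng.integers(0, 2, n),   # post-implementation year indicator
    "INCLO": rng.integers(0, 2, n),    # low-income indicator
    "FINAID": rng.integers(0, 2, n),   # applied for financial aid
})
latent = -0.1 + 0.2 * df["AY2003"] + 0.1 * df["INCLO"] - 0.2 * df["FINAID"]
df["ENRCA"] = (latent + rng.normal(0, 1, n) > 0).astype(int)

probit = smf.probit("ENRCA ~ AY2003 * (INCLO + FINAID)", data=df).fit(disp=False)

# Average marginal effects translate the coefficients into changes in the
# probability of enrolling in a CA school.
print(probit.get_margeff().summary())
```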
For the average student, the consensus approach implementation did not significantly change the prices paid by students in CA schools compared to non-CA schools, including the effects on low-income and minority students and students who applied for financial aid. The CA schools, compared to non-CA schools, did not significantly change the tuition they charged students as a result of the consensus approach implementation. The consensus approach implementation did not significantly change the amount of total grant aid received by students in CA schools compared to non-CA schools. Need-based total grant aid: The consensus approach implementation increased the amount of need-based total grant aid received by students in CA schools compared to non-CA schools by about $6,125, with a confidence interval of between $239 and $12,011. The amounts of need-based grant aid received by students in CA schools compared to non-CA schools were higher for middle-income students by about $20,221, with a confidence interval of between $6,718 and $33,724. Asian students received higher need-based grant aid of about $14,628, with a confidence interval of between $5,051 and $24,206; Hispanic students received higher need-based grant aid of about $9,532, with a confidence interval of between $1,006 and $18,059; and white students received higher need-based grant aid of about $6,017, with a confidence interval of between $178 and $11,856. The consensus approach implementation did not significantly change the amount of total aid received by students in CA schools compared to non-CA schools. However, low-income students in CA schools received higher total aid of about $12,121, with a confidence interval of between $1,837 and $22,404. The consensus approach implementation did not significantly change the overall likelihood of enrollment into CA schools compared to non-CA schools, for all types of students.

We discuss, for students with particular characteristics, the estimates of affordability and the likelihood of enrollment in both the schools that adopted the consensus approach and those that did not, before the consensus approach was implemented. The estimates are reported in table 17, based on tables 14 and 15. These estimates could help explain the extent to which the consensus approach affected particular groups of students. For instance, if certain students were receiving higher financial aid awards prior to the consensus approach, they may be less likely to receive much higher awards as a result of its adoption. We also discuss the differential effects that the consensus approach may have had on affordability and enrollment at those schools for students with particular characteristics. The estimates are reported in table 18, based on tables 14 and 15. As already discussed, these estimates indicate how the consensus approach affected students with particular characteristics, assuming all the other characteristics of the students are held constant.

Some students paid lower prices prior to the CA implementation; in particular, financial aid applicants relative to non-financial aid applicants; low-income, lower-middle-income, and middle-income students relative to high-income students; and black and Hispanic students relative to white students. But there were no significant differential effects of implementing the consensus approach on prices paid by these groups of students in CA schools.
Some students received higher total grant aid prior to the consensus approach implementation; in particular, low-income, lower-middle-income, middle-income, black, and Hispanic students. Need-based total grant aid: Some students received higher need-based aid prior to the consensus approach implementation; in particular, low-income, lower-middle-income, middle-income, and black students. But there were no significant differential effects of implementing the consensus approach on prices paid by these groups of students. Some students received higher total aid prior to the consensus approach implementation; in particular, middle-income and black students. But lower-middle-income students received lower total aid prior to the consensus approach implementation. Only low-income students in CA schools received higher aid, compared to high-income students, as a result of implementing the consensus approach. Enrollment: Students generally were not more or less likely to enroll in a CA school prior to the consensus approach implementation. However, implementing the consensus approach lowered the likelihood of enrollment of financial-aid students, compared to non-financial aid applicants, while the likelihood of enrollment of Hispanic students increased, compared to white students, in CA schools.

The findings of the study could be limited by the potential for selection bias if the CA schools had characteristics that we could not control for that made them more inclined to adopt the consensus approach and independently influenced the outcome variables. We believe that this is not a serious problem with the estimation, since the difference-in-difference approach includes CA schools before the implementation of the CA, implying that such a selection problem would require a significant change in the character of these schools over a short time span. Furthermore, a key factor that might motivate schools to join the 568 Group is the legacy of the Overlap group. The 568 Group has objectives that are similar to those stated by the Overlap group—to be able to offer financial aid to more needy students. Our test indicated that the chances of a former Overlap group member joining or not joining the 568 Group did not differ between the two groups of schools in our sample. Thus, the similarity between the two groups, in terms of a school joining the 568 Group, implied that the potential for selection bias may be small.

In our analysis, the total grant aid does not include self-help aid (loans and work study). However, if the true amount of total grant aid should include some proportion of self-help aid, then its exclusion would lead to an underestimation of total grant aid. Nonetheless, we believe this did not significantly affect our results, since we found that the consensus approach implementation did not affect self-help aid. It may be that early admit students pay higher prices because early decision admission might be used by need-blind schools as a screening mechanism to indirectly identify a student’s willingness to pay. Under the early decision process, a non-financial aid student is therefore more likely to be admitted than a financial-aid student of comparable quality. We did not expect the early decision process to affect our results because, while the process might help identify a student with a higher willingness to pay, it is the student’s ability to pay that determines the need-based aid offered by the 568 Group.
Furthermore, the total probability of enrollment of a financial-aid applicant was similar to that of a non-financial aid applicant both before and after the consensus approach implementation, even though the consensus approach implementation tended to decrease the likelihood of enrollment of financial-aid students. We could not include all the schools affiliated with the 568 Group in the analysis because of data limitations. (See the list of unmatched treatment schools in table 9.) However, there were several similarities (in terms of “best college” ranking, endowment, tuition and fees, and percentage of tenured faculty) as well as differences (in terms of freshmen enrollment) between the included and excluded CA colleges. The data were available for only one academic year after implementation of the consensus approach. This could mask potential effects of the consensus approach, since these effects could be gradual rather than immediate and therefore take time to be captured. Also, the small sample size of the data could make the estimates less precise, especially for some of the subgroups of students we considered. However, we checked to ensure that the estimates were consistent with the data by estimating the predicted values corresponding to the observed mean values for price, the key variable of interest, and the financial aid variables. The results, presented in table 19, show that the predictions of our model are qualitatively consistent with the observed data.

We conducted tests to determine whether to use data collected in academic year 1999-2000 and whether schools that attended meetings of the 568 President’s Group but did not implement the consensus approach could be included in our analysis. First, the academic year 1999-2000 was very close to the establishment of the 568 President’s Group, which occurred in 1998. The 1999-2000 academic year might have been a transitional period, and it would therefore not be appropriate to use the data as part of the period before the 568 Group implemented the consensus approach. Second, there were five schools, among the schools with data available for our econometric analysis, that either only attended the 568 Group meetings (Case Western Reserve University, Stanford University, and University of Southern California) or were members of the 568 Group but had not implemented the CA as of 2003 (Brown University and Dartmouth College). We therefore investigated to which group—control or treatment—each of the five schools belonged.

We used the data for sample 4 to investigate whether data collected in 1999-2000 belonged in the pre-CA period (with data collected in 1995-1996). Although both samples 1 and 4 have data for 1995-1996 and 1999-2000, we chose sample 4 because it was the larger sample. See table 9 in appendix II for the list of the schools in each sample and the academic years for which data were available. The tests were performed using the Chow test, which is of the form: (1) y = β01 + β11x1 + β21x2 + u, u ~ N(0, σ²), for group = 1995-1996 (g1), and (2) y = β02 + β12x1 + β22x2 + u, u ~ N(0, σ²), for group = 1999-2000 (g2). Pooling the two groups of data, we estimated (3) y = β01 + β11x1 + β21x2 + (β02–β01)g2 + (β12–β11)g2x1 + (β22–β21)g2x2 + u, where g2 is an indicator variable. The test examines the hypothesis that the added coefficients are jointly zero: (β02–β01) = (β12–β11) = (β22–β21) = 0.
An insignificant test statistic (a small test statistic and a large p-value) suggests that the above equality holds, and there is no difference between the estimates for 1999-2000 and the group with which it is compared (1995-1996). On the other hand, a significant statistic (a large test statistic and a small p-value) suggests that the above equality does not hold and the 1999-2000 data differ from the group with which they are compared (1995-1996). We combined 1999-2000 with 1995-1996 and tested whether the coefficients for 1999-2000 differed from those for 1995-1996, using sample 4. The tests were done for price, the key variable affecting student outcomes for schools. We performed a joint test that the added coefficients in equation 3 are jointly zero. The F-value is 1.71, which is significant with a p-value of 0.0375. This implied that data collected in 1999-2000 did not belong with the 1995-1996 data in the pre-CA period. Similarly, we examined whether 1999-2000 belonged to the post-CA period by combining 1999-2000 with 2003-2004, using sample 3. The F-value of the joint test is 8.36, which is significant with a p-value of 0.0. This implied that the 1999-2000 data did not belong with the 2003-2004 data in the post-CA period. These results suggest that it was more appropriate to exclude 1999-2000 from the analysis, implying that samples 1 and 2, which have data for the pre-CA period (1995-1996) and the post-CA period (2003-2004), would be more appropriate. However, because sample 2 was larger than sample 1, our subsequent analysis used sample 2.

We performed an analysis similar to that described above to determine whether the schools that only attended meetings or had not implemented the consensus approach—Brown University, Case Western Reserve University, Dartmouth College, Stanford University, and University of Southern California (USC)—belonged in the treatment or control group. We determined whether the behavior of each of these schools was more consistent with the control schools or the treatment schools after the consensus approach implementation, using data for 2003-2004. Since we had determined from the above analysis that samples 1 and 2 are more appropriate for our subsequent analysis, we focused on sample 2, the larger sample, for these tests. Similar to the analysis above, we included Brown in the control group and tested whether the coefficients for Brown differed from the control group. We performed a joint test and obtained an F-value of 25.68, significant at 0.00. This implied that Brown did not belong to the control group. For the treatment group test, the F-value was 7.37, significant at 0.00. This also implied that Brown did not belong to the treatment group. Thus, Brown did not belong to either the control or treatment group. For Stanford, the F-value for the control group test was 19.16, significant at 0.00, and the F-value for the treatment group test was 5.59, significant at 0.00. This implied that Stanford did not belong to either the control or treatment group. We tested to which group USC belonged by excluding the SAT scores variable (SCORESAT) from the model, since the data were not available for 2003-2004. The F-value for the control group test was 23.23, significant at 0.00, and the F-value for the treatment group test was 12.54, significant at 0.00. This implied that USC did not belong to either the control or treatment group. Based on the above analysis, we determined that sample 2 provided the best data for our analysis, and we excluded all five schools that attended the 568 Group meetings but did not implement the consensus approach.
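The Chow-type test described above can be illustrated with the following minimal sketch in Python using statsmodels. The file name and variable names (price, x1, x2, year) are placeholders rather than the actual GAO data or program; the restricted/unrestricted comparison reproduces the joint test that the added coefficients in equation 3 are zero.

# Illustrative sketch of the Chow-type test; the data file and variable
# names are assumptions, not GAO's actual program or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pooled_sample4.csv")      # pooled 1995-1996 and 1999-2000 data
df["g2"] = (df["year"] == "1999-2000").astype(int)

# The restricted model pools the two periods; the unrestricted model adds
# the period dummy and its interactions, as in equation 3.
restricted = smf.ols("price ~ x1 + x2", data=df).fit()
unrestricted = smf.ols("price ~ x1 + x2 + g2 + g2:x1 + g2:x2", data=df).fit()

# Joint F-test that the added coefficients are zero; a small p-value
# indicates the 1999-2000 coefficients differ from those for 1995-1996.
f_value, p_value, df_diff = unrestricted.compare_f_test(restricted)
print(f_value, p_value, df_diff)

The same comparison, with an indicator for the candidate school in place of g2, is the form of the tests used above to determine whether Brown, Stanford, and USC behaved like the control or treatment schools.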
The following individuals made important contributions to the report: Sherri Doughty, Assistant Director; Andrea Sykes; John A. Karikari; Angela Miles; Daniele Schiffman; John Mingus; Dayna Shah; Richard Burkard; Susan Bernstein; Rachel Valliere; Robert Alarapon; Thomas Weko; and L. Jerome Gallagher.

Avery, C., and C. Hoxby. “Do and Should Financial Aid Packages Affect Students’ College Choices?” National Bureau of Economic Research Working Paper, No. 9482. 2003.

Bamberger, G., and D. Carlton. “Antitrust and Higher Education: MIT Financial Aid (1993).” Case 11. The Antitrust Revolution (Third Edition: 1993).

Carlton, D., G. Bamberger, and R. Epstein. “Antitrust and Higher Education: Was There a Conspiracy to Restrict Financial Aid?” RAND Journal of Economics, vol. 26, no. 1 (Spring 1995): 131-147.

Epple, D., R. Romano, S. Sarpca, and H. Sieg. “Profiling in Bargaining Over College Tuitions.” Unpublished paper. January 21, 2005.

Hill, C., and G. Winston. “Access: Net Prices, Affordability, and Equity at a Highly Selective College.” Unpublished paper. December 2001.

Hill, C., G. Winston, and S. Boyd. “Affordability: Family Incomes and Net Prices at Highly Selective Private Colleges and Universities.” The Journal of Human Resources, vol. XL, no. 4 (2005): 769-790.

Hoxby, C. “Benevolent Colluders? The Effects of Antitrust Action on College Financial Aid and Tuition.” National Bureau of Economic Research Working Paper, No. 7754. June 2000.

Kim, M. “Early Decision and Financial Aid Competition Among Need-Blind Colleges and Universities.” Unpublished paper. May 1, 2005.

Morrison, R. “Price Fixing Among Elite Colleges and Universities.” The University of Chicago Law Review, vol. 59 (1992): 807-835.

Netz, J. “Non-Profits and Price-Fixing: The Case of the Ivy League.” Unpublished paper. November 1999.

Netz, J. “The End of Collusion?: Competition After Justice and the Ivy League Settle.” Unpublished paper. Fall 2000.

Salop, S., and L. White. “Antitrust Goes to College.” Journal of Economic Perspectives, vol. 5, no. 3 (Summer 1991): 193-202.

Shepherd, G. “Overlap and Antitrust: Fixing Prices in a Smoke-Filled Classroom.” The Antitrust Bulletin, Winter (1995): 859-884.

Winston, G., and C. Hill. “Access to the Most Selective Private Colleges by High-Ability, Low-Income Students: Are They Out There?” Unpublished paper. October 2005. | In 1991 the U.S. Department of Justice sued nine colleges and universities, alleging that they had restrained competition by making collective financial aid determinations for students accepted to more than one of these schools. Against the backdrop of this litigation, Congress enacted a temporary exemption from antitrust laws for higher education institutions in 1992. The exemption allows limited collaboration regarding financial aid practices with the goal of promoting equal access to education. The exemption applies only to institutional financial aid and can only be used by schools that admit students without regard to ability to pay. In passing an extension to the exemption in 2001, Congress directed GAO to study the effects of the exemption. GAO examined (1) how many schools used the exemption and what joint practices they implemented, (2) trends in costs and institutional grant aid at schools using the exemption, (3) how expected family contributions at schools using the exemption compare to those at similar schools not using the exemption, and (4) the effects of the exemption on affordability and enrollment.
GAO surveyed schools, analyzed school and student-level data, and developed econometric models. GAO used extensive peer review to obtain comments from outside experts and made changes as appropriate. Twenty-eight schools--all highly selective, private 4-year institutions--formed a group to use the antitrust exemption and developed a common methodology for assessing financial need, which the group called the consensus approach. The methodology used elements already a part of another need analysis methodology; schools modified this methodology and reached agreement on how to define those elements. By the 2004-2005 school year, 25 of 28 schools in the group were using the consensus approach. Schools' implementation of the approach varied, however, with officials from 12 of the 25 schools reporting that they partially implemented it, in part because they believed it would be costly to do so. Over the last 5 years, tuition, room, and board costs among schools using the antitrust exemption increased by 13 percent compared to 7 percent at all other private 4-year schools not using the exemption. While the amount of institutional aid at schools using the exemption also increased--it did so at a slower rate. The average institutional grant aid award per student increased by 7 percent from $18,675 in 2000-2001 to $19,901 in 2005-2006. There was virtually no difference in the amount students and their families were expected to pay between schools using the exemption and similar schools not using the exemption. While officials from schools using the exemption expected that students accepted to several of their schools would experience less variation in the amount they were expected to pay, GAO found that students accepted to schools using the exemption and comparable schools not using the exemption experienced similar variation in the amount they were expected to pay. Not all schools using the consensus approach chose to adopt all the elements of the methodology, a factor that may account for the lack of consistency in expected family contributions among schools using the exemption. Based on GAO's analysis, schools' use of the consensus approach did not have a significant impact on affordability--the amount students and families paid for college--or affect the likelihood of enrollment at those schools to date. While GAO found that the use of the consensus approach resulted in higher amounts of need-based grant aid awarded to some student groups compared to their counterparts at schools not using the consensus approach, the total amount of grant aid awarded was not significantly affected. It was likely that grant aid awards shifted from non-need-based aid, such as academic and athletic scholarships, to aid based on a student's financial need. Finally, implementing the consensus approach did not increase the likelihood of low-income or minority students enrolling at schools using the consensus approach compared to schools that did not. The group of schools using the exemption reviewed this report and stated it was a careful and objective report. However, they had concerns about the data used in GAO's econometric analysis, which GAO believes were reliable. |
The Army has taken a number of steps since June 2010 at different levels to provide for more effective management and oversight of contracts supporting Arlington, including improving visibility of contracts, establishing new support relationships, formalizing policies and procedures, and increasing the use of dedicated contracting staff to manage and improve acquisition processes. While significant progress has been made, we have recommended that the Army take further action in these areas to ensure continued improvement and institutionalize progress made to date. These recommendations and the agency's response are discussed later in this statement. Arlington does not have its own contracting authority and, as such, relies on other contracting offices to award and manage contracts on its behalf. ANCP receives contracting support in one of two main ways, either by (1) working directly with contracting offices to define requirements, ensure the appropriate contract vehicle, and provide contract oversight, or (2) partnering with another program office to leverage expertise and get help with defining requirements and providing contract oversight. Those program offices, in turn, use other contracting arrangements to obtain services and perform work for Arlington. Using data from multiple sources, we identified 56 contracts and task orders that were active during fiscal year 2010 and the first three quarters of fiscal year 2011 under which these contracting offices obligated roughly $35.2 million on Arlington’s behalf. These contracts and task orders supported cemetery operations, such as landscaping, custodial, and guard services; construction and facility maintenance; and new efforts to enhance information technology systems for the automation of burial operations. Figure 1 identifies the contracting relationships, along with the number of contracts and dollars obligated by contracting office, for the contracts and task orders we reviewed. At the time of our review, we found that ANCP did not maintain complete data on contracts supporting its operations. We have previously reported that the effective acquisition of services requires reliable data to enable informed management decisions. Without complete data, ANCP leadership may be without sufficient information to identify, track, and ensure the effective management and oversight of its contracts. While we obtained information on Arlington contracts from various sources, limitations associated with each of these sources make identifying and tracking Arlington’s contracts as a whole difficult. For example: Internal ANCP data. A contract specialist detailed to ANCP in September 2010 developed and maintained a spreadsheet to identify and track data for specific contracts covering daily cemetery operations and maintenance services. Likewise, ANCP resource management staff maintain a separate spreadsheet that tracks purchase requests and some associated contracts, as well as the amount of funding provided to other organizations through the use of military interdepartmental purchase requests. Neither of these spreadsheets identifies the specific contracts and obligations associated with Arlington’s current information technology and construction requirements. Existing contract and financial systems. The Federal Procurement Data System-Next Generation (FPDS-NG) is the primary system used to track governmentwide contract data, including those for the Department of Defense (DOD) and the Army.
The Arlington funding office identification number, a unique code that is intended to identify transactions specific to Arlington, is not consistently used in this system and, in fact, was used for only 34 of the 56 contracts in our review. In October 2010 and consistent with a broader Army initiative, ANCP implemented the General Fund Enterprise Business System (GFEBS) to enhance financial management and oversight and to improve its capability to track expenditures. We found that data in this system did not identify the specific information technology contracts supported by the Army Communications-Electronics Command, Army Geospatial Center, Naval Supply Systems Command Weapon Systems Support office, and others. Officials at ANCP and at the MICC-Fort Belvoir stated that they were exploring the use of additional data resources to assist in tracking Arlington contracts, including the Virtual Contracting Enterprise, an electronic tool intended to help enable visibility and analysis of elements of the contracting process. Contracting support organizations. We also found that Army contracting offices had difficulty in readily providing complete and accurate data to us on Arlington contracts. For example, the National Capital Region Contracting Center could not provide a complete list of active contracts supporting Arlington during fiscal years 2010 and 2011 and in some cases did not provide accurate dollar amounts associated with the contracts it identified. USACE also had difficulty providing a complete list of active Arlington contracts for this time frame. The MICC-Fort Belvoir contracting office was able to provide a complete list of the recently awarded contracts supporting Arlington with accurate dollar amounts for this time frame, and those data were supported by similar information from Arlington. The Army has also taken a number of steps to better align ANCP contract support with the expertise of its partners. However, some of the agreements governing these relationships do not yet fully define roles and responsibilities for contracting support. We have previously reported that a key factor in improving DOD’s service acquisition outcomes—that is, obtaining the right service, at the right price, in the right manner—is having defined responsibilities and associated support structures. Going forward, sustained attention on the part of ANCP and its partners will be important to ensure that contracts of all types and risk levels are managed effectively. The following summarizes ongoing efforts in this area: ANCP established a new contracting support agreement with the Army Contracting Command in August 2010. The agreement states that the command will assign appropriate contracting offices to provide support, in coordination with ANCP, and will conduct joint periodic reviews of new and ongoing contract requirements. In April 2011, ANCP also signed a separate agreement with the MICC, part of the Army Contracting Command, which outlines additional responsibilities for providing contracting support to ANCP. While this agreement states that the MICC, through the Fort Belvoir contracting office, will provide the full range of contracting support, it does not specify the types of requirements that will be supported, nor does it specify that other offices within the command may also do so. 
ANCP signed an updated support agreement with USACE in December 2010, which states that these organizations will coordinate to assign appropriate offices to provide contracting support and that USACE will provide periodic joint reviews of ongoing and upcoming requirements. At the time of our review, USACE officials noted that they were in the process of finalizing an overarching program management plan with ANCP, which, if implemented, provides additional detail about the structure of and roles and responsibilities for support. USACE and ANCP have also established a Senior Executive Review Group, which updates the senior leadership at both organizations on the status of ongoing efforts. ANCP has also put agreements in place with the Army Information Technology Agency (ITA) and the Army Analytics Group, which provide program support for managing information technology infrastructure and enhance operational capabilities. Officials at ANCP decided to leverage this existing Army expertise, rather than attempting to develop such capabilities independently as was the case under the previous Arlington management. For example, the agreement in place with ITA identifies the services that will be provided to Arlington, performance metrics against which ITA will be measured, as well as Arlington’s responsibilities. These organizations are also responsible for managing the use of contracts in support of their efforts; however, the agreement with ANCP does not specifically address roles and responsibilities associated with the use and management of these contracts supporting Arlington requirements. Although officials from these organizations told us that they currently understand their responsibilities, without being clearly defined in the existing agreements, roles and responsibilities may be less clear in the future when personnel change. ANCP has developed new internal policies and procedures and improved training for staff serving as contracting officer’s representatives, and has dedicated additional staff resources to improve contract management. Many of these efforts were in process at the time of our review, including decisions on contracting staff needs, and their success will depend on continued management attention. The following summarizes our findings in this area: Arlington has taken several steps to more formally define its own internal policies and procedures for contract management. In July 2010, the Executive Director of ANCP issued guidance stating that the Army Contracting Command and USACE are the only authorized contracting centers for Arlington. Further, ANCP is continuing efforts to (1) develop standard operating procedures associated with purchase requests; (2) develop memorandums for all ANCP employees that outline principles of the procurement process, as well as training requirements for contracting officer’s representatives; and (3) create a common location for reference materials and information associated with Arlington contracts. In May 2011, the Executive Director issued guidance requiring contracting officer’s representative training for all personnel assigned to perform that role, and at the time of our review, all of the individuals serving as contracting officer’s representatives had received training for that position. ANCP, in coordination with the MICC-Fort Belvoir contracting office is evaluating staffing requirements to determine the appropriate number, skill level, and location of contracting personnel. 
In July 2010, the Army completed a study that assessed Arlington’s manpower requirements and identified the need for three full-time contract specialist positions. While these positions have not been filled to date, ANCP’s needs have instead been met through the use of staff provided by the MICC. At the time of our review, the MICC-Fort Belvoir was providing a total of 10 contracting staff positions in support of Arlington, 5 of which are funded by ANCP, with the other 5 funded by the MICC-Fort Belvoir to help ensure adequate support for Arlington requirements. ANCP officials have identified the need for a more senior contracting specialist and stated that they intend to request an update to their staffing allowance for fiscal year 2013 to fill this new position. Prior reviews of Arlington have identified numerous issues with contracts in place prior to the new leadership at ANCP. While our review of similar contracts found common concerns, we also found that contracts and task orders awarded since June 2010 reflect improvements in acquisition practices. Our previous contracting-related work has identified the need to have well-defined requirements, sound business arrangements (i.e., contracts in place), and the right oversight mechanisms to ensure positive outcomes. We found examples of improved documentation, better definition and consolidation of existing requirements for services supporting daily cemetery operations, and more specific requirements for contractor performance. At the time of our review, many of these efforts were still under way, so while initial steps taken reflect improvement, their ultimate success is not yet certain. The Army has also taken positive steps and implemented improvements to address other management deficiencies and to provide information and assistance to families. It has implemented improvements across a broad range of areas at Arlington, including developing procedures for ensuring accountability over remains, taking actions to better provide information- assurance, and improving its capability to respond to the public and to families’ inquiries. For example, Arlington officials have updated and documented the cemetery’s chain-of-custody procedures for remains, to include multiple verification steps by staff members and the tracking of decedent information through a daily schedule, electronic databases, and tags affixed to urns and caskets entering Arlington. Nevertheless, we identified several areas where challenges remain: Managing information-technology investments. Since June 2010, ANCP has invested in information-technology improvements to correct existing problems at Arlington and has begun projects to further enhance the cemetery’s information-technology capabilities. However, these investments and planned improvements are not yet guided by an enterprise architecture—or modernization blueprint. Our experience has shown that developing this type of architecture can help minimize risk of developing systems that are duplicative, poorly integrated, and unnecessarily costly to maintain. ANCP is working to develop an enterprise architecture, and officials told us in January that they expect the architecture will be finalized in September 2012. 
Until the architecture is in place and ANCP’s ongoing and planned information technology investments are assessed against that architecture, ANCP lacks assurance that these investments will be aligned with its future operational environment, increasing the risk that modernization efforts will not adequately meet the organization’s needs. Updating workforce plans. The Army took a number of positive steps to address deficiencies in its workforce plans, including completing an initial assessment of its organizational structure in July 2010 after the Army IG found that Arlington was significantly understaffed. However, ANCP’s staffing requirements and business processes have continued to evolve, and these changes have made that initial workforce assessment outdated. Since the July 2010 assessment, officials have identified the need for a number of new positions, including positions in ANCP’s public-affairs office and a new security and emergency-response group. Additionally, Arlington has revised a number of its business processes, which could result in a change in staffing needs. Although ANCP has adjusted its staffing levels to address emerging requirements, its staffing needs have not been formally reassessed. Our prior work has demonstrated that this kind of assessment can improve workforce planning, which can enable an organization to remain aware of and be prepared for its current and future needs as an organization. ANCP officials have periodically updated Arlington’s organizational structure as they identify new requirements, and officials told us in January that they plan to completely reassess staffing within ANCP in the summer of 2012 to ensure that it has the staff needed to achieve its goals and objectives. Until this reassessment is completed and documented, ANCP lacks assurance that it has the correct number and types of staff needed to achieve its goals and objectives. Developing an organizational assessment program. Since 2009 ANCP has been the subject of a number of audits and assessments by external organizations that have reviewed many aspects of its management and operations, but it has not yet developed its own assessment program for evaluating and improving cemetery performance on a continuous basis. Both the Army IG and VA have noted the importance of assessment programs in identifying and enabling improvements of cemetery operations to ensure that cemetery standards are met. Further, the Army has emphasized the importance of maintaining an inspection program that includes a management tool to identify, prevent, or eliminate problem areas. At the time of our review, ANCP officials told us they were in the process of developing an assessment program and were adapting VA’s program to meet the needs of the Army’s national cemeteries. ANCP officials estimated in January that they will be ready to perform their first self-assessment in late 2012. Until ANCP institutes an assessment program that includes an ability to complete a self- assessment of operations and an external assessment by cemetery subject-matter experts, it is limited in its ability to evaluate and improve aspects of cemetery performance. Coordinating with key partners. While ANCP has improved its coordination with other Army organizations, we found that it has encountered challenges in coordinating with key operational partners, such as the Military District of Washington, the military service honor guards, and Joint Base Myer-Henderson Hall. 
Officials from these organizations told us that communication and collaboration with Arlington have improved, but they have encountered challenges and there are opportunities for continued improvement. For example, officials from the Military District of Washington and the military service honor guards indicated that at times they have experienced difficulties working with Arlington’s Interment Scheduling Branch and provided records showing that from June 24, 2010, through December 15, 2010, there were at least 27 instances where scheduling conflicts took place. These challenges are due in part to a lack of written agreements that fully define how these operational partners will support and interact with Arlington. Our prior work has found that agencies can derive benefits from enhancing and sustaining their collaborative efforts by institutionalizing these efforts with agreements that define common outcomes, establish agreed-upon roles and responsibilities, identify mechanisms used to monitor and evaluate collaborative efforts, and enable the organizations to leverage their resources. ANCP has a written agreement in place with Joint Base Myer-Henderson Hall, but this agreement does not address the full scope of how these organizations work together. Additionally, ANCP has drafted, but has not yet signed, a memorandum of agreement with the Military District of Washington. ANCP has not drafted memorandums of agreement with the military service honor guards despite each military service honor guard having its own scheduling procedure that it implements directly with Arlington and each service working with Arlington to address operational challenges. ANCP, by developing memorandums of agreement with its key operational partners, will be better positioned to ensure effective collaboration with these organizations and help to minimize future communication and coordination challenges. Developing a strategic plan. Although ANCP officials have been taking steps to address challenges at Arlington, at the time of our review they had not adopted a strategic plan aimed at achieving the cemetery’s longer-term goals. An effective strategic plan can help managers to prioritize goals; identify actions, milestones, and resource requirements for achieving those goals; and establish measures for assessing progress and outcomes. Our prior work has shown that leading organizations prepare strategic plans that define a clear mission statement, a set of outcome-related goals, and a description of how the organization intends to achieve those goals. Without a strategic plan, ANCP is not well positioned to ensure that cemetery improvements are in line with the organizational mission and achieve desired outcomes. ANCP officials told us during our review that they were at a point where the immediate crisis at the cemetery had subsided and they could focus their efforts on implementing their longer-term goals and priorities. In January, ANCP officials showed us a newly developed campaign plan. While we have not evaluated this plan, our preliminary review found that it contains elements of an effective strategic plan, including expected outcomes and objectives for the cemetery and related performance metrics and milestones. Developing written guidance for providing assistance to families. After the Army IG issued its findings in June 2010, numerous families called Arlington to verify the burial locations of their loved ones. ANCP developed a protocol for investigating these cases and responding to the families. 
Our review found that ANCP implemented this protocol, and we reviewed file documentation for a sample of these cases. In reviewing the assistance provided by ANCP when a burial error occurred, we found that ANCP’s Executive Director or Chief of Staff contacted the affected families. ANCP’s Executive Director—in consultation with cemetery officials and affected families—made decisions on a case-by-case basis about the assistance that was provided to each family. For instance, some families who lived outside of the Washington, D.C., area were reimbursed for hotel and travel costs. However, the factors that were considered when making these decisions were not documented in a written policy. In its June 2010 report, the Army IG noted in general that the absence of written policies left Arlington at risk of developing knowledge gaps as employees leave the cemetery. By developing written guidance that addresses the cemetery’s interactions with families affected by burial errors, ANCP could identify pertinent DOD and Army regulations and other guidance that should be considered when making such decisions. Also, with written guidance the program staff could identify the types of assistance that can be provided to families. In January, ANCP provided us with a revised protocol for both agency-identified and family member-initiated gravesite inquiries. The revised protocol provides guidance on the cemetery's interactions with the next of kin and emphasizes the importance of maintaining transparency and open communication with affected families.

A transfer of jurisdiction for the Army’s two national cemeteries to VA is feasible based on historical precedent for the national cemeteries and examples of other reorganization efforts in the federal government. However, we identified several factors that may affect the advisability of making such a change, including the potential costs and benefits, potential transition challenges, and the potential effect on Arlington’s unique characteristics. In addition, given that the Army has taken steps to address deficiencies at Arlington and has improved its management, it may be premature to move forward with a change in jurisdiction, particularly if other options for improvement exist that entail less disruption. During our review, we identified opportunities for enhancing collaboration between the Army and VA that could leverage their strengths and potentially lead to improvements at all national cemeteries.

Transferring cemetery jurisdiction could have both benefits and costs. Our prior work suggests that government reorganization can provide an opportunity for greater effectiveness in program management and result in improved efficiency over the long term, and can also result in short-term operational costs. At the time of our review, Army and VA officials told us they were not aware of relevant studies that may provide insight into the potential benefits and costs of making a change in cemetery jurisdiction. However, our review identified areas where VA’s and the Army’s national cemeteries have similar, but not identical, needs and have developed independent capabilities to meet those needs. For example, each agency has its own staff, processes, and systems for determining burial eligibility and scheduling and managing burials. While consolidating these capabilities may result in long-term efficiencies, there could also be challenges and short-term costs. Potential transition challenges may arise in transferring cemetery jurisdiction.
Army and VA cemeteries have similar operational requirements to provide burial services for service members, veterans, and veterans’ family members; however, officials identified areas where the organizations differ and stated that there could be transition challenges if VA were to manage Arlington, including challenges pertaining to the regulatory framework, appropriations structure, and contracts. For example, Arlington has more restrictive eligibility criteria for in-ground burials, which has the result of limiting the number of individuals eligible for burial at the cemetery. If Arlington cemetery were to be subject to the same eligibility criteria as VA’s cemeteries, the eligibility for in-ground burials at Arlington would be greatly expanded. Additionally, the Army’s national cemeteries are funded through a different appropriations structure than VA’s national cemeteries. If the Army’s national cemeteries were transferred to VA, Congress would have to choose whether to alter the funding structure currently in place for Arlington. Burial eligibility at VA’s national cemeteries is governed by 38 U.S.C. § 2402 and 38 C.F.R. § 38.620. Burial eligibility at Arlington is governed by 38 U.S.C. § 2410 and 32 C.F.R. § 553.15. Mission and vision statements. The Army and VA have developed their own mission and vision statements for their national cemeteries that differ in several ways. Specifically, VA seeks to be a model of excellence for burials and memorials, while Arlington seeks to be the nation’s premier military cemetery. Military honors provided to veterans. The Army and VA have varying approaches to providing military funeral honors. VA is not responsible for providing honors to veterans, and VA cemeteries generally are not involved in helping families obtain military honors from DOD. In contrast, Arlington provides a range of burial honors depending on whether an individual is a service member killed in action, a veteran, or an officer. Ceremonies and special events. Arlington hosts a large number of ceremonies and special events in a given year, some of which may involve the President of the United States as well as visiting heads of state. From June 10, 2010, through October 1, 2011, Arlington hosted more than 3,200 wreath-laying ceremonies, over 70 memorial ceremonies, and 19 state visits, in addition to Veterans Day and Memorial Day ceremonies, and also special honors for Corporal Frank Buckles, the last American servicemember from World War I. VA officials told us that their cemeteries do not support a similar volume of ceremonies, and as a result they have less experience in this area than the Army. During our review, we found that there are opportunities to expand collaboration between the Army and VA that could improve the efficiency and effectiveness of these organizations’ cemetery operations. Our prior work has shown that achieving results for the nation increasingly requires that federal agencies work together, and when considering the nation’s long-range fiscal challenges, the federal government must identify ways to deliver results more efficiently and in a way that is consistent with its limited resources. Since the Army IG issued its findings in June 2010, the Army and VA have taken steps to partner more effectively. The Army’s hiring of several senior VA employees to help manage Arlington has helped to foster collaboration, and the two agencies signed a memorandum of understanding that allows ANCP employees to attend classes at VA’s National Training Center. 
However, the Army and VA may have opportunities to collaborate and avoid duplication in other areas that could benefit the operations of either or both cemetery organizations. For example, the Army and VA are upgrading or redesigning some of their core information technology systems supporting cemetery operations. By continuing to collaborate in this area, the agencies can better ensure that their information-technology systems are able to communicate, thereby helping to prevent operational challenges stemming from a lack of compatibility between these systems in the future. In addition, each agency may have specialized capabilities that it could share with the other. VA, for example, has staff dedicated to determining burial eligibility, and the Army has an agency that provides geographic-information-system and global-positioning-system capabilities—technologies that VA officials said they are examining for use at VA’s national cemeteries. While the Army and VA have taken steps to improve collaboration, at the time of our review the agencies had not established a formal mechanism to identify and analyze issues of shared interest, such as process improvements, lessons learned, areas for reducing duplication, and solutions to common problems. VA officials indicated that they planned to meet with ANCP officials in the second quarter of fiscal year 2012, with the aim of enhancing collaboration between the two agencies. Unless the Army and VA collaborate to identify areas where the agencies can assist each other, they could miss opportunities to take advantage of each other’s strengths—thereby missing chances to improve the efficiency and effectiveness of cemetery operations—and are at risk of investing in duplicative capabilities.

The success of the Army’s efforts to improve contracting and management at Arlington will depend on continued focus in various areas. Accordingly, we made a number of recommendations in our December 2011 reports. In the area of contracting, we recommended that the Army implement a method to track complete and accurate contract data, ensure that support agreements clearly identify roles and responsibilities for contracting, and determine the number and skills necessary for contracting staff. In its written comments, DOD partially concurred with these recommendations, agreeing that there is a need to take actions to address the issues we raised, but indicating that our recommendations did not adequately capture Army efforts currently under way. We believe our report reflects the significant progress made by Arlington and that implementation of our recommendations will help to institutionalize the positive steps taken to date. With regard to our recommendation to identify and implement a method to track complete and accurate contract data, DOD noted that Arlington intends to implement, by April 2012, a methodology based on an electronic tool that is expected to collect and reconcile information from a number of existing data systems. Should this methodology consider the shortcomings within these data systems as identified in our report, we believe it would satisfy our recommendation. DOD noted planned actions, expected to be completed by March 2012, that, if implemented, would satisfy the intent of our other two recommendations.
With regard to other management challenges at Arlington, we recommended that the Army implement its enterprise architecture and reassess ongoing and planned information-technology investments; update its assessment of ANCP’s workforce needs; develop and implement a program for assessing and improving cemetery operations; develop memorandums of understanding with Arlington’s key operational partners; develop a strategic plan; and develop written guidance to help determine the types of assistance that will be provided to families affected by burial errors. DOD fully agreed with our recommendations that the Army update its assessment of ANCP's workforce needs and implement a program for assessing and improving cemetery operations. DOD partially agreed with our other recommendations. In January, ANCP officials provided us with updates on ANCP's plans to take corrective actions, as discussed in this statement. With regard to implementing an enterprise architecture, DOD stated that investments made to date in information technology have been modest and necessary to address critical deficiencies. We recognize that some vulnerabilities must be expeditiously addressed. Nevertheless, our prior work shows that organizations increase the risk that their information technology investments will not align with their future operational environment if these investments are not guided by an approved enterprise architecture. Regarding its work with key operational partners, DOD stated that it recognizes the value of establishing memorandums of agreement and noted the progress that the Army has made in developing memorandums of agreement with some of its operational partners. We believe that the Army should continue to pursue and finalize agreements with key operational partners that cover the full range of areas where these organizations must work effectively together. With regard to a strategic plan, DOD stated that it was in the process of developing such a plan. As discussed previously, ANCP officials in January showed us a newly developed campaign plan that, based on our preliminary review, contains elements of an effective strategic plan. Regarding written guidance on the factors that the Executive Director will consider when determining the types of assistance provided to families affected by burial errors, DOD stated that such guidance would limit the Executive Director's ability to exercise leadership and judgment to make an appropriate determination. We disagree with this view. Our recommendation does not limit the Executive Director's discretion, which we consider to be an essential part of ensuring that families receive the assistance they require in these difficult situations. Our recommendation, if implemented, would improve visibility into the factors that guide decision-making in these cases. Finally, we recommended that the Army and VA implement a joint working group or other such mechanism to enable ANCP and VA’s National Cemetery Administration to collaborate more closely in the future. Both DOD and VA concurred with this recommendation. As noted, VA stated that a planning meeting to enhance collaboration is planned for the second quarter of 2012. Chairman McCaskill, Ranking Member Portman, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time.
For questions about this statement, please contact Brian Lepore, Director, Defense Capabilities and Management, at (202) 512-4523 or leporeb@gao.gov, or Belva Martin, Director, Acquisition and Sourcing Management, at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony include Tom Gosling, Assistant Director; Brian Mullins, Assistant Director; Kyler Arnold; Russell Bryan; George M. Duncan; Kathryn Edelman; Julie Hadley; Kristine Hassinger; Lina Khan; and Alex Winograd. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Arlington National Cemetery (Arlington) contains the remains of more than 330,000 military servicemembers, their family members, and others. In June 2010, the Army Inspector General identified problems at the cemetery, including deficiencies in contracting and management, burial errors, and a failure to notify next of kin of errors. In response, the Secretary of the Army issued guidance creating the position of the Executive Director of the Army National Cemeteries Program (ANCP) to manage Arlington and requiring changes to address the deficiencies and improve cemetery operations. In response to Public Law 111-339, GAO assessed several areas, including (1) actions taken to improve contract management and oversight, (2) the Army’s efforts to address identified management deficiencies and provide information and assistance to families regarding efforts to detect and correct burial errors, and (3) factors affecting the feasibility and advisability of transferring jurisdiction for the Army’s national cemeteries to the Department of Veterans Affairs (VA). The information in this testimony summarizes GAO’s recent reports on Arlington contracting (GAO-12-99) and management (GAO-12-105). These reports are based on, among other things, analyzing guidance, policies, plans, contract files, and other documentation from the Army, Arlington, and other organizations and interviews with Army and VA officials. GAO identified 56 contracts and task orders that were active during fiscal year 2010 and the first three quarters of fiscal year 2011 under which contracting offices obligated roughly $35.2 million on Arlington’s behalf. These contracts supported cemetery operations, construction and facility maintenance, and new efforts to enhance information technology systems for the automation of burial operations. The Army has taken a number of steps since June 2010 at different levels to provide for more effective management and oversight of contracts, establishing new support relationships, formalizing policies and procedures, and increasing the use of dedicated contracting staff to manage and improve its acquisition processes. However, GAO found that ANCP does not maintain complete data on its contracts, responsibilities for contracting support are not yet fully defined, and dedicated contract staffing arrangements still need to be determined.
The success of Arlington's acquisition outcomes will depend on continued management focus from ANCP and its contracting partners to ensure sustained attention to contract management and institutionalize progress made to date. GAO made three recommendations to continue improvements in contract management. The Department of Defense (DOD) partially concurred and noted actions in progress to address these areas. The Army has taken positive steps and implemented improvements to address other management deficiencies and to provide information and assistance to families. It has implemented improvements across a broad range of areas at Arlington, including developing procedures for ensuring accountability over remains and improving its capability to respond to the public and to families' inquiries. Nevertheless, the Army has remaining management challenges in several areas: managing information technology investments, updating workforce plans, developing an organizational assessment program, coordinating with key partners, developing a strategic plan, and developing guidance for providing assistance to families. GAO made six recommendations to help address these areas. DOD concurred or partially concurred and has begun to take some corrective actions. A transfer of jurisdiction for the Army's two national cemeteries to VA is feasible based on historical precedent for the national cemeteries and examples of other reorganization efforts in the federal government. However, several factors may affect the advisability of making such a change, including the potential costs and benefits, potential transition challenges, and the potential effect on Arlington's unique characteristics. In addition, given that the Army has taken steps to address deficiencies at Arlington and has improved its management, it may be premature to move forward with a change in jurisdiction, particularly if other options for improvement exist that entail less disruption. GAO identified opportunities for enhancing collaboration between the Army and VA that could leverage their strengths and potentially lead to improvements at all national cemeteries. GAO recommended that the Army and VA develop a mechanism to formalize collaboration between these organizations. DOD and VA concurred with this recommendation. In the reports, GAO made several recommendations to help Arlington sustain progress made to date. |
US-VISIT is a large, complex governmentwide program intended to achieve the goals of (1) enhancing the security of U.S. citizens and visitors, (2) facilitating legitimate travel and trade, (3) ensuring the integrity of the U.S. immigration system, and (4) protecting the privacy of visitors. The program is intended to carry out these goals by collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying visitor identity, and determining visitor admissibility through the use of biometrics (digital fingerprints and a digital photograph); and facilitating information sharing and coordination within the immigration and border management community. Currently, US-VISIT’s scope includes the pre-entry, entry, status, and exit of hundreds of millions of foreign national travelers who enter and leave the United States at over 300 air, sea, and land POEs. The current statutory framework for US-VISIT originates with a requirement to implement an integrated entry and exit data system for foreign nationals, enacted in the Immigration and Naturalization Service Data Management Improvement Act (DMIA) of 2000. The DMIA replaced in its entirety a provision of the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (IIRIRA) that had required an automated system to record and then match the departure of every foreign national from the United States to the individual’s arrival record. The DMIA instead required an electronic system that would provide access to and integrate foreign national arrival and departure data that are authorized or required to be created or collected under law and are in an electronic format in certain databases, such as those used at POEs and consular offices. Unlike the earlier law, the DMIA specifically provided that it not be interpreted to impose any new documentary or data collection requirements on any person, but it also provided that it not be construed to reduce or curtail the authority of DHS or State under any other provision of law. Thus, the DMIA did not specifically require the collection of any new data on foreign nationals departing at land POEs. The system as described in the DMIA is to compare available arrival records to available departure records; allow on-line search procedures to identify foreign nationals who may have overstayed their authorized period of admission; and use available data to produce a report of arriving and departing foreign nationals. The DMIA also required the implementation of the system at airports and seaports by December 31, 2003, at the 50 highest volume land POEs by December 31, 2004; and at all remaining POEs by December 31, 2005. Laws passed after the DMIA also provided specific requirements with regard to the use of biometrics for those entering and leaving the country. For example, the USA PATRIOT Act required, by October 26, 2003, the development and certification of a technology standard, including appropriate biometric identifier standards, that can be used to verify the identity of persons applying for a U.S. 
visa, or seeking to enter the United States pursuant to a visa, for the purposes of conducting background checks, confirming identity, and ensuring that a person has not received a visa under a different name. The act also provided that in developing US- VISIT, DHS and State were to focus particularly on the utilization of biometric technology and the development of tamper-resistant documents readable at POEs. The Enhanced Border Security and Visa Entry Reform Act of 2002 required DHS and State to implement, fund, and use the technology standard, including biometric identifier standards, developed under the USA PATRIOT Act at U.S. POEs; it also required the installation at all POEs of equipment and software to allow biometric comparison and authentication of all U.S. visas and other travel and entry documents issued to aliens, and passports issued by Visa Waiver Program participating countries with biometric identifiers. The Intelligence Reform and Terrorism Prevention Act of 2004, unlike the DMIA, specifically required the collection of biometric exit data for all categories of individuals required to provide biometric entry data under US-VISIT, regardless of the port of entry where they entered the United States. The 2004 law did not set a deadline for implementation of this requirement, however. Appendix III discusses the legislative history of the US-VISIT program in greater detail. Within DHS, the US-VISIT Program Office is headed by the US-VISIT Director, who reports directly to the Deputy Secretary for Homeland Security. The US-VISIT Program Office has responsibility for managing the acquisition, deployment, operation, and sustainment of US-VISIT and has been delivering US-VISIT capability incrementally. According to US-VISIT, increments 1 and 2 include a mix of interim or temporary solutions and permanent deployments. For example, increment 1B, dealing with exit capability at airports, is still being piloted, while US-VISIT entry capability at the 50 busiest land POEs—increment 2B—is considered to be a permanent deployment. Increment 3—providing entry capability at the land POEs not covered under Increment 2B—is considered by US-VISIT to be a permanent deployment and increment 4 is, according to US-VISIT, the yet-to-be defined US-VISIT strategic capability. Table 1 summarizes the scope, timeline, and intended functionality of the US-VISIT increment schedule. This report focuses generally, but not exclusively, on increments 2B (entry capability at the 50 busiest land POEs), 2C (exit capability at the 50 busiest land POEs), and 3 (entry capability at the remaining land POEs)—the increments and information that are shown in bold in table 1. From fiscal year 2003 through fiscal year 2007, total funding for the US- VISIT program has been about $1.7 billion. Table 2 summarizes appropriations for US-VISIT for fiscal years 2003 through 2007, as enacted. In prior reports on US-VISIT, we have identified numerous challenges that DHS faces in delivering program capabilities and benefits on time and within budget. In September 2003, we reported that the US-VISIT program is a risky endeavor, both because of the type of program it is (large, complex, and potentially costly) and because of the way that it was being managed. We reported, for example, that the program’s acquisition management process had not been established, and that US-VISIT lacked a governance structure. In March 2004, we testified that DHS faces a major challenge maintaining border security while still welcoming visitors. 
Preventing the entry of persons who pose a threat to the United States cannot be guaranteed, and the missed entry of just one can have severe consequences. Also, US-VISIT is to achieve the important law enforcement goal of identifying those who overstay or otherwise violate the terms of their visas. Complicating the achievement of these security and law enforcement goals are other key US-VISIT goals: facilitating trade and travel through POEs and providing for enforcement of U.S. privacy laws and regulations. Subsequently, in May 2004, we reported that DHS had not employed the kind of rigorous and disciplined management controls typically associated with successful programs. Moreover, in February 2006, we reported that while DHS had taken steps to implement most of the recommendations from our 2003 and 2004 reports, progress in critical areas had been slow. Of 18 recommendations we made since 2003, only 2 had been fully implemented, 11 had been partially implemented, and 5 were in the process of being implemented, although the extent to which they would be fully carried out was not yet known. As mentioned earlier, US-VISIT currently applies to a certain group of foreign nationals—non-immigrants from countries whose residents are required to obtain nonimmigrant visas before entering the United States and residents of certain countries who are exempt from U.S. visa requirements when they apply for admission to the United States for up to 90 days for tourism or business purposes under the Visa Waiver Program. US-VISIT also applies to (1) Mexican nonimmigrants traveling with a Border Crossing Card (BCC) who wish to remain in the United States longer than 30 days or who declare that they intend to travel more than 25 miles into the country from the border (or more than 75 miles from the Arizona border in the Tucson area) and (2) Canadians traveling to the United States for certain specialized reasons. Most land border crossers—including U.S. citizens, lawful permanent residents, and most Canadian and Mexican citizens—are, by regulation or statute, not required to enroll into US-VISIT. In fiscal year 2004, for example, U.S. citizens and lawful permanent residents comprised about 57 percent of land border crossers; Canadian and Mexican citizens comprised about 41 percent; and less than 2 percent were US-VISIT enrollees. Figure 1 shows the number and percent of persons processed under US- VISIT as a percentage of all border crossings at land, air, and sea POEs in fiscal year 2004. Foreign nationals covered by US-VISIT enter the United States via a multi- step process. For individuals required to obtain visas before entering the United States, the US-VISIT process begins overseas at U.S. consular offices, which in addition to other processes, collect biographic data (i.e., country of origin and date of birth) and biometric data (i.e., digital fingerscans and a digital photograph) from the applicant. These data are checked against databases or watch lists of known criminals and suspected terrorists. If the individual’s name does not appear on any watch list and the individual is not disqualified on the basis of other issues that may be relevant, he or she is to be issued a visa and may seek admission to the United States at a POE. 
When visitors in vehicles first arrive at a land POE, they initially enter the primary inspection area where CBP officers, often located in booths, are to visually inspect travel documents and query the visitors about such matters as their place of birth and proposed destination. Visitors arriving as pedestrians enter an equivalent primary inspection area, generally inside a CBP building. If the CBP officer believes a more detailed inspection is needed or if the visitors are required to be processed under US-VISIT for the first time, the visitors are to be referred to the secondary inspection area—an area away from the primary inspection area—which is generally inside a facility. The secondary inspection area inside the facility generally contains office space, waiting areas, and space to process visitors, including US-VISIT enrollees. Equipment used for US-VISIT processing includes a computer, printer, digital camera, and a two- fingerprint scanner. Figure 2 shows US-VISIT equipment installed at one land POE. CBP officers use a document reader to scan machine readable travel documents, such as a passport or visa, and use computers to check biographic data from the documents against watch list databases. For US- VISIT processing, biometric verification is performed in part by taking a digital scan of visitors’ fingerprints (the left and right index fingers) and by taking a digital photograph of the visitor. These data are stored in the system’s databases. The computer system compares the two index fingerprints to those stored in DHS’s Automated Biometric Identification System (IDENT) that, among other things, collects and stores biometric data about foreign nationals, including FBI information on all known and suspected terrorists. If the fingerprints are already in IDENT, the system performs a match against the existing digital scans to confirm that the person submitting the fingerprints at secondary inspection at the POE is the one on file. In addition, the CBP officer visually compares the person to the photograph that is in the database, which is brought up onto the computer screen. If no prints are found in IDENT (for example, if the visitor is from a visa- waiver country), that person is then processed into US-VISIT, with biographic data entered into the databases, a digital scan of his or her two index fingerprints, and a digital photograph. Once the CBP officer deems the visitor to be admissible, the individual is issued an I-94 or an I-94W (for persons from visa waiver countries) arrival/departure form. Figure 3 shows how U.S. citizens and most Mexicans, Canadians, and foreign nationals subject to US-VISIT are to be processed at land POEs. In addition to IDENT, US-VISIT relies on a number of information systems to process visitors. Among the computer software applications utilized as part of US-VISIT is U.S. Arrival, which provides an integrated process for issuing I-94 forms and collection of biometric data for visitors covered by US-VISIT who arrive at land POEs. Another is U.S. Pedestrian, which is used by CBP officers in conducting inspections of visitors who arrive at land POEs, entering the United States on foot, mostly along the southern border. As of August 2006, there were 170 land POEs that are geographically dispersed along the nation’s more than 7,500 miles of borders with Canada and Mexico. Some are located in rural areas (such as Alexandria Bay, New York and Blaine-Pacific Highway, Washington) and others in cities (such as Detroit) or in U.S. 
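To make the sequence above concrete, the sketch below walks through the same decision flow in simplified form. It is purely illustrative: the class names, method signatures, and data structures (for example, WatchList and Ident.lookup) are hypothetical stand-ins and do not represent the actual IDENT, TECS, or watch-list interfaces.

```python
# Illustrative sketch of the US-VISIT secondary-inspection decision flow.
# All names and interfaces are hypothetical stand-ins, not actual DHS systems.

class WatchList:
    def __init__(self, entries):
        self.entries = set(entries)

    def hit(self, biographic_key):
        return biographic_key in self.entries


class Ident:
    """Stand-in for the Automated Biometric Identification System (IDENT)."""
    def __init__(self):
        self.records = {}  # fingerscans -> stored biographic data and photo

    def lookup(self, fingerscans):
        return self.records.get(fingerscans)

    def enroll(self, fingerscans, biographic_key, photo):
        self.records[fingerscans] = {"bio": biographic_key, "photo": photo}


def secondary_inspection(biographic_key, fingerscans, photo, ident, watch_list):
    # 1. Biographic data from the scanned travel document are checked
    #    against watch-list databases.
    if watch_list.hit(biographic_key):
        return "watch-list hit: refer for further action"

    # 2. The two index fingerprints are compared against IDENT.
    record = ident.lookup(fingerscans)
    if record is None:
        # First-time enrollee (for example, a visa-waiver traveler):
        # capture biographic data, fingerscans, and a digital photograph.
        ident.enroll(fingerscans, biographic_key, photo)
    # Otherwise the prints are matched against the record on file, and the
    # officer visually compares the traveler to the stored photograph (not modeled).

    # 3. If the officer deems the visitor admissible, an I-94 (or I-94W for
    #    visa-waiver travelers) arrival/departure form is issued.
    return "admissible: issue I-94 / I-94W"


# Minimal usage example with invented data.
ident, wl = Ident(), WatchList({"LISTED TRAVELER 1960-01-01"})
print(secondary_inspection("J. TRAVELER 1975-05-05", ("L-index", "R-index"), b"photo", ident, wl))
print(secondary_inspection("LISTED TRAVELER 1960-01-01", ("L2", "R2"), b"photo", ident, wl))
```

The point of the sketch is only the ordering of the checks: biographic watch-list query first, then biometric match or enrollment, then issuance of the I-94; it is not a description of any particular system's mechanics.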
cities across from Mexican cities, such as Laredo and El Paso, Texas. The volume of visitor traffic at these POEs varies widely, with the busiest four POEs characterized by CBP as San Ysidro, Calexico, and Otay Mesa, California, and Bridge of the Americas in El Paso, Texas. Appendix IV lists the 20 busiest land POEs, based on the number of individuals in vehicles and pedestrian traffic recorded entering the country through POEs in fiscal year 2005. From a facilities standpoint, land POEs vary substantially in building type and size (square footage) as shown in Figures 4a, 4b, and 4c. DHS has installed US-VISIT biometric entry capability at nearly all land POEs consistent with statutory deadlines, but faces challenges identifying and monitoring the operational impacts on POE facilities. CBP officials at the 21 land POEs we visited told us that US-VISIT has generally enhanced their ability to process visitors subject to US-VISIT by giving them the ability to conduct biometric checks and by automating the issuance of the visitor I-94 arrival/departure form. DHS plans to introduce changes and enhancements to US-VISIT at land POEs intended to bolster border security, but deploying them poses potential operational challenges to land POE facilities that are known by DHS to be space-constrained. US-VISIT's efforts to evaluate the impact of US-VISIT on land POE facilities thus far raise questions about whether sufficient management controls exist to ensure that additional operational impacts, such as processing delays or further space constraints, will be anticipated, identified, and appropriately addressed and resolved. In December 2005, DHS officials announced that US-VISIT biometric entry capability had been installed at land POEs in conformance with statutory mandates and Increments 2B and 3 of DHS's US-VISIT schedule. Deployment at the 50 busiest land POEs was completed by December 31, 2004, and at all but 2 of the other land POEs where DHS determined the program should operate by December 31, 2005, as required by law. Our review of US-VISIT records and discussions with US-VISIT program officials indicated that DHS installed US-VISIT biometric entry capability at 154 of 170 land POEs. (App. V lists all land POEs where US-VISIT has been installed.) With regard to 14 of the 16 POEs where US-VISIT was not installed, CBP and US-VISIT program office officials told us there was no operational need for US-VISIT because visitors who are required to be processed into US-VISIT are, by regulation, not authorized to enter the United States at these locations. Generally, these POEs are small facilities in remote areas. At 2 other POEs, US-VISIT needs to be installed in order to achieve full implementation as required by law, but both of these present significant challenges to installation of US-VISIT. These POEs do not currently have access to appropriate communication transmission lines to operate US-VISIT. CBP officials told us that, given this constraint, they determined that they could continue to operate as before. Thus, CBP officers at these locations process foreign visitors manually. US-VISIT program officials reported and available records showed that equipment for US-VISIT entry capability was installed with minimal construction at the 154 land POEs. At the 21 land POEs we visited, we observed that US-VISIT entry capability equipment had been installed with little or no change to facilities.
For example, at the Detroit-Windsor tunnel and the Detroit Ambassador Bridge POEs in Detroit, Michigan, officials confirmed that no additional computer workstations were required to be installed; at the Blaine-Peace Arch POE at Blaine, Washington, electrical capacity was upgraded to accommodate US-VISIT computer needs. In general, our review of reports prepared for each of these POEs indicated that DHS upgraded existing or added new computer workstations and printers in the secondary inspections areas of these facilities (the area where US-VISIT enrollees are processed); installed digital cameras to photograph those to be processed in US-VISIT; installed two-fingerprint scanners that digitally record fingerprints; and installed electronic card readers for detecting data embedded in machine-readable passports and visas. According to US-VISIT officials, funding for installing US-VISIT entry equipment nationwide was approximately $16 million—about 9 percent of the $182 million budgeted for US-VISIT deployment at land ports between fiscal year 2003 and fiscal year 2005. Officials reported that the remaining funds were allocated to computer network infrastructure (about 72 percent) and design and development, network engineering, fingerscan devices, and public awareness and outreach (about 19 percent). During our site visits, CBP officials at all 21 facilities told us that having US-VISIT biometric entry capability generally improved their ability to process visitors required to enroll in US-VISIT because it provided them additional assurance that visitors are who they say they are and automated the paperwork associated with processing the I-94 arrival/departure form. For example, with US-VISIT, the ability to scan a visitor’s passport or other travel document enables the computer at the inspection site to capture basic biographic information and automatically print it on the I-94 form; prior to US-VISIT deployment, the I-94 was filled in manually by the CBP officer or the visitor. DHS plans to introduce changes and enhancements to US-VISIT at land POEs that are designed to further bolster CBP’s ability to verify that individuals attempting to enter the country are who they say they are. While these changes may further aid border security, deploying them poses potential challenges to land POE facilities where US-VISIT operates and where millions of visitors are processed annually. Our site visits, interviews with US-VISIT and CBP officials, and the work of others suggest that both before and after US-VISIT entry capability was installed at land POEs, these facilities faced a number of challenges—operational and physical—including space constraints complicated by the logistics of processing high volumes of visitors and associated traffic congestion. With respect to operational challenges at land POE facilities, we reported in November 2002—more than 2 years before US-VISIT entry capability was installed at the 50 busiest land POEs—that busy land POEs were experiencing 2- to 3-hour delays in processing visitors and that any lengthening of the entry process could affect visitors significantly, through additional wait times. 
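Because per-visitor inspection time multiplies across very high crossing volumes, even a modest lengthening of the entry process can translate into substantial added workload at a busy port. The arithmetic below is a hypothetical, back-of-the-envelope illustration of that point; the traffic volume, lane count, and added seconds are invented and are not measurements from any POE.

```python
def added_work_minutes_per_lane_hour(vehicles_per_hour, lanes, extra_seconds_per_vehicle):
    """Extra inspection time that accumulates in each lane over one hour
    when every vehicle takes slightly longer to process (illustrative only)."""
    vehicles_per_lane = vehicles_per_hour / lanes
    return vehicles_per_lane * extra_seconds_per_vehicle / 60.0


# Hypothetical busy crossing: 2,400 vehicles per hour spread over 24 primary lanes.
# Adding 15 seconds per vehicle creates roughly 25 minutes of extra work per
# lane each hour, which shows up as growing queues unless capacity is added.
extra = added_work_minutes_per_lane_hour(vehicles_per_hour=2400, lanes=24,
                                         extra_seconds_per_vehicle=15)
print(f"about {extra:.0f} extra minutes of inspection work per lane-hour")
```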
While we cannot generalize about the impact US-VISIT has had on processing time at all land POEs, at one of the busiest land POEs we visited—San Ysidro, California, where more than 41 million visitors entering the country in 2005 were processed—CBP officials told us that, although they had not measured differences in processing times before and after US-VISIT was installed, the steps required to process US-VISIT visitors had added to the total time needed to process all visitors entering through the port. As a result, CBP officials told us that they must occasionally direct visitors arriving at peak times, such as holidays, to leave and return later in the day because there was no room for them to wait. In this case, US-VISIT had an effect on both visitor processing times and on the capacity of the facility to physically accommodate pedestrian and vehicular traffic. A similar type of operational problem, reflecting how complex visitor processing activities can become at these facilities, was reported by a contractor retained by DHS to study wait times associated with the I-94 issuance process at another busy POE, Nogales-DeConcini in Arizona. The study, which examined wait times for 3 separate time periods over a 3-month period in the summer of 2005, found that wait times varied by day (ranging from about 3½ minutes to almost 7 minutes across the time periods studied) and were more a function of the number of people waiting for an I-94 than of the time needed to process each individual under US-VISIT. The contractor noted that the group size, wait time, and processing all affected the dynamics of the secondary-processing area or room, which measured approximately 40 feet by 50 feet. During one day of the study, the contractor noted that the secondary processing room became crowded, straining processing capacity. The contractor stated that this occurred because some of the individuals waiting to obtain I-94s were students or seasonal workers who required checks that included phone calls to verify their visa status. The contractor concluded that US-VISIT provided an advantage over manual I-94 processing because the processing was ultimately more efficient. Nevertheless, the extent to which these problems occur is unknown because US-VISIT has not performed comparable studies at other locations. DHS has long been aware of space constraints and other capacity issues at land POE facilities. A task force report developed in response to the Immigration and Naturalization Service Data Management Improvement Act of 2000 found that 117 of 166 land POEs operating at that time (about 70 percent) had three-fourths or less of the required space. The US-VISIT Program Office subsequently confirmed that land POEs had traffic flow problems (i.e., lack of space, insufficient roadways, and poor access to facilities) and that many were aging and undersized; the majority of land POEs were constructed before 1970, when the volume of border crossings was not as great as it is now. Our work for this report indicates that such problems persist, though we cannot generalize to all facilities.
For example, at the Nogales-Morley Gate POE in Arizona, where up to 6,000 visitors are processed daily (and up to 10,000 on holidays), US-VISIT equipment was installed, but the system is not used there because CBP determined that it could not accommodate US-VISIT visitors because of concerns about CBP’s ability to carry out the process in a constrained space while thousands of other people not subject to US-VISIT processing already transit through the facility daily. Thus, if a visitor is to be processed into US-VISIT from Morley Gate, that person is directed to return to Mexico (a few feet away) and to walk the approximately 100 yards to the Nogales-DeConcini POE facility, which has the capability to handle secondary inspections of this kind. Figure 5 shows the Nogales- Morley Gate POE building—the small windowed structure on the right is the processing site. CBP officials at three other land POEs on the southwest border also told us that space constraints were a factor in their ability to efficiently process those subject to US-VISIT. Specifically, at the POEs at Los Tomates, Gateway, and Brownsville/Matamoros, Texas, CBP officials told us that US-VISIT had made I-94 processing more efficient, but travelers continued to experience delays of up to 2 hours on peak holiday weekends as they had before US-VISIT was installed. Officials at these facilities told us that they believe they could alleviate this problem if the facility had the space to install more workstations capable of operating US-VISIT entry capability. According to CBP officials, CBP has begun to examine the condition of each facility with the intent of developing a list of border station construction and modification needs and plans to prioritize construction projects based on need. In the meantime, CBP and US-VISIT officials told us that they have taken steps to address problems operating US-VISIT when space constraints are an issue. For example, at the POE in Highgate Springs, Vermont, CBP officials told us that US-VISIT computers and those needed to process commercial truck drivers and their cargoes were competing for space at the interior counter area of the building. Following our visit, we were told that the POE had adjusted its space allocation inside the POE building so that there are now five workstations for US- VISIT and other noncommercial visitor processing, one of which can do both. According to the POE assistant area port director, the POE also extended the hours during which truck drivers can be processed in a separate building designed entirely for processing them and their cargoes, in order to relieve the space pressures in the main building that occur during the high-volume tourist summer season. US-VISIT and CBP officials reported that they have taken other steps to try to minimize any problems that may arise integrating US-VISIT entry capability operations with other CBP operations. For example, to help ensure that US-VISIT does not have an adverse impact on CBP’s operations at ports of entry, US-VISIT and CBP established a liaison office in June 2005, involving supervisory managers detailed from various CBP offices. The liaison officers worked with US-VISIT staff to overcome operational issues at POEs; review plans; develop and deliver training; set up call sites during busy holiday periods to provide support to POEs needing assistance; and work through technology problems. 
A CBP official told us that he believes both US-VISIT and CBP have been successful in helping land POEs overcome problems as they arise (such as those that might occur operating new technology at space constrained facilities). The CBP officers detailed to the liaison office have since returned to their original duty stations. According to CBP officials, CBP has an open invitation to re-initiate the liaison office at any time. While past challenges with facilities are well known to US-VISIT and CBP officials and efforts have been made to address them, it is not clear whether US-VISIT or CBP is prepared to anticipate additional facilities challenges—challenges already acknowledged by senior US-VISIT officials—that may arise as new US-VISIT capabilities are added. The following two key initiatives, in particular, could affect operations at land POEs: 10-fingerprint scanning of US-VISIT enrollees. DHS plans to require that individuals subject to US-VISIT undergo a 10-fingerprint scan, in place of the current 2, to ensure the highest levels of accuracy in identifying people entering and exiting the country. Under this plan, US-VISIT visitors would be required to have all fingerprints scanned the first time they enroll in US-VISIT and to submit a 2-fingerprint scan during subsequent visits. A cost/benefit analysis of this capability is under way by DHS, selected components, and other agencies, with an anticipated transition period (from the 2- to 10-fingerprint scan requirement) taking place later this year and next. In January 2006, the former Director of US-VISIT testified before the Senate Appropriations Subcommittee on Homeland Security that in order to introduce a 10- fingerprint scan capability at land POEs and other locations, DHS would need a 6-to-8-month period to develop the capability and additional time to introduce initial operating capability. The former Director testified that unresolved technical challenges create the potential for a significant increase in the length of time needed to process individuals subject to US-VISIT at POEs once the 10-fingerprint requirement is in place. In commenting on this report, DHS noted that US-VISIT has been working with industry to speed up processing time and reduce the size of 10-print capture devices to “eliminate or significantly reduce the impact of deploying 10-print scanning.” As noted earlier, our past work has shown that any lengthening in the process of entering the United States at the busiest POEs could inconvenience travelers and result in fewer visits to the United States or lost business to the nation. Electronic passport readers for Visa Waiver Program travelers. All Visa Waiver Program travelers with passports issued after October 26, 2005 must have passports that contain a digital photograph printed in the document; passports issued to visa waiver travelers after October 26, 2006 must have integrated circuit chips, known as electronic passports, which are also called “e-passports.” (The Visa Waiver Program allows travelers from certain countries to gain entry to the United States without a visa.) These e-passports are to contain biographic and biometric information that can be read by an e-passport reader or scanner, a device which electronically reads or scans the information embedded in the e-passport at close proximity, about 4 inches to the reader. 
According to DHS, all POEs must have the ability to compare and authenticate e-passports as well as visas and other travel and entry documents issued to foreign nationals by DHS and the Department of State. Earlier this year, DHS announced it had successfully tested e-passports and e-passport scanners. A US-VISIT Program Office official told us that deployment of these scanners is moving toward implementation at POEs located at 34 selected international airports where about 97 percent of the Visa Waiver Program travelers enter the country. The official said that e-passport readers will not initially be installed at land POEs—which process a small percentage of visa waiver travelers—and there is no timeline for deploying the scanners at land POEs, although there are plans to do so at some point. CBP's Director of Automated Programs in the Office of Field Operations told us that e-passport readers and the database used to process e-passport information do not operate as fast as current processes at land POEs and thus could cause additional delays, especially at POEs experiencing processing backlogs and wait times, such as San Ysidro, California, and Nogales-Mariposa, Arizona. Given the potential impact that enhancements to US-VISIT could have both on visitor processing overall and on land POE facilities, it is important for US-VISIT and CBP to be able to gauge how new changes associated with US-VISIT may affect operations. However, our past work showed that US-VISIT had not taken all needed steps to help ensure that US-VISIT entry capability operates as intended because the approaches used to gauge or anticipate the impact of US-VISIT operations on land POE facilities were limited. Specifically, in 2005, in an effort to evaluate the impact of US-VISIT on the busiest land POEs, DHS completed evaluations of the time needed to process and issue the I-94 arrival/departure form at 5 POEs. For the study, DHS examined the I-94 process before and after US-VISIT was installed at five land POEs at three locations (Port Huron, Michigan; Douglas, Arizona; and Laredo, Texas). Based on data collected from these 5 POEs, US-VISIT officials concluded that no additional staff or facility modifications were needed at other POEs in order to accommodate US-VISIT. We reported in February 2006 that the scope of this evaluation was too limited to determine potential operational impacts on POEs. We reported three limitations, in particular: (1) that the evaluations did not take into account the impact of US-VISIT on workforce requirements or facility needs because the evaluations focused solely on I-94 processing time; (2) that the locations selected were chosen in part because they already had sufficient staff to support a US-VISIT pilot test; and (3) that US-VISIT officials did not base their evaluation of I-94 processing times on a constant basis before and after deployment of US-VISIT—that is, pre-deployment sites used fewer computer workstations to process travelers than did sites studied after deployment. We recommended that DHS explore alternative means of obtaining a full understanding of the impact of US-VISIT on land POEs, including its impact on workforce levels and facilities, and that DHS survey POE sites that had not been included in its original assessment. US-VISIT responded that wait times at land POEs were already known and that it would conduct operational assessments at POEs as new projects came online.
However, apart from a study conducted at one POE facility by a DHS contractor in August 2005 (cited above), US-VISIT has not provided documentation on any additional evaluations conducted that would provide additional insights about the effect of US-VISIT on land POE operations, including wait times. We recognize that it may not be cost-effective for US-VISIT or CBP to conduct a formal assessment of the impact US-VISIT has on each land POE now that the entry capability has been installed or of all facilities once new enhancements are introduced. Nevertheless, the assessment methodology US-VISIT has used in the past—which focused on measuring changes in I-94 processing times—raises questions about how the agency will assess the impact that the transition from 2- to 10-fingerprint scanning may have on land POE operations. That is, if US-VISIT uses the same methodology and focuses on the changes in processing time, rather than on the overall impact on operations, including facilities, staffing, and support logistics, the results will have the same limitations we highlighted in our earlier study. Our February 2006 recommendation would also be applicable to enhancements that have the potential to negatively affect operations. US-VISIT and CBP have management controls in place to alert them to operational problems as they occur, but these controls did not always work to ensure that US-VISIT operates as intended. Specifically, US-VISIT and CBP officials had not been made aware of computer processing problems that affected operations, in particular, until we brought them to their attention, partly because these problems were not always reported. These computer processing problems have the potential to not only inconvenience travelers because of the increased time needed to complete the inspection process, but to compromise security, particularly if CBP officers are unable to perform biometric checks—one of the critical reasons US-VISIT was installed at POEs. Our standards for internal control in the federal government state that it is important for agencies to provide reasonable assurance that they can achieve effective and efficient operations. This includes establishing and maintaining a control environment that sets a positive and supportive attitude toward control activities that are designed to help ensure that management’s directives are carried out. Control activities include reviewing and monitoring agency operations at the functional level (i.e., at land POEs) to compare operational performance with planned or expected results and to ensure that controls described in policies and procedures are actually applied and applied properly, and having relevant, reliable, and timely communications to ensure that information flows down, across, and up the organization thereby helping program managers carry out their responsibilities and providing assurance that timely action is taken on implementation problems or information that requires follow-up. Our site visit interviews suggest that current monitoring and control activities were not sufficient to ensure that US-VISIT performs in accordance with its security mission and objectives. For example, at 12 of the 21 land POEs we visited, computer-processing problems arose that, according to CBP officials at those locations, had an impact on processing times and traveler delays. 
Generally, officials at these 12 sites said that computer problems occurred with varying frequency and duration; some said that computers were at times slow or froze up during certain times of the day, while others said that problems were sporadic and they could not ascribe them to a particular time of the day. None of the officials we interviewed had formally assessed the impact of computer slowdowns or freezes on visitors and visitor wait times, but nonetheless cited computer problems as a cause of visitor delays. In November 2005, we notified a US-VISIT program official in headquarters that we had heard about computer processing problems at some of the POEs we had visited. The official told us that US-VISIT had not been aware of these problems and said that, as a result of our work, CBP had been contacted to investigate the problem. In June 2006, a CBP official responsible for information technology at CBP's data center told us that POEs had experienced slowdowns associated with certain US-VISIT data queries. The CBP official told us that since the computer processing problems were identified and resolved, performance had greatly improved. We did not verify whether the actions taken fully resolved these problems. The report prepared by the DHS contractor cited above described similar problems: "…on the morning of Thursday, June 23, the computer systems used to perform secondary inspections became very slow, impacting the issuance of I-94 and enrollment in US-VISIT. The staff had to revert to using the paper I-94s, which visitors had to fill out by hand..." "As happened during the study, the computer systems were unavailable for a period of time. This occurred on Tuesday from 1:00 to 2:00 p.m. Port officials decided to revert to the manual process because the network had become very slow and the queue was growing. CBP officers told … researchers that it was taking up to twenty minutes to receive responses to queries...." In an undated memorandum commenting on the contractor's report, US-VISIT's Director of Mission Operations expressed concern about the contractor's discussion of computer "downtime" as a factor impacting US-VISIT processing times. He stated that these problems can be caused by a variety of factors, including factors related to I-94 processing, and that capturing biometric information "is only rarely responsible for the inability to complete the process." Based on our work, it is unclear what analysis US-VISIT had done to make this determination. US-VISIT officials told us that various controls are in place to alert them to problems as they occur, but the lack of awareness about computer-processing problems raises questions about whether these controls are working as intended. US-VISIT officials told us that it is their position that once US-VISIT entry capability equipment was installed and operating, CBP became responsible for identifying problems and notifying US-VISIT when US-VISIT-related problems occurred so that US-VISIT can work with CBP to resolve them. The officials stated that computer problems can be attributable to other processes and systems not related to US-VISIT, which are not the US-VISIT Program Office's responsibility. In addition, the Acting Director of US-VISIT noted that there are mechanisms in place to help CBP and US-VISIT identify problems. For example, US-VISIT officials told us that US-VISIT and CBP headquarters officials meet regularly to discuss issues associated with US-VISIT implementation and CBP maintains a help desk at its Virginia data center to resolve technology problems raised by CBP field officials.
Regarding the latter, the Acting Director noted that if POE officials do not report problems, there is nothing CBP and US-VISIT can do to resolve them. During our review, we noted that CBP officers are required—in training and as part of standard operating procedures—to report problems with US-VISIT technology to the CBP help desk. Nevertheless, CBP officials at 9 of the 12 sites we visited where computer processing problems were identified said they did not always use the help desk to report or resolve computer problems (and thereby generate a record of the problems). Officials at 5 of the 9 sites told us they temporarily resolved the problem by turning off and restarting the computers. Although US-VISIT and CBP have some controls in place to help them identify and address problems like those discussed above, these controls may not have been implemented consistently or may not be sufficient to ensure that US-VISIT operates as intended because officials did not always alert CBP and US-VISIT program managers to the fact that problems were occurring that adversely affected operations. It is important that US-VISIT and CBP managers are alerted to problems as they occur to ensure continuity of operations consistent with US-VISIT's goal of providing security to U.S. citizens and travelers. Moreover, in light of the fact that US-VISIT plans to enhance security through additional technology investments and that it may be challenging to deploy and operate at facilities that are already known to be aging and undersized, it is incumbent upon the US-VISIT program office to play a continuing and proactive role in the management control structure. Our internal control standards also call for agencies to establish performance measures and indicators throughout the organization so that actual performance can be compared to expected results. The US-VISIT program office has established and implemented performance measures for fiscal years 2005 and 2006 that are designed to gauge performance of various aspects of US-VISIT covering a variety of areas, but these measures do not gauge the performance of US-VISIT entry capabilities at land POEs. For example, according to a July 2006 draft report prepared by the US-VISIT program office, US-VISIT has begun to measure the ratio of adverse actions (defined as decisions to deny entry into the country) to total biometric watch-list "hits" when visitors are processed at ports of entry. According to US-VISIT, this measure seeks to help CBP focus its inspection activities on preventing potential known or suspected criminals or terrorists from entering the country. US-VISIT reported that it had not established a baseline or target for this measure in fiscal year 2005. However, according to US-VISIT, CBP officers at all POEs combined denied entrance to 30 percent of persons whose biometric information appeared on a watch list during fiscal year 2005 (about 617 of the 2,059 watch list "hits"). US-VISIT established a target for this measure during fiscal year 2006 of 33 percent. Another measure is designed to gauge the wait time incurred by a specific US-VISIT activity at all air, land, and sea POEs, namely the average response time to deliver results on biometric watch list queries for finger scans. (This measure does not gauge other US-VISIT-related activities such as scanning the visa or passport, taking and processing a digital photograph, or printing an I-94.)
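The two measures just described—the share of biometric watch-list hits that result in a denial of entry, and the average response time for biometric watch-list queries—are simple ratios and averages. The sketch below shows how they might be computed; the query timings are invented, and only the fiscal year 2005 adverse-action figures (about 617 denials out of 2,059 hits) come from the reported results.

```python
def adverse_action_ratio(denials, watch_list_hits):
    """Share of biometric watch-list hits that resulted in a denial of entry."""
    return denials / watch_list_hits


def mean_response_seconds(query_times):
    """Average time to return results on biometric watch-list queries."""
    return sum(query_times) / len(query_times)


# Fiscal year 2005 figures reported for all POEs combined:
# about 617 denials out of 2,059 watch-list hits, or roughly 30 percent
# (the fiscal year 2006 target was 33 percent).
print(f"FY2005 adverse-action ratio: {adverse_action_ratio(617, 2059):.0%}")

# Invented query timings, in seconds, checked against the program's
# 10-second response-time goal discussed below.
sample_times = [6.2, 7.9, 5.4, 8.8, 7.1]
avg = mean_response_seconds(sample_times)
print(f"average response time: {avg:.1f} seconds (goal: 10 seconds)")
```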
To ensure that wait times are not increased substantially due to additional US-VISIT capabilities at POEs, US-VISIT has established a goal of 10 seconds and reported that, since October 2004, US-VISIT has been able to maintain, on average, less than an 8-second response time at POEs at which US-VISIT had been installed. These and other existing measures of certain key aspects of program performance with respect to both security and efficiency can be useful in analyzing trends and measuring results against planned or expected results. However, because there are operational and facility differences among air, sea, and land POEs, it is important to be able to measure and distinguish differences—one would not expect baseline or target measures to be the same across these environments. At air and sea ports, visitors are processed in primary inspection in a controlled environment and CBP officers are able to prescreen visitors using passenger manifests, which are transmitted to CBP while passengers are en route to the POE. By contrast, at land POEs, visitors arrive on foot or in a vehicle and CBP officers refer them to secondary inspection for US-VISIT processing without the benefit of a manifest and based on the information available to officers at the point of initial contact—a process substantially different from that used at air and sea ports. The measures used in August 2006 aggregated baselines and targets for all POEs and did not distinguish among them with regard to air, land, and sea POEs. Without additional performance measures to more fully gauge operational impacts of US-VISIT on land POEs, CBP and US-VISIT may not be well equipped to identify problems, trends, and areas needing improvements now and as additional US-VISIT entry capabilities, such as 10-finger scans, are introduced. Consistent with our past work, we believe such measures could help DHS identify and quantify problems, evaluate alternatives, allocate resources, track progress, and learn from any mistakes that may have been made while deploying and operating US-VISIT at land POEs. While federal laws require the creation of a US-VISIT exit capability using biometric verification, the US-VISIT Program Office concluded that implementing a biometrically based exit-recording system like that used to record visitors entering the country would require additional staff and new infrastructure (such as buildings and roadways) that would be prohibitively costly, would likely produce major traffic congestion in exit lanes at the busier land POEs, and could have adverse impacts on trade and commerce. Although current technology does not exist to enable biometric verification of those leaving the country without major infrastructural changes, US-VISIT officials believe technological advances over the next 5 to 10 years will enable them to record who is leaving the country using biometrics without requiring travelers to stop at a facility, thereby minimizing the need for major infrastructure changes. In the interim, US-VISIT is testing an alternative nonbiometric technology for recording visitors as they exit the country, in which electronic tags containing a numeric identifier associated with each visitor are embedded in I-94 forms. US-VISIT's own analysis of this technology and our analysis and that of others have identified numerous performance and reliability problems with this solution, including the inability of the nonbiometric solution to ensure that the person exiting the country is the same person who entered.
US-VISIT has taken corrective actions and testing is still ongoing, but uncertainties remain about how US-VISIT will use technology in the future to meet biometric exit requirements. These uncertainties reflect the fact that DHS has not met a June 2005 statutory requirement to submit a report to the Congress that describes (1) the status of biometric exit data systems already in use at POEs and (2) the manner in which US-VISIT is to meet the goal of a comprehensive screening system, with both entry and exit biometric capability. Federal laws require the creation of a US-VISIT exit capability using biometric verification methods to ensure that the identity of visitors leaving the country can be matched biometrically against their entry records. However, according to officials at the US-VISIT program office and CBP and US-VISIT program documentation, there are interrelated logistical, technological, and infrastructure constraints that have precluded DHS from achieving this mandate, and there are cost factors related to the feasibility of implementation of such a solution. The major constraint to performing biometric verification upon exit at this time, in the US-VISIT Program Office’s view, is that the only proven technology available would necessitate mirroring the processes currently in use for US-VISIT at entry. A mirror-image system for exit would, like entry, require CBP officers at land POEs to examine the travel documents of those leaving the country, take fingerprints, compare visitors’ facial features to photographs, and, if questions about identity arise, direct the departing visitor to secondary inspection for additional questioning. These steps would be carried out for exiting pedestrians as well as for persons exiting in vehicles. The US-VISIT Program Office concluded in an internal January 2005 report assessing alternatives to biometric exit that the mirror-imaging solution was “an infeasible alternative for numerous reasons, including but not limited to, the additional staffing demands, new infrastructure requirements, and potential trade and commerce impacts.” US-VISIT officials told us that they anticipated that a biometric exit process mirroring that used for entry could result in delays at land POEs with heavy daily volumes of visitors. And they stated that in order to implement a mirror-image biometric exit capability, additional lanes for exiting vehicles and additional inspection booths and staff would be needed, though they have not determined precisely how many. According to these officials, it is unclear how new traffic lanes and new facilities could be built at land POEs where space constraints already exist, such as those in congested urban areas. (For example, San Ysidro, California, currently has 24 entry lanes, each with its own staffed booth and 6 unstaffed exit lanes. Thus, if full biometric exit capability were implemented using a mirror image approach, San Ysidro’s current capacity of 6 exit lanes would have to be expanded to 24 exit lanes.) As shown in figure 6, based on observations during our site visit to the San Ysidro POE, the facility is surrounded by dense urban infrastructure, leaving little, if any, room to expand in place. 
Some of the 24 entry lanes for vehicle traffic heading northwards from Mexico into the United States appear in the bottom left portion of the photograph, where vehicles are shown waiting to approach primary inspection at the facility; the six exit lanes (traffic towards Mexico), which do not have fixed inspection facilities, are at the upper left. Other POE facilities are similarly space-constrained. At the POEs at Nogales-DeConcini, Arizona, for example, we observed that the facility is bordered by railroad tracks, a parking lot, and industrial or commercial buildings. In addition, CBP has identified space constraints at some rural POEs. For example, the Thousand Islands Bridge POE at Alexandria Bay, New York, is situated in what POE officials described as a “geological bowl,” with tall rock outcroppings potentially hindering the ability to expand facilities at the current location. Officials told us that in order to accommodate existing and anticipated traffic volume upon entry, they are in the early stages of planning to build an entirely new POE on a hill about a half-mile south of the present facility. CBP officials at the Blaine-Peace Arch POE in Washington state said that CBP also is considering whether to relocate and expand the POE facility, within the next 5-to-10 years, to better handle existing and projected traffic volume. According to the US- VISIT program officials, none of the plans for any expanded, renovated, or relocated POE include a mirror-image addition of exit lanes or facilities comparable to those existing for entry. In 2003, the US-VISIT Program Office estimated that it would cost approximately $3 billion to implement US-VISIT entry and exit capability at land POEs where US-VISIT was likely to be installed and that such an effort would have a major impact on facility infrastructure at land POEs. We did not assess the reliability of the 2003 estimate. The cost estimate did not separately break out costs for entry and exit construction, but did factor in the cost for building additional exit vehicle lanes and booths as well as buildings and other infrastructure that would be required to accommodate a mirror imaging at exit of the capabilities required for entry processing. US-VISIT program officials told us that they provided this estimate to congressional staff during a briefing, but that the reaction to this projected cost was negative and that they therefore did not move ahead with this option. No subsequent cost estimate updates have been prepared, and DHS’s annual budget requests have not included funds to build the infrastructure that would be associated with the required facilities. US-VISIT officials stated that they believe that technological advances over the next 5-to-10 years will make it possible to utilize alternative technologies that provide biometric verification of persons exiting the country without major changes to facility infrastructure and without requiring those exiting to stop and/or exit their vehicles, thereby precluding traffic backup, congestion, and resulting delays. US-VISIT’s report assessing biometric alternatives noted that although limitations in technology currently preclude the use of biometric identification because visitors would have to be stopped, the use of the as-yet undeveloped biometric verification technology supports the long-term vision of the US- VISIT program. However, no such technology or device currently exists that would not have a major impact on facilities. 
The prospects for its development, manufacture, deployment, and reliable utilization are currently uncertain or unknown, although a prototype device that would permit a fingerprint to be read remotely without requiring the visitor to come to a full stop is under development. While logistical, technical, and cost constraints may prevent implementation of a biometrically based exit technology for US-VISIT at this time, it is important to note that there currently is not a legislatively mandated date for implementation of such a solution. The Intelligence Reform and Terrorism Prevention Act of 2004 requires US-VISIT to collect biometric exit data from all individuals who are required to provide biometric entry data. The act did not, however, set a deadline for meeting this requirement. Although US-VISIT had set a December 2007 deadline for implementing exit capability at the 50 busiest land POEs, US-VISIT has since determined that implementing exit capability by this date is no longer feasible, and a new date for doing so has not been set. US-VISIT evaluated 12 different exit-recording technologies against six evaluation criteria, which included, among others, that the technology create no additional traffic congestion, be convenient to the visitor, and be commercially available. Some of the technologies evaluated incorporated biometric features—scanning the retina or iris, and a facial recognition system. Because the biometric solutions considered would have required an exiting visitor to slow down, stop, or possibly enter a POE facility, they were rejected. Other alternatives, such as the use of a global positioning system, were rejected because they transmit signals that could facilitate surveillance of individuals, raising concerns about privacy. None of these criteria directly addressed or reflected the legislative mandate to deploy a system to record entry and exit by foreign travelers using biometric identifiers in order to ensure that persons leaving the country were those who had entered. Rather, the criteria focused on choosing a technology that would not require a major investment in facilities, would protect privacy, and would not generate large traffic backups that would inconvenience or delay both travelers and commercial carriers. Among the technologies considered for testing by the US-VISIT Program Office, the only one that met all the US-VISIT evaluation criteria was passive, automated radio frequency identification (RFID). This technology, according to US-VISIT, "best satisfied all the assessment criteria." RFID is an automated data-capture technology that can be used to electronically store information contained on a very small tag that can be embedded in a document (or some other physical item). This information can then be identified, and recorded as having been identified, by RFID readers that are connected to computer databases. For purposes of US-VISIT's testing of the nonbiometric technology, the RFID tag is embedded in a modified I-94 arrival/departure form, called an I-94A. Each RFID tag has only a single number stored in it; privacy is protected because no information is stored on these tags other than a unique ID number that is linked to the visitor's biographic information.
To facilitate the transmission of the number from the RFID tag, a new DHS system of records, the Automated Identification Management System (AIDMS), was created to link the unique RFID tag ID number to existing information stored in the Treasury Enforcement Communications System (TECS) database, which is used by CBP to verify travel information and update traveler data. According to US-VISIT, limiting the data on the tag to a single number helps preserve the privacy of travelers; acquisition of the number would provide no meaningful information to non-authorized persons, since they would then have to access TECS to link the number to biographic data. Moreover, access to computers and their databases at land POEs is restricted to authorized personnel and involves additional protections such as passwords as well as entrance into physically restricted areas inside POE buildings. (A more detailed discussion of RFID technology and privacy issues is contained in appendix VI.) The RFID technology used in this way is considered passive because the tag cannot initiate communications. Rather, the tag responds to radio frequency emissions from an RFID reader, an electronic device that can be installed on a pole or on a steel gantry of the kind that holds highway signs over the entire width of a roadway (see figure 11), and transmits the numeric information stored on the tag back to the reader, from up to 30 feet away, according to the US-VISIT Program Office. Figure 7a shows RFID readers mounted on a metal gantry at the Thousand Islands Bridge land POE, Alexandria Bay, New York. The readers are attached to metal extensions that project out from the right side of the gantry, to record an I-94A embedded with tags that are inside the vehicles that pass underneath. RFID readers can also be installed in portals or on poles at pedestrian traffic areas to read the tag-embedded I-94As of persons leaving the country on foot. Figure 7b shows RFID readers in portals positioned on either side of pedestrian exit doors at the Blaine-Peace Arch POE in Washington State. In December 2004 and January 2005, a team of US-VISIT contractors conducted the first part of a feasibility study to test passive RFID equipment in a simulated environment, at a mock POE in Virginia. At this site, different types of vehicles, including cars, buses, and trucks, were run at different speeds to test RFID read rates. Pedestrians carrying documents with RFID tags embedded or attached were not tested. The feasibility study raised numerous issues about the reliability and performance of the RFID technology. For example, RFID readers mounted on a gantry over a roadway had difficulty detecting RFID tags that were inside vehicles with metallic tinted windows (whether the windows were open or closed). The read rate improved from about 56 percent to about 70 percent if the readers were moved to both sides of the road, rather than overhead, and if the occupants held their documents with the RFID-detectable tags up to the vehicle's side windows. The study concluded that the physical actions of the visitor had to be taken into account when obtaining a read of the I-94A and made specific recommendations to improve read rates, such as suggesting that vehicle occupants hold the I-94A up to a side window and keep multiple forms apart.
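The following short Python sketch illustrates the data design described above, in which the tag carries only an opaque number and biographic data can be reached only through an authorized back-end lookup. It is a minimal illustration only: the in-memory dictionaries standing in for the AIDMS link table and the TECS biographic store, and the function names, are hypothetical and do not represent DHS's actual systems or interfaces.

```python
# Minimal sketch (hypothetical, not DHS code) of the tag-number design described
# above: the RFID tag stores only an opaque identifier, while biographic data
# stays in a separate back-end store reachable only through an authorized lookup.
import secrets
from typing import Optional

aidms_link = {}     # stand-in for the AIDMS link table: tag number -> traveler key
tecs_records = {}   # stand-in for TECS biographic records: traveler key -> record

def issue_i94a_tag(traveler_key: str, biographic_record: dict) -> str:
    """Issue a tag-embedded I-94A: keep biographic data in the back end and
    return only a randomly generated number to be written to the tag."""
    tag_number = secrets.token_hex(8)          # opaque; carries no personal information
    aidms_link[tag_number] = traveler_key
    tecs_records[traveler_key] = biographic_record
    return tag_number

def lookup_by_tag(tag_number: str, authorized: bool) -> Optional[dict]:
    """Resolve a tag read to a biographic record, but only for authorized users."""
    if not authorized:
        return None                             # a captured tag number alone reveals nothing
    traveler_key = aidms_link.get(tag_number)
    return tecs_records.get(traveler_key) if traveler_key else None

if __name__ == "__main__":
    tag = issue_i94a_tag("T-0001", {"name": "Example Traveler", "nationality": "XX"})
    print(tag)                                  # the only value stored on the tag
    print(lookup_by_tag(tag, authorized=False)) # None: no data without back-end access
    print(lookup_by_tag(tag, authorized=True))  # full record for authorized personnel
```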
After the feasibility study, US-VISIT proceeded, as planned, with phase 1 of proof-of-concept testing for RFID at five land POEs at the northern and southern borders to determine what corrective actions, if any, should be taken to improve RFID read rates for exiting vehicles and pedestrians. This effort comprised testing for both exit and for re-entry by persons who have been issued a tag-embedded I-94A that is valid for multiple entries over several months. The RFID performance tests were conducted for one-week periods at land POEs, as follows: vehicular traffic was tested at Nogales-Mariposa and Nogales-DeConcini POEs in Nogales, Arizona; the Blaine-Pacific Highway and Blaine-Peace Arch POEs in Blaine, Washington; and Thousand Islands Bridge POE in Alexandria Bay, New York; pedestrian traffic was tested at the Nogales-Mariposa and Nogales- DeConcini POEs. For these exit tests, the US-VISIT Program Office developed critical success factor target read rates to compare them to the actual read rates obtained during the test for both pedestrians carrying an I-94A with RFID- detectable tags and for travelers in vehicles who also had an RFID- detectable I-94A with them inside the vehicles. The target exit read rates ranged from an expected success rate of 70 percent to 95 percent, based on anticipated performance under different conditions, partly as demonstrated in the earlier feasibility study, on business requirements, and on a concept of operation plan prepared for Increment 2C. In a January 2006 assessment of the test results, the US-VISIT Program Office reported that the exit read rates that occurred during the test generally fell short of the expected target rates for both pedestrians and for travelers in vehicles. For example, according to US-VISIT, at the Blaine-Pacific Highway test site, of 166 vehicles tested, RFID readers correctly identified 14 percent; the target read rate was 70 percent. Another problem that arose was that of cross-reads, in which multiple RFID readers installed on gantries or poles picked up information from the same visitor, regardless of whether the individual was entering or exiting in a vehicle or on foot. Thus, cross-reads resulted in inaccurate record- keeping. According to a January 2006 US-VISIT corrective-action report, signal-filtering equipment is to be installed to correct the problem and additional testing is to be conducted to confirm and understand the extent of the problem. The report also noted that remedying cross-reads would require changes to equipment and infrastructure on a case-by-case basis at each land POE, because each has a different physical configuration of buildings, roadways, roofs, gantries, poles, and other surfaces against which the signals can bounce and cause cross-reads. Each would therefore require a different physical solution to avoid the signal interference that triggers cross-reads. Although cost estimates or time lines have not been developed for such alterations to facilities and equipment, it is possible that having to alter the physical configuration at each land POE in some regard and then test each separately to ensure that cross-reads had been eliminated would be both time consuming and potentially costly, in terms of changes to infrastructure and equipment. We observed potential problems with the RFID exit system relating to facilities and infrastructure at some of the POEs we visited. 
At the Nogales-Mariposa POE in Nogales, Arizona, for example, we observed that RFID portals for pedestrians had been placed on the right side of the CBP POE building, on a rocky, sloping hillside, and that there was no signage directing pedestrians to walk between them, nor was a walkway installed, as shown in figure 8a. Although travelers were expected to walk between the portals, this configuration enabled pedestrians to avoid the portals altogether by walking around them or crossing the road, as shown in figure 8b. According to the US-VISIT corrective actions report, 15 percent of exiting pedestrians (both those participating in the test and those who were not) used the pathway between the two portals at the Nogales facility during a September 2005 observation period. In this same report, US-VISIT acknowledged that there was no defined pathway or infrastructure for pedestrian exit at Nogales-Mariposa, Arizona, and that only one of the three pedestrian paths was covered by the portals that had been placed there. US-VISIT reported that while the placement of the portal readers will not be changed, it is taking steps to improve the likelihood of detection with additional antennae, readers, and signage. However, there are no plans at present to modify the existing POE infrastructure on the west side of the building where the portals were installed, such as by installing a paved walkway or by constructing fencing to divert those exiting through the readers, in order to increase the chances that exiting pedestrians are detected. In commenting on this report, DHS stated that it had constructed a new primary pedestrian exit walkway parallel to the existing pedestrian entry and had installed signage, sidewalks, and a new secure gate. However, according to a CBP official at the Nogales-Mariposa POE, the newly constructed pedestrian exit walkway is on the other (east) side of the building from the pathway where the portal readers were placed and tested. During the period that US-VISIT carried out RFID exit tests at land POEs, US-VISIT also tested read rates for RFID-detectable documents carried by pedestrians or persons in vehicles who had been issued an I-94A during a prior visit to the United States, had subsequently left the country, and were intending to re-enter. (I-94s can be issued that are valid for up to 6 months for multiple re-entries into the country.) US-VISIT performed the re-entry test for documents held by persons in vehicles at the Mariposa and DeConcini POEs in Nogales, Arizona; the Blaine-Pacific Highway and Blaine-Peace Arch POEs in Washington state; and the Thousand Islands Bridge POE at Alexandria Bay, New York. For pedestrians, the re-entry test was performed at the Mariposa and DeConcini POEs in Nogales, Arizona (see tables 6a and 6b, appendix VII). US-VISIT set higher expected target read rates for the re-entry test than for exit because all persons and vehicles entering or re-entering the country must stop for questioning by CBP officers and must take travel documents out of their pockets or from inside a vehicle and show them to the officer, enhancing the likelihood that RFID-detectable documents would be detected. As expected by US-VISIT, read rates for the re-entry test for vehicles were generally higher than for exit, although the results did not meet the critical success factors initially projected by US-VISIT. Appendix VII discusses the results of RFID performance for exit and re-entry in greater detail.
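To make the read-rate comparisons concrete, the brief sketch below shows the arithmetic involved in checking an observed read rate against a critical-success-factor target. The Blaine-Pacific Highway figures (166 vehicles tested, roughly 14 percent read, a 70 percent target) are taken from the test results discussed above; the helper functions themselves are illustrative and are not drawn from US-VISIT's evaluation tooling.

```python
# Illustrative arithmetic for comparing observed RFID read rates with
# critical-success-factor targets; only the Blaine-Pacific Highway figures
# are taken from the report text, and the helper functions are hypothetical.
def read_rate(reads_detected: int, attempts: int) -> float:
    """Percentage of tagged documents detected by the readers."""
    return 100.0 * reads_detected / attempts

def shortfall(observed_pct: float, target_pct: float) -> float:
    """Percentage points by which an observed rate misses its target (0 if met)."""
    return max(0.0, target_pct - observed_pct)

if __name__ == "__main__":
    vehicles_tested = 166                      # Blaine-Pacific Highway exit test
    detected = round(vehicles_tested * 0.14)   # about 14 percent were read
    observed = read_rate(detected, vehicles_tested)
    target = 70.0                              # critical-success-factor target
    print(f"Observed: {observed:.0f}% of {vehicles_tested} vehicles "
          f"(target {target:.0f}%, shortfall {shortfall(observed, target):.0f} points)")
```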
Beyond RFID operations issues that affect facilities, our work and that of the DHS Privacy Office have identified other performance and reliability problems related to passive RFID. In June 2005, we testified before the Subcommittee on Economic Security, Infrastructure Protection, and Cybersecurity of the House Committee on Homeland Security on similar reliability problems with RFID. We noted, for example, that when an object close to the reader or tag interferes with the radio waves, read-rate accuracy decreases, and that environmental conditions, such as temperature and humidity, can make tags unreadable. We further noted that tags read at high speeds have a significant decrease in read rates. According to US-VISIT officials, phase 2 of the RFID proof-of-concept testing, which is to expand the capabilities identified at the five phase 1 locations will, among other things, link visitor data to vehicle exit data (or re-entry, if the visitor already has an RFID- embedded I-94 form), address deficiencies noted in phase 1, and further evaluate RFID performance. At the time of our review, many uncertainties about the future of a US-VISIT exit capability remained because US-VISIT had not developed a plan to show when phase 2 of proof-of-concept testing of RFID would conclude, when an evaluation of the technology would be completed, and how US- VISIT would define success. However, even if RFID deficiencies were to be fully addressed and deadlines set, questions remain about DHS’s intentions going forward. For example, the RFID solution does not meet the congressional requirement for a biometric exit capability because the technology that has been tested cannot meet a key goal of US-VISIT—ensuring that visitors who enter the country are the same ones who leave. By design, an RFID tag embedded in an I-94 arrival/departure form cannot provide the biometric identity- matching capability that is envisioned as part of a comprehensive entry/exit border security system using biometric identifiers for tracking overstays and others entering, exiting, and re-entering the country. Specifically, the RFID tag in the I-94 form cannot be physically tied to an individual. This situation means that while a document may be detected as leaving the country, the person to whom it was issued at time of entry may be somewhere else. DHS was to have reported to Congress by June 2005 on how the agency intended to fully implement a biometric entry/exit program. As of October 2006, this plan was still under review in the Office of the Secretary, according to US-VISIT officials. According to statute, this plan is to include, among other things, a description of the manner in which the US- VISIT program meets the goals of a comprehensive entry and exit screening system—including both biometric entry and exit—and fulfills statutory obligations imposed on the program by several laws enacted between 1996 and 2002. Until such a plan is finalized and issued, DHS is not able to articulate how entry/exit concepts will fit together—including any interim nonbiometric solutions—and neither DHS nor Congress is positioned to prioritize and allocate resources for a US-VISIT exit capability or plan for the program’s future. In commenting on this report, DHS acknowledged that the interim non- biometric exit technology using RFID tags embedded in the I-94 does not meet the statutory requirement for a biometric exit capability. 
DHS stated that it used the non-biometric technology because industry was not to the point of developing a device that could satisfy US-VISIT requirements, such as not impacting traffic flows or not having safety impacts. DHS said that US-VISIT officials would perform subsequent research and industry outreach activities in an attempt to satisfy statutory requirements for a biometric exit capability. In recent years, DHS has planned or implemented a number of initiatives aimed at securing the nation’s borders. However, DHS has not defined a strategic context that shows how US-VISIT fits with other land border initiatives. As we reported in September 2003, agency programs need to properly fit within a common strategic context governing key aspects of program operations—e.g., what functions are to be performed by whom; when and where they are to be performed; what information is to be used to perform them; what rules and standards will govern the application of technology to support them; and what facility or infrastructure changes will be needed to ensure that they operate in harmony and as intended. Without a clear strategic context for US-VISIT, the risk is increased that the program will not operate with related programs and thus not cost- effectively meet mission needs. In our September 2003 report, we stated that DHS had not defined key aspects of the larger homeland security environment in which US-VISIT would need to operate. For example, certain policy and standards decisions had not been made, such as whether official travel documents would be required for all persons who enter and exit the country, including U.S. and Canadian citizens, and how many fingerprints would be collected—factors that could potentially increase inspection times and ultimately increase traveler wait times at some of the higher volume land POE facilities. To minimize the impact of these changes, we recommended that DHS clarify the context in which US-VISIT is to operate. Three years later, defining this strategic context remains a work in progress. Thus, the program’s relationships and dependencies with other closely allied initiatives and programs are still unclear. According to the US-VISIT Chief Strategist, the Program Office drafted in March 2005 a strategic plan that showed how US-VISIT would be strategically aligned with DHS’s organizational mission and also defined an overall vision for immigration and border management. According to this official, the draft plan provided for an immigration and border management enterprise that unified multiple internal departmental and other external stakeholders with common objectives, strategies, processes, and infrastructures. As of October 2006, we were told that DHS had not approved this strategic plan. This draft plan was not available to us, and it is unclear how it would provide an overarching vision and road map of how all these component elements can at this time be addressed given that critical elements of other emerging border security initiatives have yet to be finalized. For example, under the Intelligence Reform and Terrorism Prevention Act of 2004, DHS and State are to develop and implement a plan, no later than June 2009, which requires U.S. 
citizens and foreign nationals of Canada, Bermuda, and Mexico to present a passport or other document or combination of documents deemed sufficient to show identity and citizenship to enter the United States (this is currently not a requirement for these individuals entering the United States via land POEs from within the western hemisphere). This effort, known as the Western Hemisphere Travel Initiative (WHTI), was first announced in 2005, and some members of Congress and others have raised questions about agencies’ progress carrying out WHTI. In May 2006, we issued a report that provided our observations on efforts to implement WHTI along the U.S. border with Canada. We stated that DHS and State had taken some steps to carry out the Travel Initiative, but they had a long way to go to implement their proposed plans, and time was slipping by. Among other things, we found that: key decisions had yet to be made about what documents other than a passport would be acceptable when U.S. citizens and citizens of Canada enter or return to the United States—a decision critical to making decisions about how DHS is to inspect individuals entering the country, including what common facilities or infrastructure might be needed to perform these inspections at land POEs; a DHS and Department of State proposal to develop an alternative form of passport, called a PASS card, would rely on RFID technology to help DHS process U.S. citizens re-entering the country, but DHS had not made decisions involving a broad set of considerations that include (1) utilizing security features to protect personal information, (2) ensuring that proper equipment and facilities are in place to facilitate crossings at land borders, and (3) enhancing compatibility with other border crossing technology already in use. As of September 2006, DHS had still not finalized plans for changing the inspection process and using technology to process U.S. citizens and foreign nationals of Canada, Bermuda, and Mexico reentering or entering the country at land POEs. In the absence of decisions about the strategic direction of both programs, it is still unclear (1) how the technology used to facilitate border crossings under the Travel Initiative will be integrated with US-VISIT technology, if at all, and (2) how land POE facilities would have to be modified to accommodate both programs to ensure efficient inspections that do not seriously affect wait times. This raises the possibility that CBP would be faced with managing differing technology platforms and border inspection processes at high-volume land POEs facilities that, according to DHS, already face space constraints and congestion. Similarly, it is not clear how US-VISIT is to operate in relation to another emerging border security effort, the Secure Border Initiative (SBI)—a new comprehensive DHS initiative, announced last year, to secure the country’s borders and reduce illegal migration. According to DHS, as of June 2006, SBI is to focus broadly on two major themes: border control—gaining full control of the borders to prevent illegal immigration, as well as security breaches, and interior enforcement—disrupting and dismantling cross border crime into the interior of the United States while locating and removing aliens who are present in the United States in violation of law. 
Under SBI and its CBP component, called SBInet, DHS plans to use a systems approach to integrate personnel, infrastructures, technologies, and rapid response capability into a comprehensive border protection system. DHS reports that, among other things, SBInet is to encompass both the northern and southern land borders, including the Great Lakes, under a unified border control strategy whereby CBP is to focus on the interdiction of cross-border violations between the ports and at the official land POEs and funnel traffic to the land POEs. DHS has recently awarded a contract to help DHS design, build, and execute SBInet. Although DHS has published some information on various aspects of SBI and SBInet, it remains unclear how SBInet will be linked, if at all, to US-VISIT so that the two systems can share technology, infrastructure, and data across programs. For example, from a border control perspective, questions arise on whether CBP needs additional resources, facilities or facility modifications, and procedural changes at land POEs if all those who attempt to enter the country on the northern and southern border are successfully funneled to land POEs. Also, given the absence of a comprehensive entry and exit system, questions remain about what meaningful data US-VISIT may be able to provide other DHS components, such as Immigration and Customs Enforcement (ICE), to ensure that DHS can, from an interior enforcement perspective, identify and remove foreign nationals covered by US-VISIT who may have overstayed their visas. In a May 2004 report, we stated that although no firm estimates were available, the extent of overstaying is significant. We stated that most long-term overstays appeared to be motivated by economic opportunities, but a few had been identified as terrorists or involved in terrorist-related activities. Notably, some of the September 11 hijackers had overstayed their visas. We further reported that US-VISIT held promise for identifying and tracking overstays as long as it could overcome weaknesses matching visitors’ entry and exit. Developing and deploying complex technology that records the entry and exit of millions of visitors to the United States, verifies their identities to mitigate the likelihood that terrorists or criminals can enter or exit at will, and tracks persons who remain in the country longer than authorized is a worthy goal in our nation’s effort to enhance border security in a post-9/11 era. But doing so also poses significant challenges; foremost among them is striking a reasonable balance between US-VISIT’s goals of providing security to U.S. citizens and visitors while facilitating legitimate trade and travel. DHS has made considerable progress making the entry portion of the US-VISIT program at land ports of entry (POEs) operational, and border officials have clearly expressed the benefits that US-VISIT technology and biometric identification tools have afforded them. Nevertheless, US-VISIT is one in a series of ambitious border security initiatives that could take a toll on the current facilities and infrastructure in place to support the activities at land POEs, which already process a large majority (more than 75 percent) of all visitors entering the United States via legal checkpoints. Many land POEs operate out of small, aging structures that are constrained by space and that were constructed before technology and associated equipment played a prominent role in processing activities. 
Our current and past work has raised questions about whether DHS has adequately assessed how US-VISIT has affected operations at land POEs, given current constraints at facilities that routinely experience high traffic volumes and that occasionally encounter computer-processing problems. As additional US-VISIT capabilities, such as 10-fingerprint scanning, are installed at land POEs and as other border security initiatives unfold, including the Western Hemisphere Travel Initiative, it is particularly important that DHS be able to anticipate potential problems and develop solutions to minimize any operational and logistical impacts on aging and already overcrowded land POE facilities. Our earlier recommendation on this issue suggested that DHS needed to expand upon prior efforts to assess the impact of US-VISIT on busy land POEs in order to obtain a fuller understanding of the system's impact on these facilities from an operational and human capital perspective. We believe this remains an important step to take because it would help DHS establish a baseline or foundation from which to anticipate potential problems while providing a framework for developing strategies and action plans to overcome them. Although US-VISIT has said it would conduct operational assessments at POEs as new projects came online, the assessment methodology US-VISIT has used in the past, which focused on measuring changes in I-94 processing times, raised questions about how the agency will perform future assessments. In addition, because US-VISIT will likely continue to have an impact on land POE facilities as it evolves, it is important for US-VISIT and CBP officials to have sufficient management controls for identifying and reporting potential computer and other operational problems as they arise, problems that could affect the ability of US-VISIT entry capability to operate as intended. If additional delays in processing visitors were to occur, the ability of POE facilities to handle additional vehicular and pedestrian traffic could be further strained, and incidents requiring officials to turn visitors away temporarily may increase. Likewise, if disruptions to US-VISIT computer operations are not consistently and promptly reported and resolved, and if communication between CBP and US-VISIT officials about computer-related problems and other operational challenges is not effective, then a critical US-VISIT function, notably the ability to use biometric information to confirm visitors' identities through various databases, could be disrupted, as has occurred in the past. The need to avoid disruptions to biometric verification is important given that one of the primary goals of US-VISIT is to enhance the security of U.S. citizens and visitors, and in light of the substantial investment DHS has made in US-VISIT technology and equipment. US-VISIT has taken appropriate steps to develop performance measures that focus on various aspects of US-VISIT performance across air, land, and sea POEs. However, these measures do not go far enough to assess the effect of US-VISIT on POE operations, particularly at land POEs, which are operationally distinct from the air and sea POEs where US-VISIT entry has also been installed. Such measures are needed to ensure that officials can identify and address problems at land-based facilities where improvements may be needed.
With respect to DHS's effort to create an exit verification capability, developing and deploying this capability for US-VISIT at land POEs has posed a set of challenges that are distinct from those associated with entry. US-VISIT has not determined whether it can achieve, in a realistic time frame or at an acceptable cost, the legislatively mandated capability to record the exit of travelers at land POEs using biometric technology. Apart from acquiring new facilities and infrastructure at an estimated cost of billions of dollars, US-VISIT officials have acknowledged that no technology now exists to reliably record travelers' exit from the country, and to ensure that the person leaving the country is the same person who entered, without requiring travelers to stop upon exit, potentially imposing a substantial burden on travelers and commerce. US-VISIT officials stated that they believe a biometrically based solution that does not require those exiting the country to stop for processing, that minimizes the need for major facility changes, and that can be used to definitively match a visitor's entry and exit will be available in 5 to 10 years. In the interim, it remains unclear how officials plan to proceed: whether a nonbiometric alternative now being tested can provide an acceptable interim solution or whether the government ought to wait for a viable biometric solution to become available. According to statute, DHS was required to report more than a year ago on its plans for developing a comprehensive biometric entry and exit system, but DHS has yet to finalize this road map for Congress. Such a report might provide better assurance that US-VISIT can balance its goals of providing security, serving the immigration system, facilitating trade and travel, and protecting privacy at land POEs. This plan would also give DHS the opportunity to discuss the costs, benefits, barriers, and opportunities associated with various strategies for deploying biometric and nonbiometric exit capabilities and keep Congress informed of its progress overall. Until DHS finalizes such a plan, neither Congress nor DHS is likely to have sufficient information as a basis for decisions about various factors relevant to the success of US-VISIT, ranging from the funding needed for any land POE facility modifications in support of the installation of exit technology to the trade-offs associated with ensuring traveler convenience while providing verification of travelers' departure consistent with US-VISIT's national security and law enforcement goals. Finally, DHS has not articulated how US-VISIT fits strategically and operationally with other land-border security initiatives, such as the Western Hemisphere Travel Initiative and the Secure Border Initiative. Without knowing how US-VISIT is to be integrated within the larger strategic context governing DHS operations, DHS faces substantial risk that US-VISIT will not align or operate with other initiatives at land POEs and thus not cost-effectively meet mission needs. Knowing how US-VISIT is to work in harmony with these initiatives could help Congress, DHS, and others better understand what resources, tools, and investments in land POE facilities and infrastructure are needed to ensure their success, while providing critical information to help make decisions about other DHS missions.
This could include, for example, information on what funds and staffing resources ICE would need to enforce immigration laws if US-VISIT were able to provide reliable and timely information on potentially millions of persons who have overstayed the terms of their visas, some of whom may pose a threat to the nation's security. To help DHS achieve benefits commensurate with its investment in US-VISIT at land POEs and with its security goals and objectives, we are recommending that the Secretary of Homeland Security direct the US-VISIT Program Director, in collaboration with the Commissioner of CBP, to take the following two actions: improve existing management controls for identifying and reporting computer processing and other operational problems as they arise at land POEs and ensure that these controls are consistently administered; and develop performance measures for assessing the impact of US-VISIT operations specifically at land POEs. We also recommend that, as DHS finalizes the statutorily mandated report describing a comprehensive biometric entry and exit system for US-VISIT, the Secretary of Homeland Security take steps to ensure that the report includes, among other things, information on the costs, benefits, and feasibility of deploying biometric and nonbiometric exit capabilities at land POEs; a discussion of how DHS intends to move from a nonbiometric exit capability, such as the technology currently being tested, to a reliable biometric exit capability that meets statutory requirements; and a description of how DHS expects to align emerging land border security initiatives with US-VISIT and what facilities or facility modifications would be needed at land POEs to ensure that technology and processes work in harmony. We requested comments on a draft of this report from the Secretary of Homeland Security. In an October 31, 2006, letter, DHS provided written comments, which are summarized below and included in their entirety in appendix VIII. DHS generally agreed with our recommendations and stated that it needed to improve existing management controls associated with US-VISIT, develop performance measures to assess the impact of US-VISIT operations at land POEs, and ensure that the statutorily mandated report describes how DHS will move to a biometric entry and exit capability and align US-VISIT with emerging land border initiatives. DHS did not provide timelines for when it plans to take these steps, including finalizing the statutorily mandated report, which was to have been issued to the Congress in June 2005. DHS disagreed with certain aspects of, or sought clarification on, some of our findings. DHS disagreed with our finding that the US-VISIT program office did not fully consider the impact of US-VISIT on overall operations at POEs. It said that US-VISIT impacts are limited to changes in Form I-94 processing time, which it says are positive, as supported by US-VISIT evaluations. According to DHS, other factors related to capacity, staffing, and the volume of travelers are "arguably" beyond the scope of US-VISIT. We agree that the approach taken in operational assessments of the impact of US-VISIT on land POE facilities focused on changes to I-94 processing time and that a variety of factors and processes can affect traveler inspections and associated wait times at land POEs. However, as discussed in this and our February 2006 report, the assessment methodology US-VISIT has used thus far had limitations, including its sole focus on I-94 processing time.
Unanticipated problems at facilities that routinely experience high traffic volumes and occasionally encounter computer processing shortfalls raise questions about whether DHS has adequately assessed how US-VISIT has affected operations at land POEs. Although it may not be cost-effective for US-VISIT or CBP to conduct a formal assessment of the impact of US-VISIT at each land POE, it is important that DHS be positioned to anticipate potential problems and develop solutions to minimize any operational and logistical impacts on aging and already overcrowded land POE facilities. This is especially true given that DHS recognizes that the transition from 2- to 10-print digital scanning has a high likelihood of affecting port facilities. Regarding the latter, we have amended our report to clarify, consistent with DHS's comments, that US-VISIT is currently working with industry to speed up processing time and reduce the size of the 10-print capture devices to "eliminate or significantly reduce the impact of deploying 10-print scanning." DHS's efforts to work with industry highlight the need to more fully assess how US-VISIT affects land POEs so that potential problems can be identified and addressed before the readers, or any other new programs, are introduced at land POEs. As noted in our report, based on our past work, any lengthening in the process of entering the United States at the busiest land POEs could inconvenience travelers and result in fewer visits to the United States or lost business to the nation. DHS also suggested that we clarify its acknowledgement that the non-biometric technology tested did not meet the statutory requirement for a biometric exit capability. DHS stated that the non-biometric technology was used because industry has yet to develop a biometric exit device that could satisfy mission requirements such as not impacting traffic flow and not having safety impacts. We have amended our report to clarify that DHS acknowledged that the non-biometric technology would not satisfy statutory requirements and to reflect that it would perform research and industry outreach to satisfy the mandate. Nonetheless, the fact that the non-biometric exit technology used does not satisfy the congressional mandate for a biometric exit capability underscores the importance of our recommendation for DHS to clearly articulate how it plans to move from a non-biometric exit technology to a biometric exit solution. In addition, DHS suggested that we clarify that, with regard to the RFID pedestrian exit portals at the Nogales-Mariposa, Arizona, POE, it had constructed a new primary pedestrian exit walkway parallel to the existing pedestrian entry and had installed signage, sidewalks, and a new secure gate. We have amended the report to include information about the new pedestrian exit walkway. However, as we noted in our report, portals were installed on only one of the three pedestrian pathways used to exit the United States. According to a CBP official at the Nogales-Mariposa POE, the newly constructed pedestrian exit walkway is on the other side of the building from the pathway where the portal readers were placed and tested and thus would not mitigate the vulnerabilities we identified. Finally, DHS provided other comments that we considered technical in nature. We have amended our report to incorporate these clarifications, where appropriate.
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the issuance date of our original report, which, as discussed earlier, was classified For Official Use Only. At that time, we will provide copies of this report to appropriate departments and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IX. This report addresses the progress the Department of Homeland Security and U.S. Customs and Border Protection (CBP) have made in implementing the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program at existing land ports of entry (POE). Specifically, we analyzed the following issues: (1) What has the US-VISIT Program Office done to implement US-VISIT entry capabilities at land POEs, and what impact has US-VISIT had on these facilities? (2) What is the status of US-VISIT Program Office efforts to implement a US-VISIT exit capability at land POE facilities? (3) What has DHS done to define a strategic context to show how US-VISIT entry and exit capabilities at land POE facilities fit with other current and emerging border security initiatives? We performed our work at the Department of Homeland Security's US-VISIT Program Office and CBP. We also carried out work at 21 of the 154 land POEs where US-VISIT entry capability had been installed. At 3 of these 21 land POEs, DHS was also testing exit capability. Table 3 shows the 21 land POEs we visited, by location and state, between August 2005 and February 2006. In selecting land POEs to visit, we originally selected 10 land POEs on the northern border and 10 POEs on the southern border based on geographic dispersion along the border and taking into consideration POEs that were located near each other, to minimize travel costs. We added the Morley Gate POE after we initially selected sites because it is physically located about 100 yards from the DeConcini POE in downtown Nogales (Ariz.) and after learning that US-VISIT was treating Morley Gate as a stand-alone POE for US-VISIT deployment purposes. In making our selections, we also considered US-VISIT deployment schedules, facility size, and the number of border crossings and I-94 issuances. Fifteen of the 21 selected sites in our study were among the 50 busiest land POEs, for which US-VISIT entry capability was to be operating by December 31, 2004, as required by law. The other 6 sites were among the remaining POEs where, according to law, US-VISIT entry capability was to be operating by December 31, 2005. While selecting sites, we also included the five POEs at which the US-VISIT program office was testing radio frequency identification (RFID) technology as part of a proof of concept for meeting US-VISIT exit capability requirements. These were Blaine-Peace Arch; Blaine-Pacific Highway; Thousand Islands Bridge, Alexandria Bay; Nogales-Mariposa; and Nogales-DeConcini. The information from our site visits is limited to the 21 POEs we visited and is not generalizable to the remaining POEs.
To examine what the US-VISIT Program Office has done to implement US-VISIT entry capabilities at land POEs and what impact US-VISIT has had on these facilities, we interviewed US-VISIT and CBP headquarters officials as well as CBP officials at the 21 locations we visited. We obtained and analyzed available DHS reports on US-VISIT entry capability planning, deployment, and operations across land POEs, including the 21 we visited. At the 21 locations, we (1) discussed US-VISIT entry capability deployment at the facility, any facility-related barriers or constraints encountered during installation, and any operational issues encountered since, and (2) obtained any available documentation about US-VISIT deployment and operations at the facility. We also toured secondary inspection at each facility to observe what US-VISIT equipment was installed, how it was installed, and, where possible, how it operated when visitors covered by US-VISIT arrived at the facility for processing into the country. While doing our site visits, we met with US-VISIT and CBP officials at headquarters to discuss our field work; to discern why problems we identified in the field may have occurred; and, if problems occurred, to gather and analyze available US-VISIT and CBP information about those problems, including information on any corrective actions. We also examined whether internal or management controls were in place to alert officials to the problems we identified, and examined whether these controls were being applied, consistent with GAO's Standards for Internal Control in the Federal Government. In addition, we interviewed CBP and US-VISIT headquarters officials about plans for installing and operating new technology and equipment related to US-VISIT, such as 10-finger-scan readers, at land POEs; reviewed available DHS documents about plans to implement these devices; and reviewed available DHS documents that discussed performance measures for US-VISIT overall. We also reviewed applicable laws, regulations, and DHS Federal Register notices pertaining to US-VISIT entry capability deployment at land POEs, as well as reports prepared by DHS, GAO, the DHS Office of Inspector General, and the Congressional Research Service. To determine the status of DHS's efforts to implement a US-VISIT exit capability at land POEs, we interviewed US-VISIT and CBP headquarters officials and CBP officials at the five locations where US-VISIT exit capability was being tested (Nogales-Mariposa, Nogales-DeConcini, Blaine-Pacific Highway, Blaine-Peace Arch, and Alexandria Bay). At each of the locations, we toured the areas where exit testing equipment and technology had been installed and discussed with CBP officials how it was installed and was to be tested. We also reviewed applicable laws and regulations and obtained and analyzed available DHS reports on US-VISIT exit capability, including an operational alternatives assessment, feasibility studies, and proof-of-concept performance evaluation and corrective action reports. Our analysis of these reports focused on DHS strategies for selecting, testing, acquiring, and evaluating alternative methods that could meet the requirements; DHS's criteria used to select and test the potential of RFID technology; and the challenges encountered, including any privacy issues associated with RFID use.
Finally, we obtained and analyzed DHS reports on the costs of the equipment and related facility infrastructure, such as the metal gantry erected over roadways to hold RFID readers, to estimate what it would cost to install RFID equipment at all land POEs. We developed our overall estimate based on the average cost to date (about $1 million each) of installing exit gantries and associated RFID equipment at the four POEs where gantries and equipment were installed. (Although RFID use was tested at five POEs, at the DeConcini POE in downtown Nogales, Arizona, the RFID readers were placed on poles on either side of entry lanes, since all entering vehicles pass under a large permanent canopy structure that precludes installing a gantry. At the other four POEs, RFID readers were attached to metal gantries placed over roadway lanes.) To examine what DHS has done to define a strategic context to show how US-VISIT entry and exit capabilities at land POE facilities fit with other current and emerging border security initiatives, we reviewed past GAO reports and public DHS announcements about the Western Hemisphere Travel Initiative and the Secure Border Initiative (SBI). We also interviewed DHS officials about the status of efforts to implement these initiatives as well as the status of efforts to develop and promulgate a strategic plan for US-VISIT and compared available information on DHS plans to implement initiatives with the results of our discussions with US- VISIT program officials. We conducted our work from September 2005 through October 2006 in accordance with generally accepted government auditing standards. The Department of State’s (State) Visa Waiver Program (VWP) enables nationals of certain countries to travel to the United States for tourism or business for stays of 90 days or less without obtaining a visa. The program was established in 1986 with the objective of promoting better relations with U.S. allies, eliminating unnecessary barriers to travel, stimulating the tourism industry, and permitting the Department of State to focus consular resources in other areas. VWP eligible travelers may apply for a visa, if they prefer to do so. Not all countries participate in the VWP, and not all travelers from VWP countries are eligible to use the program. VWP travelers are screened prior to admission into the United States, and they are enrolled in the Department of Homeland Security’s US-VISIT program. Currently, 27 countries participate in the Visa Waiver Program as shown in the following table. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 originally required the development of an automated entry and exit control system to collect a record of departure for every alien departing the United States and match the record of departure with the record of the alien’s arrival in the United States; make it possible to identify nonimmigrants who remain in the country beyond the authorized period; and not significantly disrupt trade, tourism, or other legitimate cross-border traffic at land border ports of entry. It also required the integration of overstay information into appropriate databases of the INS and the Department of State, including those used at ports of entry and at consular offices. The system was originally to be developed by September 30, 1998; this deadline was changed to October 15, 1998, and was changed again for land border ports of entry and sea ports to March 30, 2001. 
The Immigration and Naturalization Service Data Management Improvement Act (DMIA) of 2000 replaced the 1996 statute in its entirety, requiring instead an electronic system that would provide access to and integrate alien arrival and departure data that are authorized or required to be created or collected under law, are in an electronic format, and are in a data base of the Department of Justice or the Department of State, including those created or used at ports of entry and at consular offices. The Act specifically provided that it not be construed to permit the imposition of any new documentary or data collection requirements on any person for the purpose of satisfying its provisions, but it further provided that it also not be construed to reduce or curtail any authority of the Attorney General (now Secretary of Homeland Security) or Secretary of State under any other provision of law. The integrated entry and exit data system was to be implemented at airports and seaports by December 31, 2003, at the 50 busiest land ports of entry by December 31, 2004, and at all remaining ports of entry by December 31, 2005. The DMIA also required that the system use available data to produce a report of arriving and departing aliens by country of nationality, classification as an immigrant or nonimmigrant, and date of arrival in and departure from the United States. The system was to match an alien’s available arrival data with the alien’s available departure data, assist in the identification of possible overstays, and use available alien arrival and departure data for annual reports to Congress. These reports were to include the number of aliens for whom departure data were collected during the reporting period, with an accounting by country of nationality; the number of departing aliens whose departure data was successfully matched to the alien’s arrival data, with an accounting by country of nationality and classification as an immigrant or nonimmigrant; the number of aliens who arrived pursuant to a nonimmigrant visa, or as a visitor under the visa waiver program, for whom no matching departure data have been obtained as of the end of the alien’s authorized period of stay, with an accounting by country of nationality and date of arrival in the United States; and the number of identified overstays, with an accounting by country of nationality. In 2001, the USA PATRIOT Act provided that, in developing the integrated entry and exit data system under the DMIA, the Attorney General (now Secretary of Homeland Security) and Secretary of State were to focus particularly on the utilization of biometric technology and the development of tamper-resistant documents readable at ports of entry. It also required that the system be able to interface with law enforcement databases for use by federal law enforcement to identify and detain individuals who pose a threat to the national security of the United States. The PATRIOT Act also required by January 26, 2003, the development and certification of a technology standard, including appropriate biometric identifier standards, that can be used to verify the identity of persons applying for a U.S. visa or persons seeking to enter the United States pursuant to a visa for the purposes of conducting background checks, confirming identity, and ensuring that a person has not received a visa under a different name. 
This technology standard was to be the technological basis for a cross-agency, cross-platform electronic system that is a cost-effective, efficient, fully interoperable means to share law enforcement and intelligence information necessary to confirm the identity of persons applying for a U.S. visa or persons seeking to enter the United States pursuant to a visa. This electronic system was to be readily and easily accessible to consular officers, border inspection agents, and law enforcement and intelligence officers responsible for investigation or identification of aliens admitted to the United States pursuant to a visa. Every 2 years beginning on October 26, 2002, the Attorney General (now Secretary of Homeland Security) and the Secretary of State were to jointly report to Congress on the development, implementation, efficacy, and privacy implications of the technology standard and electronic database system. The Enhanced Border Security and Visa Entry Reform Act of 2002 required that, in developing the integrated entry and exit data system for the ports of entry under the DMIA, the Attorney General (now Secretary of Homeland Security) and Secretary of State implement, fund, and use the technology standard required by the USA PATRIOT Act at U.S. ports of entry and at consular posts abroad. The act also required the establishment of a database containing the arrival and departure data from machine-readable visas, passports, and other travel and entry documents possessed by aliens and the interoperability of all security databases relevant to making determinations of admissibility under section 212 of the Immigration and Nationality Act. In implementing these requirements, the INS (now DHS) and the Department of State were to utilize technologies that facilitate the lawful and efficient cross-border movement of commerce and persons without compromising the safety and security of the United States and were to consider implementing a North American National Security Program, for which other provisions in the act called for a feasibility study. The act, as amended, also established a number of requirements regarding biometric travel and entry documents. It required that not later than October 26, 2004, the Attorney General (now Secretary of Homeland Security) and the Secretary of State issue to aliens only machine-readable, tamper-resistant visas and other travel and entry documents that use biometric identifiers and that they jointly establish document authentication standards and biometric identifiers standards to be employed on such visas and other travel and entry documents from among those biometric identifiers recognized by domestic and international standards organizations. It also required by October 26, 2005, the installation at all ports of entry of the United States equipment and software to allow biometric comparison and authentication of all U.S. visas and other travel and entry documents issued to aliens and passports issued by visa waiver participants. Such biometric data readers and scanners were to be those that domestic and international standards organizations determine to be highly accurate when used to verify identity, that can read the biometric identifiers used under the act, and that can authenticate the document presented to verify identity. These systems also were to utilize the technology standard established pursuant to the PATRIOT Act. 
The Intelligence Reform and Terrorism Prevention Act of 2004 did not amend the existing statutory provisions governing US-VISIT, but it did establish additional statutory requirements concerning the program. It described the program as an "automated biometric entry and exit data system" and required DHS to develop a plan to accelerate the full implementation of the program and to report to Congress on this plan by June 15, 2005. The report was to provide several types of information about the implementation of US-VISIT, including a "listing of ports of entry and other DHS and Department of State locations with biometric exit data systems in use." The report also was to provide a description of the manner in which the US-VISIT program meets the goals of a comprehensive entry and exit screening system, "including both entry and exit biometric," and fulfills the statutory obligations imposed on the program by several laws enacted between 1996 and 2002. The act provided that US-VISIT "shall include a requirement for the collection of biometric exit data for all categories of individuals who are required to provide biometric entry data, regardless of the port of entry where such categories of individuals entered the United States." The new provisions in the 2004 act also addressed integration and interoperability of databases and data systems that process or contain information on aliens and federal law enforcement and intelligence information relevant to visa issuance and admissibility of aliens; maintaining the accuracy and integrity of the US-VISIT data system; using the system to track and facilitate the processing of immigration benefits using biometric identifiers; the goals of the program (e.g., serving as a vital counterterrorism tool, screening visitors efficiently and in a welcoming manner, integrating relevant databases and plans for database modifications to address volume increase and database usage, and providing inspectors and related personnel with adequate real-time information); training, education, and outreach on US-VISIT, low-risk visitor programs, and immigration law; annual compliance reports by DHS, State, the Department of Justice, and any other department or agency subject to the requirements of the new provisions; and development and implementation of a registered traveler program. Appendix IV: The 20 Busiest Land Ports of Entry (POE) by Volume of Individuals Entering the United States in Fiscal Year 2005 and Foreign Entrants (Pedestrians and Vehicle Occupants). According to the US-VISIT program office, US-VISIT entry capability was installed at the following land POEs by December 31, 2005; the list is arranged in state alphabetical order. Protecting the privacy of visitors to the United States is one of the four stated primary mission goals of the US-VISIT program. We and others have raised questions in recent years about the potential privacy risks surrounding the use of RFID technology to track the movement of persons, as opposed to goods; the potential for the technology to be subverted for surveillance purposes, rather than identification; and the potential for "function creep," whereby information collected for one purpose gradually develops other secondary uses, such as has occurred with Social Security numbers.
In congressional testimony, we have noted that the use of RFID tags and associated databases raises important security considerations related to the confidentiality, integrity, and availability of the data on the tags and in the databases, and in how this information is being protected. We have noted, as well, that while the federal government had begun using RFID technology for a variety of applications—to track and identify assets, weapons, and baggage on flights, for example—using this technology for generic inventory control did not raise the same privacy issues as using it to track the movement of persons. The US-VISIT Program Office has taken steps to meet statutory and congressional requirements protecting the privacy of individuals who would be affected if RFID technology were to be implemented as part of the US-VISIT exit and re-entry process, and to address the privacy concerns raised by us and others. According to OMB guidance, a privacy impact assessment should be conducted before an agency develops or procures an information technology system, such as the proposed RFID system, which collects, maintains, or disseminates information about an individual—in this case, numeric information that may be linked to biographic information contained within databases. In January 2004, DHS published a Privacy Impact Assessment in the Federal Register, as required by law, for the initial deployment of US-VISIT, and published the latest in a series of updated Privacy Impact Assessments in July 2005, addressing privacy issues related to the proof-of-concept testing of RFID for Increment 2C. In its July 2005 Privacy Impact Assessment, DHS said that by design, the information embedded in the RFID-readable I-94 tag does not compromise a visitor’s security, for the following reasons and with the following strictures: Passive RFID minimizes privacy impacts and reduces the chance of visitors being surreptitiously tracked because it does not constantly transmit information or “beacon” a signal. The numeric identifier read in the I-94 tag does not contain and is not derived from any personal information, and can only be used to obtain personal information when combined with data within the Automated Identification Management System (the system created to link the unique RFID tag ID number to existing biographic information received from the TECS database). The Automated Identification Management System records the exit and re-entry data automatically captured for a particular RFID tag, rather than a specific individual. The individual’s complete travel history is created only when the information captured from the RFID tag is sent along with the biographic information stored in the TECS database to a DHS Arrival and Departure Information System. The Automated Identification Management System is undergoing the DHS certification and accreditation process, which includes having an approved detailed security plan and a comprehensive technical assessment of the risks of operating the system. The certification and accreditation process will be completed before the proof-of-concept becomes operational. The Automated Identification Management System database can only be accessed by authorized personnel signed into authorized workstations that communicate with the system via a secure network. 
These computer workstations are generally in CBP POE buildings, inside work areas with physical controls over who can enter the area, according to the Privacy Impact Assessment, and each POE is required to be in compliance with DHS regulations with regard to security. Even if an RFID tag number were secretly detected by someone, that person would also have to obtain access to the Automated Identification Management System secure database to link the number to an individual's records. DHS acknowledged that two potential privacy risks related to the RFID exit/re-entry solution have been identified, and that US-VISIT creates a pool of individuals whose personal information is at risk. Nevertheless, the July 2005 Privacy Impact Assessment states that these privacy risks will either be avoided or mitigated through the use of access controls, education and training, encryption, and minimization of the collection and use of personal information associated with data sharing. The first stated risk is that, if the format or some other characteristic of the RFID tag number renders it recognizable as a US-VISIT RFID tag, this would allow an unauthorized reader to surreptitiously determine an individual's status (i.e., whether the individual is within the US-VISIT covered population). DHS stated that the RFID tag number will be structured so that it cannot be used to identify an individual specifically as a nonimmigrant. Second, DHS noted there is a low risk that the RFID tag could be used to conduct surreptitious locational surveillance of an individual; i.e., to use the presence of the tag to follow an individual as he or she moves about in the United States. However, ensuring that RFID tag numbers do not exhibit properties that can be readily attributed to US-VISIT and using a limited radio frequency range effectively mitigates this risk, according to DHS.

The US-VISIT Program Office has been testing the use of passive, automated, radio frequency identification (RFID) technology as a means to record the exit of visitors from the United States at land POEs. RFID is an automated data-capture technology that can be used to electronically store information contained on a very small tag that can be embedded in a document (or some other physical item); in this case, US-VISIT embedded the tag in a modified Form I-94, called an I-94A. This information can then be identified, and recorded as having been identified, by RFID readers that are connected to computer databases. The RFID tests were conducted for one-week periods at land POEs, as follows: vehicular traffic was tested at the Nogales-Mariposa and Nogales-DeConcini POEs in Nogales, Arizona; the Blaine-Pacific Highway and Blaine-Peace Arch POEs in Blaine, Washington; and the Thousand Islands Bridge POE in Alexandria Bay, New York; pedestrian traffic was tested at the Nogales-Mariposa and Nogales-DeConcini POEs.

For these exit tests, the US-VISIT Program Office developed critical success factor target read rates to compare with the actual read rates obtained during the test, both for pedestrians carrying I-94As with RFID-detectable tags and for travelers in vehicles who also had RFID-detectable I-94As with them inside the vehicles. The target exit read rates ranged from an expected success rate of 70 percent to 95 percent, based on anticipated performance under different conditions, partly as demonstrated in the earlier feasibility study, on business requirements, and on a concept of operations plan prepared for Increment 2C.
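As a rough, purely illustrative sketch of how observed read rates can be compared with these critical success factor targets, the following Python fragment computes read rates from hypothetical detection counts; the counts, per-condition targets, and function names are assumptions for illustration, not figures from the US-VISIT tests.

```python
# Illustrative sketch only (not US-VISIT code): compute RFID read rates and
# compare them with per-condition targets drawn from the 70 to 95 percent
# range of critical success factors described above. All counts are hypothetical.

def read_rate(detected: int, total: int) -> float:
    """Share of travelers carrying RFID-detectable I-94As whose tags were read."""
    return detected / total if total else 0.0

# Hypothetical per-condition targets and detection counts.
tests = {
    "pedestrian exit": {"target": 0.90, "detected": 140, "total": 200},
    "vehicle exit":    {"target": 0.70, "detected": 150, "total": 250},
}

for name, t in tests.items():
    rate = read_rate(t["detected"], t["total"])
    verdict = "meets" if rate >= t["target"] else "falls short of"
    print(f"{name}: read rate {rate:.0%} {verdict} the {t['target']:.0%} target")
```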
Table 5 shows the exit test results compared with the target read rates, reflecting specifically the percentage of persons carrying RFID-detectable documents who were detected by the readers, for (1) pedestrians and (2) persons in vehicles, as they passed through the POE area while exiting the country. In phase 1 of proof-of-concept testing for RFID, US-VISIT reported that read rates were higher for both vehicle occupants and pedestrians who held the I-94A up toward the reader, rather than leaving it inside a pocket. Through the use of billboards, radio and print advertisements, and other methods of communication, visitors were encouraged to place their RFID-detectable I-94A forms on the vehicle dashboard or up to a window. These locations were believed to increase the chances for a successful read. Those who took these actions were referred to as "participants," and those who did not as "nonparticipants." The US-VISIT Program Office reported that during the week-long proof-of-concept exit testing, one of three pedestrians was a participant—that is, the individual was observed as voluntarily complying with the instructions; for those exiting in a vehicle, these data were not reported. Moreover, although CBP officials made substantial pre-test efforts to encourage travelers to optimize the chances of I-94A tags being read, the report noted that this effort apparently met with mixed success and that no additional solutions were planned.

During the time period that US-VISIT tested the performance of RFID readers for detecting I-94As carried by persons exiting the country in vehicles at two land POEs (Thousand Islands Bridge, Alexandria Bay, New York, and Blaine-Pacific Highway, Washington), it also tested RFID reader performance for persons in vehicles with RFID-embedded I-94As who re-entered the country at both of these locations and three others (Blaine-Peace Arch, Washington; and, in Arizona, Nogales-Mariposa and Nogales-DeConcini). In addition, tests of the detectability of RFID-enabled I-94As carried by pedestrians re-entering the country were conducted at Nogales-Mariposa and Nogales-DeConcini; pedestrian exit was tested only at Nogales-Mariposa because of operational constraints at Nogales-DeConcini, according to the report on the tests. Because a person entering the country would already have to have obtained an RFID-enabled I-94A on a prior visit to the United States in order for it to be detected by an RFID reader, the US-VISIT program office sometimes refers to this process as "re-entry." DHS set separate, higher critical success factors (performance targets) for the RFID proof-of-concept tests for the vehicle re-entry process than for the vehicle exit process. According to a US-VISIT official, these higher performance targets were based, in part, on the fact that vehicles must stop as part of the re-entry process, which makes it more likely that a tag will be detected than is the case for exiting vehicles, which do not need to slow down or stop at land POEs. As with the tests conducted for exit, test observers monitored traveler behavior to see whether, in compliance with numerous advertisements in print and on local radio, the vehicle driver placed the RFID-enabled I-94A on the vehicle dashboard or on an empty passenger seat or, for vehicle occupants, whether they held the I-94A up to a window or made it otherwise visible, to better enable its detection by the reader.
Vehicle drivers or occupants who displayed an I-94A in any of these requested ways were categorized as "participants," but read rates for them were, nevertheless, low at four of five test locations. For example, at Nogales-DeConcini, which had the lowest vehicle-entry read rates overall, the read rate was 27 percent for the 62 persons re-entering in vehicles whom US-VISIT reported as making an effort to have their I-94A tags read. In contrast, at Nogales-Mariposa, which had the highest overall re-entry read rate for the vehicle test, US-VISIT reported that 83 of 96 travelers (86 percent) who were categorized as participants were detected. Among those at this same location who did not make this effort, US-VISIT reported that I-94As with RFID tags were detected for about half (51 percent) of the persons in the vehicles. Table 6 shows the results of RFID read rates upon re-entry for vehicle participants and nonparticipants. Table 7 shows the results of RFID read-rate detection upon re-entry for pedestrian participants and nonparticipants.

In addition to the above, John F. Mortin, Assistant Director; Amy Bernstein, Frances Cook, Odi Cuero, Richard Hung, Amanda Miller, James R. Russell, and Jonathan Tumin made key contributions to this report.

The Department of Homeland Security (DHS) established the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program to collect, maintain, and share data on selected foreign nationals entering and exiting the United States at air, sea, and land ports of entry (POEs). These data, including biometric identifiers like digital fingerprints, are to be used to screen persons against watch lists, verify visitors' identities, and record arrival and departure. GAO was asked to review implementation at land POE facilities and, in doing so, GAO analyzed: (1) efforts to implement US-VISIT entry capability; (2) efforts to implement US-VISIT exit capability; and (3) DHS's efforts to define how US-VISIT fits with other emerging border security initiatives. GAO reviewed DHS and US-VISIT program documents, interviewed program officials, and visited 21 land POEs with varied traffic levels on both borders. US-VISIT entry capability has been installed at 154 of the 170 land POEs. Officials at all 21 sites GAO visited reported that US-VISIT had improved their ability to process visitors and verify identities. DHS plans to further enhance US-VISIT's capabilities by, among other things, requiring new technology and equipment for scanning all 10 fingerprints. While this may aid border security, installation could increase processing times and adversely affect operations at land POEs where space constraints, traffic congestion, and processing delays already exist. GAO's work indicated that management controls in place to identify such problems and evaluate operations were insufficient and inconsistently administered. For example, GAO identified computer processing problems at 12 sites visited; at 9 of these, the problems were not always reported. US-VISIT has developed performance measures, but measures to gauge factors that uniquely affect land POE operations were not developed; these would put US-VISIT officials in a better position to identify areas for improvement. US-VISIT officials concluded that, for various reasons, a biometric US-VISIT exit capability cannot now be implemented without incurring a major impact on land POE facilities.
An interim nonbiometric exit technology being tested does not meet the statutory requirement for a biometric exit capability and cannot ensure that visitors who enter the country are those who leave. DHS has not yet reported to Congress on a required plan describing how it intends to fully implement a biometric entry/exit program, or use nonbiometric solutions. Until this plan is finalized, neither DHS nor Congress is in a good position to prioritize and allocate program resources or plan for POE facilities modifications. DHS has not yet articulated how US-VISIT is to align with other emerging land border security initiatives and mandates, and thus cannot ensure that the program will meet strategic program goals and operate cost effectively at land POEs. Knowing how US-VISIT is to work with these initiatives, such as one requiring U.S. citizens, Canadians, and others to present passports or other documents at the border in 2009, is important for understanding the broader strategic context for US-VISIT and identifying resources, tools, and potential facility modifications needed to ensure success. |
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193), generally known as welfare reform, imposed time limits on the receipt of welfare benefits and established work requirements to promote self-sufficiency for families on welfare. In addition, the act shifted important responsibilities for welfare from the federal government to the states. Housing programs, some of them dating back to the 1980s, and recent changes in housing policies have also encouraged work and self-sufficiency. The changes in housing policies are intended to make public housing agencies, as well as tenants, less dependent on federal subsidies. Many recipients of federal housing assistance have been or will be affected by the changes in both welfare and housing programs and policies. According to the Department of Housing and Urban Development’s (HUD) data, about 29 percent of the households that received housing assistance also received Aid to Families with Dependent Children (AFDC) as of September 1996. A majority of these households reside either in public housing or in private rental units—under HUD’s Section 8 certificate and voucher programs—that they select and that HUD subsidizes through payments by public housing agencies to landlords of a portion of each household’s rent. Because households that receive housing assistance generally pay 30 percent of their income for housing, changes in tenants’ incomes resulting from welfare reform will affect the rental revenue that public housing agencies receive and the amounts of the subsidies they need from HUD to cover their operating costs. Several federal departments, the states, and local public housing and welfare agencies have roles in efforts to move families from welfare and housing assistance to work and self-sufficiency. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 established time limits and work requirements to promote self-sufficiency for families on welfare. The act replaced the entitlement program—AFDC—with block grants to the states under Temporary Assistance for Needy Families (TANF). The fixed amounts of states’ grants under the new law are based on the amount of their grants in specified fiscal years under prior law, supplemented for population increases under certain circumstances. In total, TANF grants to the states are authorized at $16.5 billion per year. With respect to state funding, the federal welfare reform law includes a “maintenance of effort” provision requiring the states to provide 75 or 80 percent of their historic level of funding. Under the act, the states must meet statewide mandatory requirements for the percentage of families engaged in work activities or their TANF grants will be reduced. In turn, if a recipient family fails to participate as required, the state must reduce and may terminate the family’s cash assistance. The act also imposed a 60-month lifetime limit on the receipt of TANF benefits for most individuals. In addition, the federal welfare reform law increased federal funding for child care subsidies for low-income families under the Child Care and Development Fund, authorized to provide $3 billion in fiscal year 1997 and increase to $3.7 billion by 2002. The act also tightened the eligibility requirements for food stamps and Supplemental Security Income (SSI), a source of cash assistance for some children with special needs, immigrants, and others. 
In subsequent legislation, the Congress restored SSI benefits for many legal aliens, ensured Medicaid coverage for some children who became ineligible for SSI benefits, and authorized $3 billion in welfare-to-work grants to the states for fiscal years 1998 and 1999, to be overseen by the Department of Labor. Appendix I provides information on the welfare reform plans and benefit levels established by the states we visited. HUD and the Congress have undertaken and proposed efforts to reform the nation’s public housing industry in much the same way as welfare has been reformed. These efforts are designed to promote self-sufficiency on the part of both tenants and housing agencies. Specifically, HUD now manages a variety of self-sufficiency programs, such as the Family Self-Sufficiency (FSS) program, which provides employment-related services for tenants of public and assisted housing who volunteer for the program. The Congress, through appropriations bills, has implemented changes in public housing policies to encourage work. The revised policies eliminate requirements for public housing agencies to give preference only to the poorest of the poor in selecting tenants and allow the agencies to establish local preferences, ceiling rents, and adjustments to earned income. HUD and the Congress have also proposed permanent legislation that would, among other things, consolidate public housing programs and increase the mix of incomes among tenants. Since the mid-1980s, HUD has provided housing-based self-sufficiency and economic opportunity programs to deliver supportive services to the tenants of public and assisted housing. These programs have provided job training, counseling, and placement services; child care; and transportation. Several of these programs require coordination with other local efforts. One of the most widely used of these programs, FSS, was created under the National Affordable Housing Act of 1990 (P.L. 101-625) to help the tenants of public and assisted housing reduce their reliance on welfare and gain employment through education, training, and supportive services. Since fiscal year 1993, HUD has required housing agencies that receive additional public housing units or Section 8 certificates and vouchers to participate in FSS. The Anti-Drug Abuse Act of 1988 (42 U.S.C. 11901 et seq.), as amended, authorized the Public and Assisted Housing Drug Elimination program, whose goal is to provide alternative approaches to reducing crime and drug activity in public and assisted housing. Under this program, HUD awards grants to housing agencies and owners of assisted housing for activities such as protective services, drug prevention programs, and youth sports programs. Other self-sufficiency programs include the Economic Development and Supportive Services grant program and Jobs Plus, both of which provide housing agencies with additional resources and incentives to encourage tenants to achieve self-sufficiency. Appendix II describes HUD’s self-sufficiency and economic development programs identified by selected housing agencies as facilitating welfare reform. The Congress has authorized some changes for public housing agencies through recent appropriations laws, beginning in 1996 with the Balanced Budget Downpayment Act, I (P.L. 104-99, also known as the Continuing Resolution). 
These changes eliminated the requirement that housing agencies select families from their waiting lists on the basis of federal preferences and allowed the agencies to establish local preferences, ceiling rents, and adjustments to earned income. Local preferences enable housing agencies to select working families, those in employment and training programs, veterans, and persons living in the immediate vicinity to fill vacant units. The agencies may determine which preference(s) to implement as long as they do not change the makeup of their developments in a way that would displace elderly or disabled tenants. Ceiling rents—levels above which rents no longer rise with increases in tenants’ incomes—are designed to attract, retain, and support working families, who are generally thought to provide leadership to housing developments and serve as role models for other tenants. A ceiling rent must reflect the reasonable market value of the housing unit and cannot be less than the monthly per-unit operating costs. HUD considers ceiling rents useful in easing the rent burden on working families residing in public housing. In addition, ceiling rents can create incentives for tenants to save money and purchase their own homes. The Continuing Resolution provided a transition rule allowing housing agencies to establish ceiling rents until HUD issues final regulations. HUD proposed a regulation on ceiling rents for public housing in November 1997. Like ceiling rents, adjustments to earned income can be used to attract working families, increase the mix of incomes among public housing tenants, and help tenants save money and become homeowners. Adjustments to earned income allow housing agencies to exclude certain types of income in calculating rents. As a result, tenants may retain more of the income they earn if they have participated in certain types of training and work activities. Housing agencies electing to use adjustments may recoup potential losses by attracting tenants with higher incomes. In addition to the rent policies discussed above, HUD and the Congress have proposed more sweeping housing reforms that would further transform public housing. Under HUD’s proposal, public housing programs would be consolidated, and the tenant-based certificate and voucher programs would be merged. HUD has also proposed the deregulation of well-performing housing agencies and acknowledges the need for more predictable and effective actions to address problems at failing housing agencies. In addition, HUD has proposed to strengthen the Department’s policy on coordination with welfare agencies, consolidate the Economic Development and Supportive Services grant program with another program for tenants, and create a welfare-to-work voucher program. Both the House and the Senate have proposed legislation that would modify the U.S. Housing Act of 1937. The House bill would repeal the act, replacing it with new legislation, while the Senate bill would revise the act. Both the House and Senate bills would combine the Section 8 certificate and voucher programs into a single tenant-based assistance program, to be called choice-based under the House bill. The proposed legislation would also allow local governments to receive federal funds for public housing directly, alter income-targeting rules, and increase funding for tenant organizations. H.R. 2 passed the House in May 1997. S. 462 passed the Senate in September 1997. As of April 1998, permanent housing reform legislation had not been enacted. 
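As a purely illustrative aside on the ceiling rent policy described above, the following minimal Python sketch shows how an income-based rent of 30 percent of adjusted income would be capped by a ceiling rent that may not fall below monthly per-unit operating costs; the dollar amounts and function name are hypothetical assumptions, not figures from the report.

```python
# Minimal sketch (hypothetical figures): an income-based rent capped by a ceiling rent.
# Tenants generally pay 30 percent of adjusted income for rent; a ceiling rent stops the
# rent from rising further with income and may not be less than monthly per-unit operating costs.

def monthly_rent(adjusted_monthly_income: float,
                 ceiling_rent: float,
                 per_unit_operating_cost: float) -> float:
    if ceiling_rent < per_unit_operating_cost:
        raise ValueError("A ceiling rent cannot be less than monthly per-unit operating costs.")
    income_based_rent = 0.30 * adjusted_monthly_income
    return min(income_based_rent, ceiling_rent)

# A working family whose income-based rent would exceed the ceiling pays only the ceiling ...
print(monthly_rent(adjusted_monthly_income=2000, ceiling_rent=450, per_unit_operating_cost=300))  # 450.0
# ... while a lower-income household pays the income-based rent.
print(monthly_rent(adjusted_monthly_income=1000, ceiling_rent=450, per_unit_operating_cost=300))  # 300.0
```

Under such a cap, earnings growth above the ceiling no longer raises the household's rent, which is the work incentive the policy is intended to provide.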
Among federal agencies, the departments of Health and Human Services (HHS) and Labor have the greatest responsibilities for welfare reform. The states play a larger role under welfare reform than they did in the past. HUD subsidizes public housing and provides rental assistance and grants for supportive services. Local public housing agencies own and operate public housing and administer the subsidies, rental assistance, and grants that they receive from HUD.

While welfare reform shifted responsibility to the states for designing and implementing TANF programs, HHS remains the federal agency with primary responsibility for welfare programs. The TANF legislation made HHS responsible for aiding and overseeing the states' development of TANF programs; developing certain types of regulations, including reporting requirements for the states and penalties for noncompliance with the law; drafting a formula to reward "high performing" states (i.e., those that achieve the goals of the law); and conducting research on the benefits, costs, and effects of the new law. HHS may also assist the states in developing innovative approaches to reduce dependency on welfare and increase the well-being of children and is responsible for evaluating these approaches. In addition, HHS administers the Child Care and Development Fund.

The Department of Labor also has a prominent role under welfare reform. Its programs for low-income adults include, among others, the Welfare-to-Work program, the Job Training Partnership Act (JTPA) Title II-A Adult Training Grants, and the One-Stop Career Center initiative. Labor's Welfare-to-Work program is designed to move the hardest-to-serve welfare recipients into unsubsidized jobs and economic self-sufficiency. The Balanced Budget Act of 1997 (P.L. 105-33) authorized $1.5 billion annually for formula and competitive grants for the Welfare-to-Work program over 2 years. The JTPA Title II-A program supplements the Welfare-to-Work program by providing both job training for welfare recipients and job training and placement services for other low-income adults to keep them off welfare. For fiscal year 1999, Labor requested $1 billion for this program—$45 million more than it received for fiscal year 1998. The agency also requested $146.5 million for the One-Stop Career Center initiative. Labor considers this initiative the cornerstone of a reform effort to encourage state and local bureaucracies to reinvent themselves, consolidate service delivery at the "street level," focus on the customer, and restructure accountability. Other federal agencies, such as the Department of Transportation and the Small Business Administration, also have initiatives related to welfare reform under way.

The act shifted important responsibilities for welfare from the federal government to the states. The states have more flexibility than before to design their own programs and strategies for aiding needy families, including those for helping welfare recipients move into the workforce. In addition, the states are allowed to set forth their own criteria for eligibility and for the types of assistance and services that will be available, provided they ensure that recipients are treated fairly and equitably. As a result, the states can decide how to allocate their TANF funds between cash assistance and support services, such as child care and education and training. A state may also devolve its responsibility to county or local authorities.
At local welfare offices, TANF programs are generally administered by state, county, or local officials. Before the states could receive their block grants, the act required them to submit their TANF plans to the Secretary of Health and Human Services for approval by July 1, 1997. Most states had begun implementing their TANF programs before the July 1, 1997, deadline. Because many states had already begun changing their AFDC programs under waivers of federal law from HHS, the states were at different stages of implementing their reform efforts when the federal legislation was enacted. HUD establishes the guidelines for receiving federal housing assistance and provides several types of subsidies to produce and maintain housing affordable to low-income households. Of these, the most important for housing agencies are operating subsidies (to offset some or all of any shortfall between rental revenue and operating costs) and modernization funds. HUD also provides rental assistance to tenants through certificates and vouchers. Housing agencies can compete for a variety of grants, such as those for operating Drug Elimination programs. Housing agencies can also compete for Homeownership and Opportunity for People Everywhere (HOPE) grants to revitalize severely distressed housing through both physical improvements and activities, such as training and education, to promote residents’ self-sufficiency. Finally, HUD operates other programs to promote self-sufficiency, many of which are targeted to the tenants of public housing (see app. II). HUD provides housing assistance through three types of programs—public housing, the Section 8 certificate and voucher programs, and the Section 8 project-based program. Nationwide, there are 1.2 million units of public housing, 1.4 million units rented to holders of certificates and vouchers who receive Section 8 tenant-based rental assistance, and 1.7 million units with project-based rental assistance. In fiscal year 1997, HUD spent $23.8 billion on these programs. Because housing agencies do not administer the project-based program, it is not discussed in this report. Local public housing agencies own and operate public housing for low- and moderate-income households. The housing agencies operate under state and local laws that set forth their organization and structure, but state governments do not oversee public housing. In many cities, the mayor appoints a governing body or board of commissioners that hires the housing agency’s executive director, who oversees the agency’s day-to-day operations. Housing agencies enter into contracts with HUD, under which the agencies agree to abide by federal regulations and HUD agrees to provide subsidies for public housing and rental assistance for low-income households residing in private housing. As discussed previously, housing agencies may also compete for grants from HUD to provide supportive services. To calculate a housing agency’s operating subsidy, HUD uses its Performance Funding System. Under this system, the amount of the subsidy, determined at the beginning of the housing agency’s fiscal year, is based on projections of the agency’s future funding needs, as well as the total congressional appropriation for operating subsidies. This method is known as forward funding. 
Projections of the housing agency’s future funding needs are based on assumptions about the agency’s future income and expenses, which, in turn, are based on assumptions about future conditions, including the number of eligible units, tenants’ incomes—tenants generally pay 30 percent of their adjusted income for rent—the rate of inflation, and other factors that affect income and expenses. If the amount of the operating subsidy is insufficient for the housing agency’s needs during the year, the housing agency must reduce its spending. After the end of each fiscal year, certain adjustments are made on the basis of the housing agency’s experience during the year. The Section 8 certificate and voucher programs allow eligible households to select their own units in the private housing market and receive subsidies to cover part of their rent. HUD operates the certificate and voucher programs by entering into contracts and providing payments to local and state housing agencies, including public housing agencies. Housing agencies use these payments to provide rent subsidies to the owners of private housing on behalf of the assisted households. HUD also pays each housing agency a statutorily determined administrative fee for tasks involved in managing the program, including certifying applicants for eligibility, inspecting units found by tenants for compliance with housing standards, and verifying that the terms of leases meet HUD’s requirements. If HUD’s payments are insufficient to cover the housing agency’s rent subsidy needs, the agency uses its Section 8 reserve account to cover the difference. But if HUD’s payments exceed the agency’s subsidy needs, the additional funds are added to the reserve fund and HUD can subsequently adjust the housing agency’s future funding. In the past, some housing agencies used the additional funds to issue more Section 8 certificates and vouchers. However, HUD has instructed housing agencies that they may not issue any more vouchers and certificates using reserve funds than they have in the past. Under the certificate program, a household generally pays 30 percent of its income for rent. The housing agency pays the difference between the rent charged—which, under most circumstances, cannot exceed the fair market rent set by HUD—and each tenant’s payment. Under the voucher program, the housing agency pays the difference between a payment standard that is set by the housing agency and 30 percent of the tenant’s monthly income. Generally, a household with a voucher must pay more than 30 percent of its income for rent if the unit’s rent exceeds the payment standard. Conversely, a household usually pays less than 30 percent of its income for rent if the unit’s rent is lower than the payment standard. To obtain information about the potential implications of welfare reform on housing agencies, the Chairman of the Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services, asked us to review the impact of welfare reform on selected public housing agencies. In response to that request and to a mandate in the 1998 House Report on the Departments of Veterans Affairs and Housing and Urban Development, and Independent Agencies Appropriations Bill (H.R. Report 105-175), we identified (1) the impact of welfare reform on the revenue sources, employment status of tenants, and roles of selected housing agencies and (2) HUD’s role in assisting housing agencies and their clients as they adapt to welfare reform. 
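The Section 8 certificate and voucher rent rules described above can be restated as a minimal sketch; the dollar amounts, function names, and example unit rents below are hypothetical and serve only to illustrate how the tenant payment and the housing agency's subsidy differ between the two programs.

```python
# Minimal sketch (hypothetical amounts): tenant payment and housing agency subsidy
# under the Section 8 certificate and voucher programs as described above.

def certificate_split(adjusted_monthly_income: float, unit_rent: float, fair_market_rent: float):
    """Certificate: the tenant generally pays 30 percent of adjusted income; the rent charged
    generally cannot exceed HUD's fair market rent; the agency pays the difference."""
    if unit_rent > fair_market_rent:
        raise ValueError("Under most circumstances the rent cannot exceed the fair market rent.")
    tenant_payment = 0.30 * adjusted_monthly_income
    return tenant_payment, unit_rent - tenant_payment

def voucher_split(adjusted_monthly_income: float, unit_rent: float, payment_standard: float):
    """Voucher: the agency pays the difference between its payment standard and 30 percent of
    income; the tenant pays the rest, which can be more or less than 30 percent of income."""
    subsidy = payment_standard - 0.30 * adjusted_monthly_income
    return unit_rent - subsidy, subsidy

income = 800  # hypothetical adjusted monthly income
print(certificate_split(income, unit_rent=500, fair_market_rent=550))  # tenant 240.0, subsidy 260.0
print(voucher_split(income, unit_rent=500, payment_standard=450))      # tenant 290.0 (>30% of income)
print(voucher_split(income, unit_rent=420, payment_standard=450))      # tenant 210.0 (<30% of income)
```

The examples show the point made above: under a voucher, the household pays more than 30 percent of income when the unit's rent exceeds the payment standard and less than 30 percent when the rent falls below it.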
To obtain information about the impact of welfare reform on the revenue sources, employment status of tenants, and roles of selected housing agencies, we interviewed and gathered studies from HUD officials, researchers, and interest groups, including those representing housing agencies and the recipients of housing assistance. To identify the factors that determine tenants’ prospects of moving from welfare to work, we reviewed various studies on welfare and housing. We also contracted with Mathematica Policy Research, Incorporated, to use its Simulation of Trends in Employment, Welfare, and Related Dynamics (STEWARD) model to estimate the impact of alternative welfare reform plans and economic scenarios on welfare recipients with and without housing assistance. Appendix III provides additional information on the work performed by Mathematica. In addition, we selected four states for field work—California, Louisiana, Massachusetts, and Minnesota. As table 1.1 indicates, these states have differing approaches to welfare reform, a significant number of tenants who receive TANF benefits, geographical diversity, varied poverty levels, housing agencies of different sizes, and varied unemployment rates. To identify changes resulting from welfare reform in the selected states, we obtained and reviewed the states’ TANF plans and related studies. To learn more about the states’ implementation of welfare reform and to determine whether the states had involved housing agency officials in their welfare reform efforts, we interviewed and obtained plans, reports, and other documents from state welfare, housing and community development, and labor officials using standard sets of questions that we developed. To obtain additional perspectives, we used a standard set of questions to interview and collect information from officials representing HUD field offices, state associations, public housing and community groups, and mayoral and legislative offices. Appendix V provides maps of the states showing the selected housing agencies. Within each state, we visited a minimum of four housing agencies to obtain more detailed information about the anticipated effects of welfare reform. We selected housing agencies on the basis of their size, location, tenants’ characteristics (including reliance on cash assistance), local economic conditions, and approaches to providing social services. As table 1.2 indicates, we generally selected a small (250 to 499 federally funded public housing units), medium (500 to 1,249 federally funded public housing units), large (1,250 to 4,999 federally funded public housing units), and extra large (5,000 or more federally funded public housing units) housing agency in each state. In California, we selected six agencies—two small, two medium, one large, and one extra large—because of the size of the state. We did not select a medium-sized housing agency in Minnesota because none of the agencies in the state has from 500 to 1,249 public housing units. Instead, we selected the Duluth housing agency because, with 1,261 units, it is the closest to the medium range. The housing agencies that we visited in Massachusetts and rural California generally own and manage additional units under state or U.S. Department of Agriculture programs. In addition, all but one of the selected housing agencies operated certificate and voucher programs, which ranged in size from 30 to 28,134 authorized certificates and vouchers. 
Before visiting the selected housing agencies, we sent them a survey to obtain information on their housing stock, revenue sources, and tenants' incomes and demographics. During our visits to the housing agencies, we used a standard set of questions for our interviews with the agencies' executive directors, finance managers, social service coordinators, occupancy specialists, and tenants and tenant associations. We also interviewed local government officials, social service officials, and housing advocacy groups about their expectations for welfare reform and the actions that housing agencies might take or have taken in response to welfare reform. In addition, we obtained reports on housing agencies' self-sufficiency efforts, analyses of housing agencies' financial positions, and copies of documents on welfare reform used by the housing agencies. Where available, we obtained reports on the financial impact of welfare reform on the housing agencies and surrounding communities.

We obtained additional data on the characteristics of the selected housing agencies from HUD's September 1996 Picture Book of Subsidized Housing, a compilation of data primarily derived from information sent to HUD by local housing agencies, and had these data verified by the housing agencies. Because of past concerns about the reliability of the data—which come from HUD's Multifamily Tenant Characteristics System (MTCS) database—we asked the housing agencies in our study to help us corroborate the accuracy of the data. We believe that, with a few exceptions, the two sets of data were close enough for us to conclude that the data for the selected housing agencies reported in the Picture Book are reliable for these agencies. Our analysis was based on responses to our request from all 18 housing agencies in our review. For about 75 percent of the data in our review that could be verified, there were no differences between the data reported by the housing agencies and the MTCS data. For an additional 10 percent of the data, differences of 1 to 2 percent were reported. However, for several housing agencies, the number of tenant-based Section 8 certificates and vouchers reported by the housing agencies differed from the MTCS data by more than 5 percent. To determine whether this difference was meaningful, we summed the numbers for all of the housing agencies and found a difference of about 24 percent between the two sets of numbers; the housing agencies reported higher total numbers than the Picture Book. The data for one housing agency, operated by the city of Los Angeles, accounted for a significant portion of this difference. We also identified differences in the data on tenants' incomes. For example, the differences between the housing agencies' figures for tenants' average annual incomes and the Picture Book's figures ranged from 2 percent to 30 percent; the housing agencies generally reported lower average annual incomes for tenants in public housing, while the Picture Book reported lower average annual incomes for tenants receiving certificates and vouchers. Appendix IV provides additional information on the selected housing agencies' demographics and sources of revenue.

To identify HUD's role in helping housing agencies and their tenants adapt to welfare reform, we reviewed HUD's studies, reports, and notices on self-sufficiency programs, employment and training programs, and public and Indian housing programs.
We conducted a literature search and reviewed documents on welfare reform, housing and welfare reform legislation, proposed housing bills, and HUD’s self-sufficiency and economic opportunity programs. We interviewed and gathered studies and planning documents from senior HUD officials in HUD’s Office of Policy Development and Research, Office of Public and Indian Housing, Office of Community Planning and Development, and Office of Labor Relations. We interviewed senior officials from HHS’ Administration for Children and Families and Office of the Assistant Secretary for Planning and Evaluation. Finally, we interviewed and gathered studies and position papers from researchers studying welfare reform and housing issues, as well as from interest groups representing HUD’s clients, including tenant organizations, public housing agencies, state agencies, and local government officials. We also drew on our prior and ongoing work on welfare reform and the Government Performance and Results Act. We performed our work from June 1997 through April 1998 in accordance with generally accepted government auditing standards. It is too early to be certain what impact welfare reform will have on the revenue of the housing agencies we visited, the employment status of their tenants, and the roles of the housing agencies. Although these agencies serve many tenants who depend on cash assistance for some or all of their income, most of their executive directors and other officials had not developed financial estimates of welfare reform’s impact. These officials had considered the challenges that tenants will face in moving from welfare to work and that housing agencies will face in using new rent policies to provide support and incentives for working families. Although recent appropriations laws have given housing agencies the flexibility to change some rent rules that discourage work, the officials said they had made minimal use of the laws’ provisions. The officials also noted that the roles of their agencies have expanded to include providing a broader range of social services that are consistent with welfare reform’s goal of moving recipients from welfare to work. However, the housing agencies’ supportive service activities were generally operated separately from the states’ welfare reform efforts. In addition, the state government offices with welfare reform responsibilities that are providing services to help welfare recipients reduce their reliance on cash assistance are rarely targeting funds and programs to public housing developments or assisted housing programs. While the executive directors of the housing agencies we visited were uncertain about the specific effects of welfare reform, their views on its overall impact varied widely, ranging from significantly positive to significantly negative. Expectations tended to vary by location on the basis of characteristics such as state time limits and local economic conditions. Welfare reform is likely to affect the revenue of the housing agencies we visited because many of their tenants depend on TANF for some or all of their income. However, the executive directors and finance officials were uncertain how—and how extensively—welfare reform would affect their revenue because welfare reform continues to evolve at both the federal and the state levels. In addition, they had difficulty separating the effects of welfare reform from those of local economic conditions. 
For public housing, they were primarily concerned about the possibilities of falling rental revenue, declining operating subsidies, and rising operating costs. These officials were also concerned about welfare reform’s impact on Section 8 revenue. Officials generally lack the resources needed to undertake detailed analyses of the impact of their state’s welfare reform plan on their revenue. However, three of the selected housing agencies had developed some financial estimates of welfare reform’s impact. Although housing agency officials were generally uncertain about the direction and extent of its effects, welfare reform is likely to affect the revenue of the housing agencies we visited because 37 percent of their tenants rely on TANF for some or all of their income. At the housing agencies we visited, 19 to 61 percent of the tenants relied on TANF. If tenants’ incomes change because of welfare reform, changes will also occur in the rental revenue housing agencies receive and in the amount of the subsidies they may need from HUD to cover their operating costs. To cover most of their annual operating expenses, housing agencies depend on the rent paid by public housing tenants and on HUD’s payments, including public housing operating subsidies, Section 8 tenant-based program funds, and program grants. Housing agencies use the bulk of their Section 8 tenant-based program funds to pay private landlords to subsidize tenants’ rents, but they also receive administrative fees for managing the program. As figure 2.1 shows, the housing agencies we visited received 31.1 percent of their operating revenue from rental income, 43.5 percent from HUD’s operating subsidies, 15.8 percent from Section 8 administrative fees, and 9.5 percent from other sources. The percentage of revenue that each housing agency received from rental income ranged from about 18.6 percent at the Shreveport housing agency to about 68.7 percent at the Hibbing housing agency. Eleven of the 18 housing agencies received more funds from rental income than from HUD’s operating subsidies. See appendix IV, table IV.8, for additional information about the revenue sources for the selected housing agencies. At the time of our review, welfare reform was evolving at both the federal and the state levels. Therefore, housing agency officials, tenants, social service providers, government officials, and interest groups said it was too early to predict welfare reform’s impact on rental revenue with any certainty. At the federal level, for example, some benefits for legal immigrants were restored and additional funds were appropriated to provide welfare-to-work programs in the states. At the state level, housing agency officials and tenants continue to face great uncertainty. For example, when we visited California, the state had only recently adopted welfare reform legislation, and counties were still formulating plans for implementing the reforms in January 1998. In Massachusetts, where some TANF recipients will lose benefits in December 1998, state officials had not determined as of April 1998 what groups would be among the 20 percent of beneficiaries who would be exempted from the 5-year federal limit on the receipt of benefits. Officials at many of the housing agencies we visited expected welfare reform to affect their revenue, but some found it difficult to separate the effects of welfare reform from those of other economic changes. 
While some housing agency officials attributed recent changes in rental revenue to increases in TANF recipients’ earnings under welfare reform, others ascribed the changes to different causes. In Massachusetts, officials at the Chicopee and Lawrence housing agencies believed their rental revenue was rising because more tenants were working under welfare reform, but the officials were uncertain what would happen in December 1998, when the state’s 2-year time limit went into effect. They reasoned that the people who could go to work fairly easily were doing so but that when the time limit hit, the people who could not find employment at reasonable wages might see their incomes plummet and housing agencies might be faced with falling rental revenue. In Merced County, the executive director said that turnover had increased with the 1995 closing of Castle Air Force Base; however, he attributed the recent exodus of residents on welfare to their need to find work before they lost their TANF benefits. Similarly, the executive director of the New Bedford housing agency attributed its high turnover to the long-term stagnation of the economy in southeastern Massachusetts, while staff said tenants’ Section 8 rental payments had recently decreased because tenants were losing income through sanctions imposed on them for failing to follow the state’s TANF requirements. Officials at the housing agencies we visited also expressed uncertainty about the impact that welfare reform’s tighter eligibility requirements for food stamps could have on housing agencies’ rental revenue. Although food stamps are not considered income in tenants’ rent calculations, changes in food stamp benefits may affect the rents that tenants can afford to pay. In California, where reductions in food stamp benefits were among the few provisions of welfare reform that had been implemented when we visited, the executive director at the Butte County housing agency said that several tenants had moved out after losing their food stamps. In Louisiana, where cash assistance levels are lower than in the other states we visited, housing agency officials and tenants said the loss of food stamps would have a significant impact. One housing agency official in East Baton Rouge said that in the past recipients might have sold their food stamps to meet their cash obligations. While housing agency officials and tenants in several locations said they would expect families who had lost their food stamps to have difficulty paying their rent because they would need their income to buy food, in Boston where affordable housing is scarce, tenants said that families might forgo food in order to pay their rent. Housing agency officials in Lawrence, where much of the private housing is substandard, said they thought their tenants might also pay their rent before buying food. Because of restrictions in food stamp eligibility, some housing agency officials said that uncollected rents or turnover might increase. Housing agencies varied in their estimates of the likelihood that they would be able to evict tenants for not paying their rent. For example, officials at a Minnesota housing agency, a California housing agency, and two Louisiana housing agencies said that they would evict people for not paying rent, while officials at two Massachusetts housing agencies said that the law probably would require them to exempt tenants who could not meet the minimum rent requirements. 
Besides being concerned about possible declines in rental revenue, housing agency officials raised questions about their ability to maintain their public housing units if operating costs rose or HUD’s funding did not fully meet their needs for operating subsidies. Officials at the Kern County, Hibbing, and St. Paul housing agencies said that operating costs might also increase if turnover among residents increased as a result of welfare reform. In addition, when we asked whether HUD would be likely to fully meet the housing agencies’ operating subsidy needs during the next fiscal year, most housing agency finance officials thought that it would not. Some housing agency executive directors and finance directors were also concerned about welfare reform’s impact on revenue from the Section 8 tenant-based assistance program. If the incomes of Section 8 tenants fall, housing agencies cover short-term increases in subsidies with reserves set aside for that purpose. However, officials at the Duluth, Minneapolis, and San Bernardino housing agencies said that in the longer term, the number of households served by the program might have to be reduced. Housing agency staff lack some of the basic information they would need to analyze the impact of welfare reform and said that they do not have resources to devote to collecting and using this kind of information. Although housing agency staff are required to collect information on all tenants’ income sources, family composition, and minority status, they are required to collect information on education only for tenants in the FSS program. Information on education and prior work experience for all tenants would be useful because recent research has shown that education, prior work experience, age, and minority status are important determinants of the speed with which an individual may leave welfare. In addition, housing agency officials would probably need basic information about their state’s welfare reform plan and local employment opportunities. According to housing policy analysts at the National Association of Housing and Redevelopment Officials, if housing agencies had the information, only the large housing agencies would have access to the research skills needed to analyze the impact of welfare reform on tenants’ incomes. New Orleans, a large troubled housing agency, has recently hired a strategic planner, but the finance director at the Butte County housing agency, a small California agency, said he would like HUD to analyze the data for his agency. In addition to information and resources for predicting tenants’ incomes, housing agencies might also need to understand how welfare reform would affect the demand for public and assisted housing. For example, officials at the housing agency in Hibbing, where private housing is inexpensive, said that if the agency’s current tenants become independent of welfare and leave public housing, the housing agency could be left with vacant units or units housing nonworking poor tenants. Then, new tenants might pose greater social and economic problems for the housing agency. Similarly, at the New Bedford housing agency, managers questioned whether public housing could compete with private housing if tenants’ incomes rose. If the housing agencies use HUD’s new rent and admission policies to attract working families, the interaction with the local housing market would become more complex. 
Despite the difficulties, three of the housing agencies we visited had completed some quantitative estimates of the impact of welfare reform. A finance official at the Los Angeles housing agency said that agency officials had estimated the financial impact of the governor’s original welfare reform plan. They projected a 3- to 6-percent loss in rental revenue, not taking into account any reductions in the agency’s operating subsidy. Adding a 3-percent increase for inflation to the 3- to 6-percent decrease in rental income, the Los Angeles housing agency estimated a possible total loss of 6 to 10 percent under welfare reform. These estimates were completed before some benefits were restored for legal immigrants and additional funds were allocated to the states to provide welfare-to-work programs by the Balanced Budget Act of 1997. According to the housing agency’s planning director, the final welfare reform plan, adopted by the California legislature in August 1997, has a stronger safety net than the governor’s original plan. Thus, he does not expect rental revenue to fall by more than 5 percent under the adopted plan. However, because the adopted plan includes many more variables than the governor’s plan, the agency has decided that it is too early to undertake a new detailed estimate at this time. In addition, two Minnesota housing agencies we visited estimated the impact of a state welfare reform provision on their revenue. Under Minnesota’s welfare reform plan, households that receive both TANF and housing assistance were scheduled to have their TANF benefits reduced by $100. In developing their estimates, housing agency staff assumed that the adjusted incomes of tenants receiving TANF benefits would fall by $100. Then, because tenants generally pay 30 percent of their adjusted income in rent, the staff assumed that the housing agencies’ rental revenue would fall by $30 a month for each resident receiving TANF benefits. To calculate the monthly drop in rental revenue, they multiplied the number of TANF recipients by $30. Their estimates assumed that the number of tenants receiving TANF benefits and the adjusted incomes of these tenants would remain fixed over a 12-month period. At the housing agencies we visited, the residents of assisted housing were facing challenges in seeking employment and the housing agencies were struggling to use new rent and admission policies to provide support and incentives for working families. The challenges facing tenants seeking employment included a lack of job readiness skills, basic literacy skills, child care, and transportation. However, officials in welfare and employment offices in the states were developing new programs that could help to address these challenges. In addition, recent appropriations laws have given housing agencies, for a limited time, the flexibility to change some rent rules—such as those that increase rent with every increase in income—that can discourage work. Recent appropriations laws also allow housing agencies to give preference in admission to certain groups, such as working tenants. Thirteen of the housing agencies we visited were using one or more of these rent and admission policies to support or provide incentives for working families. However, four of the housing agencies we visited had encountered obstacles in employing these policies, and few tenants were benefiting from them. 
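The arithmetic behind the Minnesota agencies' estimates can be restated in a short sketch; the $100 monthly benefit reduction and the 30 percent rent share come from the description above, while the household count and function name are hypothetical.

```python
# Minimal sketch of the Minnesota-style estimate described above: if TANF benefits for
# households receiving both TANF and housing assistance fall by $100 a month, and tenants
# generally pay 30 percent of adjusted income in rent, each affected household's rent
# falls by about $30 a month. The household count here is hypothetical.

TANF_REDUCTION_PER_MONTH = 100.00
RENT_SHARE_OF_INCOME = 0.30

def annual_rental_revenue_loss(tanf_households: int) -> float:
    monthly_loss_per_household = RENT_SHARE_OF_INCOME * TANF_REDUCTION_PER_MONTH  # $30 per household
    # As in the agencies' estimates, the number of TANF households and their adjusted
    # incomes are held fixed over a 12-month period.
    return monthly_loss_per_household * tanf_households * 12

print(annual_rental_revenue_loss(tanf_households=400))  # 144000.0
```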
Although most of the housing agency managers and tenants we interviewed believed that entry level and minimum wage jobs existed in their areas, they cited the lack of job readiness skills and lack of work experience as barriers to the success of welfare reform for assisted housing tenants. Research has shown that lack of prior work experience is a major factor in increasing the length of time that people stay on welfare. Recent analysis by staff at Mathematica Policy Research, Incorporated, for GAO also suggests that single mothers on TANF with less prior work experience are less likely than single mothers with more work experience to become employed. These results also show that TANF recipients who receive housing assistance have less work experience than other TANF recipients. See appendix III for a more detailed description of Mathematica’s results. In the states we visited, managers, tenants, service providers, and local government officials cited poor language skills as a significant barrier to moving tenants from welfare to work. In areas with large immigrant populations, such as California; St. Paul and Minneapolis, Minnesota; and New Bedford, Chicopee, and Lawrence, Massachusetts, officials cited lack of literacy in the native language and a shortage of courses in English as a second language (ESL) as difficulties. In California, where employment for those who speak only Spanish is possible in some areas, officials or residents at all six of the housing agencies we visited said that those who did not speak English well would have a difficult time finding employment. Because of economic incentives, church sponsorship, and family ties, St. Paul and Minneapolis attracted large pools of Hmong (from Southeast Asia) and Somali immigrants. Both groups lack proficiency in English in an area where English is necessary for employment. In Massachusetts, where over 25 percent of the public housing residents at the housing agencies we visited were Hispanic, managers, tenants, local government officials, and service providers at three of the four locations cited low literacy skills as a severe challenge. Also, the executive directors of the New Orleans and East Baton Rouge housing agencies, state and local officials, and HUD field officials cited low literacy rates among Louisiana welfare recipients and housing agency tenants as a serious barrier to employment. Lack of affordable child care was also mentioned as a barrier for tenants by housing agency officials and tenants we interviewed. While the federal welfare reform law provides additional child care funds to the states, some housing agency officials and tenants said that access to child care is sometimes a problem. Officials and/or tenants at 10 of the 18 housing agencies we visited said that child care was unavailable or unavailable during late hours or that residents needed child care. Even when child care centers are located at public housing sites, they do not necessarily serve the public housing tenants. For example, in Boston, the resident initiatives director explained that in the past, child care centers in public housing units had not been under contract to reserve a large number of spaces for residents’ children. In addition, tenants did not always have the resources to pay for child care services. He said it was not surprising to find that only 30 percent of the children in the child care centers were public housing tenants. 
In New Bedford, we visited a child care center that was using the housing agency’s space but, at the time of our visit, was not serving any of the development’s children. The executive director of the New Bedford housing agency said that he contracted with the child care center to improve relations with the surrounding neighborhood. Some housing agency officials and tenants said transportation was a barrier to achieving independence from welfare because mass transportation sometimes does not exist from the neighborhoods where public and assisted housing tenants live to those where jobs are likely to be found. In seven locations, either city or housing agency officials and tenants said bus service did not exist, did not operate on a reasonable schedule, or did not reach areas where public and assisted housing tenants live. Some housing agency staff and tenants, interest groups, and federal and local officials thought that welfare reform would have different effects on the tenants of public housing and tenants receiving Section 8 tenant-based assistance. Staff at the New Bedford, Minneapolis, Lawrence, and San Bernardino housing agencies noted that those with tenant-based assistance were more likely to be independent and have more control over their lives, while officials or residents at the Boston, Chicopee, Duluth, and Hibbing housing agencies said that those in public housing could face discrimination because of their address. However, the Director of the Citizens Housing and Planning Association and housing agency officials in Hibbing, Los Angeles, and Kern County noted that Section 8 residents do not have the same access to programs and services as the residents of more concentrated public housing units. In addition, housing agencies can provide space for these activities at public housing developments. Nationwide, the welfare rolls have declined dramatically, and the states have additional budgetary resources to spend on low-income families. Because of the dramatic decline in caseloads, the fixed amounts of the federal grants to the states under the new law, and the maintenance-of-effort provision in TANF requiring the states to provide 75 or 80 percent of their historic level of funding, we estimate that the total assistance—federal and state—available in fiscal year 1997 for states’ low-income programs was about $4.6 billion more than would have been available under the AFDC program. Between January 1996 and September 1997, the welfare rolls decreased by 16 percent in California and Minnesota, 20 percent in Massachusetts, and 47 percent in Louisiana. Although it was too early at the time of our review to tell to what extent the residents of public and assisted housing on TANF were receiving services, welfare and employment officials in the states we visited were implementing new and revised child care, training, and transportation efforts that could address the barriers cited by housing agency managers and tenants. According to a recent survey of state child care agencies, California is offering pilot programs to train TANF recipients to become child care and development teachers, and Minnesota has provided $700,000 for grants to increase the availability of culturally appropriate child care options. Massachusetts is redirecting its Career Centers to serve low-income residents, while Louisiana is using its Family Independence Work Program to provide transportation for TANF recipients attending training or community service activities. 
According to the manager of the state social service office in Caddo Parish, each parish welfare office contracts for transportation services with local providers. In Shreveport, for example, the office contracts with the local bus company to provide TANF recipients with monthly bus passes, while in a more rural area of the same parish, cabs take recipients to training courses.

Beginning with the Continuing Resolution in 1996, the Congress gave housing agencies the flexibility to adopt "local admission preferences" for a limited time. Some of the housing agencies we visited were using this option to give preference in admission to working families, those in employment and training programs, veterans, and persons living in the immediate vicinity of the housing agency. In a 1997 survey of its members, the Public Housing Authorities Directors Association (PHADA) found that about 59 percent of housing agencies surveyed were using local preferences. Of those that were using these preferences, about 40 percent said they were giving preference in admission to households with income from wages. Some of the housing agencies we visited had adopted, with varying success, preferences for working families or for those in training programs. In Kern County, the housing agency staff said local preferences are helping the housing agency move toward creating mixed-income developments. However, an official at the San Bernardino housing agency said the agency is abandoning the local preference for working families after finding it difficult to administer. According to this official, families who initially qualified under the local preference and were put on the waiting list were no longer working when a unit became available. In East Baton Rouge, housing agency staff said some tenants got jobs to become eligible for housing and then quit working as soon as they moved in.

The Continuing Resolution in 1996 and subsequent appropriations legislation have also allowed housing agencies to use ceiling rents and adjustments to earned income. Of the respondents to PHADA's survey, 37 percent said they had implemented ceiling rents, while 12 percent said they had adopted adjustments to earned income. Over 80 percent of the respondents who had implemented ceiling rents and over 70 percent who had adopted adjustments to earned income said they had done so because these tools would help them attract and retain working families. Of the housing agencies we visited, over one-third had implemented ceiling rents and one-third had implemented adjustments to earned income. About half of the housing agency managers we interviewed believed ceiling rents and/or adjustments to earned income would be very or somewhat effective in encouraging tenants to work. But even at the housing agencies where these policies had been adopted, they were relatively new, and few families were enjoying their benefits. For example, at the housing agencies with ceiling rents that we visited, the percentage of tenants paying ceiling rents ranged from less than 1 percent to 6 percent. Housing agency officials in Boston, Duluth, Hibbing, and St. Paul said that adopting ceiling rents and adjustments to earned income would reduce their revenue, since the policies would reduce the amount of rent tenants would pay the housing agency. Officials in Butte and St. Paul also said that the policies were administratively burdensome.
For example, they said that, to administer the adjustment to earned income, they must keep two sets of financial records—one showing their income and expenses with the income adjustment and a second showing their financial position as it would have been without the adjustment. Finally, the Minneapolis housing agency said that it would not actively pursue these policies until after the passage of a federal public and assisted housing reform bill. The Continuing Resolution provided a transition rule that allowed housing agencies to establish ceiling rents pending HUD’s issuance of final regulations. HUD issued a proposed regulation on ceiling rents for public housing in November 1997. According to executive directors at the housing agencies we visited, their primary role is to provide housing, but they adopted broader roles that included providing social services before welfare reform began. However, the types of services and delivery systems varied across the housing agencies we visited. Although housing agencies have adopted broader social service roles consistent with welfare reform, their programs are not fully integrated with their states’ welfare reform efforts. While housing agencies house and provide services to a significant portion of each state’s welfare population, in the states we visited, the housing community had limited involvement in developing the state’s welfare reform plan. In addition, the state and local government offices with welfare reform responsibilities that we visited rarely targeted funds and programs to public housing developments. All of the housing agencies we visited made some use of HUD’s self-sufficiency grant programs for purposes related to welfare reform; however, the large and extra large housing agencies were able to make use of a wider range of programs. Table 2.1 shows the number of housing agencies that used specific HUD self-sufficiency programs. All but two of our selected housing agencies operated an FSS program. As part of their drug prevention efforts, the housing agencies we visited used funds from the Drug Elimination program to set up and operate after-school activities for youth and develop centers to provide some employment opportunities for older youth in the housing developments. For example, in Chicopee, the housing agency uses Drug Elimination funds for summer youth programs. In Boston, the housing agency uses Drug Elimination dollars to fund training centers that provide tenants with training in life skills. Some housing agencies—particularly the larger ones—also received HUD grants for other self-sufficiency efforts, including employment-related demonstration programs, such as Jobs Plus and Moving to Work, and competitive grant programs, such as the Economic Development and Supportive Services (EDSS) program. Housing agency coordinators of resident services said they also provided services to tenants by offering space to outside service providers and using service coordinators to link tenants with services in the surrounding community. However, the extent to which services were provided varied greatly among the housing agencies we visited. In St. Paul, the housing agency provided space for services, including food pantries, ESL classes, Head Start programs, and employment counseling. In Merced, the housing agency’s FSS coordinator is also the president of the county’s Family Resource Council and works with other social service agencies to gain access to services for the housing agency’s tenants. 
The East Baton Rouge housing agency has contracted with several service organizations that provide family mentoring, job placement, and counseling services. The Lawrence housing agency houses a Boys and Girls Club and provides space for employment and training counseling associated with its EDSS grant. About half of the executive directors at the housing agencies and most of the officials at the HUD field offices we visited said they had little or no involvement in developing the welfare reforms of the states they cover. While four of the housing agency executive directors said they were moderately involved, none said they were very involved, and the involvement they described was generally limited. At the Minneapolis housing agency, the manager of the welfare-to-work department said the housing agency was moderately involved with the legislature but had limited involvement with other state officials in developing the state plan. At the Lawrence housing agency, a staff member served on a state senator’s local welfare reform task force. This task force is generally credited with having had a significant impact on the deliberations of the state legislature. In Los Angeles, the planning director said he had limited input into the state plan through the California Housing Authority Association (CHAA) but had a strong relationship with the county welfare system that led to coordinated efforts for the housing agency’s tenants. The executive director of the Kings County housing agency also reported being involved through CHAA, as did directors at two other California housing agencies. A CHAA official said she worked through the California Welfare Directors Association to provide comments on legislative proposals, testified before the state legislature, and communicated regularly with her members. Table 2.2 summarizes the responses of the directors to our questions about their level of involvement in developing their state’s welfare reform plan and their level of satisfaction with this involvement. In California, Massachusetts, and Minnesota, state welfare officials said they did not reach out to the public housing community for input into their state’s welfare reforms, perhaps because housing issues were not central to these reforms. In Louisiana, a state welfare official said she did elicit input from the public housing community at a state housing conference, but housing interests were not represented on the state’s welfare reform task force. In addition, officials at state welfare offices and housing agencies said the states had not targeted funds for employment, training, and support services to housing agencies with large TANF populations; however, TANF recipients with housing assistance are eligible for the same services as other TANF recipients. At the housing agencies we visited, officials were somewhat more likely to be involved at the local level during the implementation of welfare reforms. In Massachusetts, the Deputy Director of the Department of Transitional Assistance said that welfare offices and public housing agencies have always interacted; however, the department is encouraging them to interact and communicate more often around the issue of welfare reform, and they are certain that this is happening. 
In California, where budgetary resources for employment services, supportive services, and training increased by nearly 60 percent in the state fiscal year that began in July 1997 and resources for child care increased by over 125 percent, state officials said counties had the flexibility to involve public housing agencies in developing their local implementation plans. During the development of Merced County’s implementation plan, the executive director of the housing agency drafted a position paper on housing issues and participated in community forums. Housing agencies’ and states’ efforts to move welfare recipients to work are not as well coordinated as they might be. For example, the executive director of the Butte County housing agency said that when housing agency staff approached welfare officials to form a collaboration between the state’s self-sufficiency programs and FSS, the officials said the 5-year FSS program sent the wrong message because California has a 2-year limit on the receipt of TANF benefits. When we visited, staff and tenants at the housing agency were just discovering that they could tailor their program to meet changing needs. The director of the Butte County Department of Social Welfare also described efforts to set up one-stop centers for TANF recipients in collaboration with the local private industry council and the local employment and training office. She said the effort was moving slowly because of a lack of available space; however, she had not brought in the housing agency as a partner. Similarly, in Massachusetts, where the Lawrence housing agency was awarded an $800,000 EDSS grant to move welfare recipients toward employment through an intensive employment and training program, the program’s requirements were not well coordinated with those of the state plan or of other local employment and training efforts. Tenants at the Lawrence housing agency said that participants in the program, who might be mothers of school-age children, were required to participate in 30 hours of training a week, while the state welfare plan required them to work or participate in community service for 20 hours a week. Thus, participants in the program faced the possibility of having to be away from their homes for 50 hours a week. Although the Lawrence area’s private industry council used the same trainer and offered similar programs, the executive director of the Lawrence housing agency said he needed his own program because the state could meet its welfare reform participation rate requirements without ever getting to his tenants. Some housing agencies and local welfare offices are beginning to coordinate more to ensure the success of local welfare reform efforts and housing self-sufficiency programs. These efforts are especially evident at Jobs Plus sites in Los Angeles and St. Paul, where local coordination was required for the housing agency to be included in the program. The Los Angeles social services director said that the Jobs Plus program, which is still in the planning phase, has strengthened the collaboration between the housing agency and other social service agencies. In St. Paul, the welfare officials said they plan to locate a welfare office at the Jobs Plus site. Other housing agencies also report increasing coordination. For example, 12 federally funded housing developments in Boston are working with one of Massachusetts’ Career Centers to offer on-site job search facilities. 
The centers are quasi-public entities responsible for delivering many of the state’s employment services. In Minneapolis, the housing agency is under contract with the county welfare office to provide employment and training services for the tenants. In addition, in Kern County, the new executive director—who previously held a position in the county welfare office—has involved the housing agency in several welfare working groups and has proposed that the county contract with the housing agency to make rental payments for welfare recipients who pass the time limit for receiving TANF benefits but have children who still receive benefits. We asked the 18 executive directors we interviewed to rate their overall expectations about the impact of welfare reform on their housing agencies from significantly positive to significantly negative. Because the number of housing agencies we visited was small, consistent patterns across various characteristics are difficult to discern. However, as table 2.3 indicates, the executive directors of the housing agencies we selected in California and Minnesota were generally more positive about the impact of welfare reform than the executive directors in Massachusetts and Louisiana. In general, the latter—and their tenants—have less time to adapt to welfare reform. Researchers with the Institute for Policy Studies at the Johns Hopkins University have shown that welfare recipients with housing assistance have longer spells on welfare than those without housing assistance. Thus, welfare recipients with housing assistance are more likely than other welfare recipients to reach the limits on their receipt of TANF benefits without having found employment, and their employment prospects worsen as their time limits decline. While Massachusetts and Louisiana will reach their 2-year limits by the end of 1998, Minnesota has a 5-year limit that did not start until July 1997. California has a 2-year limit that did not start until January 1998 and may, in some instances, be extended to 5 years. In addition, California counties were still formulating plans to implement the state’s welfare reform plan when we visited. Recent modeling by Mathematica also shows, on the basis of prior behavior, who will be likely to go to work within the proposed time limits under various welfare plans. According to Mathematica’s analysis, TANF recipients with housing assistance are less likely to leave the welfare rolls, less likely to find jobs, and more likely to have lower incomes than TANF recipients without housing assistance. In addition, local economic conditions may have affected the executive directors’ expectations. Although we visited our housing agencies during a time of high national job growth, some localities were experiencing long-term economic declines that were limiting the job opportunities of welfare recipients. For example, the executive directors of the Duluth and Hibbing housing agencies expected welfare reform to have a negative impact on their housing agencies, even though they were not facing an imminent time limit. However, according to the Hibbing housing agency, the region has been severely affected by a long-term decline in the iron ore industry. Similarly, in Massachusetts, the generally negative expectations of the New Bedford housing agency’s executive director may be attributable to the long-term economic decline in southeastern Massachusetts. 
However, local economic conditions do not seem to have affected the expectations of the executive directors of housing agencies in California’s Central Valley. There, even though unemployment rates were high, the directors’ expectations were generally positive. HUD agreed that, in general, housing agency officials are facing major challenges in understanding and dealing with the potential effects of welfare reform on the recipients of housing assistance and on the housing agencies themselves. However, HUD said that our report could have provided more information in several areas to help HUD improve its performance. For example, HUD identified a need for more guidance on what housing agencies need to know to estimate the impact of welfare reform. HUD also suggested that we include more information on obstacles to the use of rent reform policies and mention that welfare agencies collect and should provide housing agencies with data on recipients’ education levels. We considered HUD’s comments but made no changes because the draft already explained the housing agencies’ reasons for not using rent reform policies—namely, that the policies would reduce the agencies’ revenue, are difficult to administer, and have not been permanently adopted. In addition, the draft report discussed the reasons why demographic information—such as data on tenants’ education, prior work experience, age, and minority status—is useful for housing agencies to know. We also provided chapter 2 of the draft report to the 18 housing agencies we visited for their comments. Nine of the housing agencies responded, and several provided clarifying language and technical corrections. We incorporated their comments as appropriate. In addition, we provided Mathematica with excerpts of the draft report for its technical review and incorporated its technical corrections as appropriate. HUD has a smaller role in welfare reform than the states or some other federal agencies, such as the departments of Health and Human Services and Labor, yet HUD has stated that it is committed to making welfare reform work. HUD’s commitment rests, in part, on the large numbers of tenants who currently receive, but may lose, welfare benefits if they do not find work. The potential reductions in tenants’ incomes from such losses could decrease many housing agencies’ revenue and increase the need for operating subsidies from HUD. To date, HUD has discussed the importance of making welfare reform work in the strategic plan that it developed under the Government Performance and Results Act, redirected several existing programs to emphasize work activities, and emphasized the use of existing programs to achieve welfare reform’s goals. However, some field and housing agency officials whom we interviewed were confused about HUD’s role and said they had not received guidance from HUD. In addition, housing agencies said that some of the programs HUD identifies as relevant to welfare reform are of limited use because of funding and other constraints. HUD officials have begun to coordinate discussions of welfare reform efforts, both internally and externally, but HUD has not developed a comprehensive strategy for bringing its resources for welfare reform together with the funds and programs available through the states and other federal agencies. 
Although HUD has resources—demographic data on tenants, expertise gained through demonstration programs, and staff at the field level—and supports physical facilities for providing services, it has not systematically developed relationships with the states, which have most of the funds for welfare reform. While HUD plans to do its part to make welfare reform succeed, the success or failure of welfare reform does not depend on HUD. The states are the most important players under welfare reform because they have the flexibility under the law to design and implement welfare reform plans and to determine how to use their block grants. HHS is the federal agency that is primarily responsible for assisting the states with their TANF programs and for providing additional funding for social services, such as child care. In addition, the Department of Labor plays a prominent role because of its job training programs and welfare-to-work initiatives. Although HUD recognizes that its role under welfare reform is limited, it has made welfare reform a priority for the Department. Under welfare reform, important responsibilities were shifted from the federal government to the states. As discussed in chapter 1, the states have acquired more flexibility to design their own programs and strategies for aiding needy families, including those for helping welfare recipients move into the workforce. In addition, the states can decide how to allocate their TANF funds between cash assistance and support services, such as employment services and child care. As discussed in chapter 2, the states, on average, have more budgetary resources available under TANF for their low-income family assistance programs than they did under the AFDC program, at least at this time. While welfare reform shifted responsibility to the states, HHS, as discussed in chapter 1, is responsible for overseeing the states’ implementation of the law, and other federal agencies are involved in welfare-to-work efforts. HHS also distributes the majority of the federal funds for social service block grant programs, which are important components of the states’ welfare reform efforts. For example, HHS administers the Child Care and Development Block Grant and the Social Services Block Grant. Additionally, the Department of Labor has a prominent role under welfare reform because it operates jobs programs, such as the welfare-to-work grants, the Job Training Partnership Act (JTPA) Title II-A Adult Training grant program, and the One-Stop Career Center initiative. Other agencies, such as the Department of Transportation and the Small Business Administration, have also initiated efforts to support welfare reform. At least in part because tenants’ incomes could decline under welfare reform and thus potentially lower housing agencies’ revenue, HUD has made the success of welfare reform a priority for the Department. HUD also recognizes that it is in a unique position to assist people moving from welfare to work because its programs—such as public housing, Section 8, and the Community Development Block Grant (CDBG) program—have a physical presence where the poor live. 
HUD's 1998 budget stated that the Department would play its part by pursuing several strategies to make welfare reform work: (1) creating jobs for welfare recipients; (2) using housing assistance and community facilities strategically to link welfare recipients to jobs and to help ensure that work will pay; and (3) providing and leveraging services to link welfare recipients to jobs and to help them stay employed. HUD also discussed the importance of making welfare work in its 1997 HUD 2020 Management Reform Plan. In the plan, HUD stated that it is "the agency with potentially the largest economic development portfolio in the federal government; and the branch that deals most directly with the fate of cities, where most people on welfare live." HUD also said in the plan that its long-term success as an agency will largely depend on the degree to which welfare reform works.

In its September 30, 1997, strategic plan, prepared under the Results Act for fiscal years 1998-2003, HUD proposed a two-pronged approach for implementing welfare reform. First, HUD would create and retain jobs through its economic development programs, such as CDBG, a flexible formula grant program that provides resources to communities; Section 108, which allows communities that receive CDBG grants to leverage private funds for loans for large-scale projects that could result in job creation and community development initiatives; the Economic Development Initiative, a grant program that supplements Section 108; and the planned second round of the Empowerment Zones and Enterprise Communities program, which would focus on moving residents from welfare and poverty to work. Second, HUD would coordinate housing assistance with welfare reform efforts by supporting rent incentives that reward work, encouraging partnerships, and providing services. In the plan, HUD said that it supports changing the public and assisted housing rent rules that discourage work and would encourage housing agencies to use the flexibility they have in establishing rents and managing their units to support the goals of welfare reform. In addition, HUD said that it would encourage partnerships between housing agencies and local social service agencies so that housing agencies do not create redundant case management programs for residents. HUD also discussed how some of its self-sufficiency and housing programs and programs for the homeless provide services for the residents of assisted housing and for homeless people seeking employment.

HUD has provided information to its field offices and housing agencies on welfare reform and how it may affect them and has provided additional guidance during training sessions. However, some field offices and housing agencies we visited did not recall receiving guidance from HUD and were confused about HUD's role and about how HUD's programs can be used to promote welfare reform. In addition, housing interest groups, researchers, and public housing officials discussed the need for data on tenants' characteristics and information on how welfare reform could affect housing agencies.

In October 1996, HUD's Acting Assistant Secretary for Public and Indian Housing and Assistant Secretary for Policy Development and Research issued a package of information to HUD's field offices and housing agencies. This information summarized the major changes resulting from welfare reform and discussed the steps housing agencies could take to adapt to the new environment.
Through the information package, HUD urged housing agencies to learn about their state’s welfare reform plan and to consider how the plan would affect the housing agency and its tenants. HUD also suggested that housing agencies examine how ceiling rents, adjustments to earned income, and local preferences in admission could be used to reinforce the benefits of work. In addition, HUD asked the housing agencies to examine their resources and find out how their facilities could be used in partnership with others in the community. Finally, HUD discussed the importance of having a good working relationship with local public and private service organizations in order to bring resources to the housing agency. HUD also provided guidance on welfare reform during training sessions. For example, officials from the Office of Public and Indian Housing in HUD headquarters provided welfare-to-work training sessions in four states/areas—Massachusetts, New York, California, and Kansas/Iowa. According to these headquarters officials, the training, which they provided for HUD field and housing agency officials from the four states, addressed the notice of funding availability (NOFA) for the grant programs—Drug Elimination, Economic Development and Supportive Services (EDSS), and Tenant Opportunity Program (TOP)—and how these programs could be used to foster coordination with local welfare reform efforts. During the training sessions, participants were also briefed on the federal welfare reform law and their state’s implementing legislation, the possible impact of this legislation on public housing agencies, and best practices in housing and welfare department cooperation. In addition, some field offices arranged their own welfare reform training sessions by inviting state and/or local welfare officials to brief staff, according to Public and Indian Housing officials. The Director of Planning and Coordination for HUD’s Office of Community Planning and Development said that his office included a welfare reform component in training sessions that it held for field office staff in four or five locations during calendar year 1997. Community Planning and Development officials also discussed using the Internet to transmit guidance to HUD’s field offices and provide information on best practices. Although HUD headquarters has made efforts to educate the field offices and housing agencies about welfare reform, some field offices we visited did not recall receiving guidance from HUD and one was confused about HUD’s role and about how HUD’s programs could be used to promote welfare reform. For example, the Director of the Office of Public Housing in HUD’s Louisiana state office said that HUD headquarters did not provide any instructions or direction to his office on welfare reform. He said that his office expected information, such as abstracts on related welfare reform activity, to be sent by HUD headquarters. The HUD Secretary’s representative in the New England field office said that it was hard to answer a question about what guidance on welfare reform her office had received because HUD operates through several different divisions. She said that although she knew welfare reform was a priority for HUD and her office had formed a committee to work on welfare reform, she and her staff were confused about HUD’s role and did not plan to do anything on the subject except what they were told to do. 
HUD field officials in Minnesota and California said that with HUD’s reorganization under way, it was difficult for them to discuss HUD’s role. HUD field officials in San Francisco said that although they had received some written information from HUD headquarters, they got most of their information on welfare reform from meetings of the National Association of Housing and Redevelopment Officials and from television. Through their links with HUD headquarters, on the one hand, and local housing agencies, on the other, HUD field offices are in a position to receive, consolidate, and transmit information and guidance from headquarters and its multiple program offices to the local housing agencies and, in turn, to relay the housing agencies’ questions and concerns to headquarters. As discussed later in this chapter, HUD has taken steps to coordinate its national program offices’ welfare reform efforts, but it has not taken parallel steps to keep its field offices abreast of welfare reform issues. Given the field offices’ proximity to the state welfare offices that administer most of the funds available for implementing welfare reform, vertical as well as horizontal coordination would appear to be in HUD’s best interests. Some of the housing agencies we visited also said they had not received guidance from HUD or were unsure about HUD’s role in welfare reform. For example, the executive director of the New Orleans housing agency said the agency had not received any guidance from HUD, and the executive director of the Bogalusa housing agency and the manager of the welfare-to-work department at the Minneapolis housing agency said most of the guidance their agencies received from HUD arrived over a year ago. Furthermore, executive directors at three housing agencies we visited in Louisiana said they were unsure of, or were struggling to figure out, HUD’s role in welfare reform. According to the executive director of the St. Paul housing agency, HUD is recommending some policies to encourage tenants to stay—such as ceiling rents—and others to encourage them to go—such as the Moving to Opportunity demonstration program, which is evaluating the impact of using Section 8 certificates at five sites to move families into low-poverty areas. According to the executive director of the San Bernardino housing agency, HUD rarely visits the housing agencies and is unable to assist them because of the downsizing occurring at the field level. The executive director said he offered to pay the travel costs for HUD staff so they could provide on-site technical assistance to the housing agency, but the HUD officials said their office’s ethics code prevented them from accepting the offer. Housing interest groups, researchers, and public housing officials discussed the housing agencies’ need for data on tenants’ characteristics and information on how welfare reform could affect housing agencies. Because the recent changes in rent policies have given housing agencies more flexibility in choosing their tenants and because housing agencies now provide or coordinate supportive services as well as provide housing, sound management practices dictate that housing agencies know something about the tenants they serve, according to the Interim Director for the Institute for Policy Studies at Johns Hopkins University and the Co-Director of the Urban Institute’s New Federalism Project. HUD is already in a position to provide data to local housing agencies. 
Through the annual recertification process, housing agencies collect information about individual households—such as their sources of income, family composition, and minority status—that the agencies use primarily to determine rents. The housing agencies are required to submit these data to HUD, and HUD compiles the data into its Multifamily Tenant Characteristics System (MTCS) but does not routinely return the data to the housing agencies. The larger housing agencies tend to keep the data or have their own data systems, but some of the smaller housing agencies do not have the capacity or resources to maintain their own systems. Although HUD has summarized data for each housing agency on the Internet and in printed documents that can be ordered from HUD, eight of the housing agencies we visited said that they do not use HUD’s MTCS data in their operations. While some of the larger housing agencies collect their own data, the smaller ones tend not to collect their own data or use MTCS. As discussed in chapter 2, at most of the locations we visited, housing agency staff said they did not have the resources or expertise to compile and analyze the data to determine the impact of welfare reform. The St. Paul public housing agency commented that the MTCS data on the Internet provide an interesting overview, but the agency is concerned about the accuracy of these data and has had difficulty reading and manipulating them. Policy analysts at the Council of Large Public Housing Authorities and the National Association of Housing and Redevelopment Officials suggested that HUD reformat its data to be more user-friendly. They also said that HUD could gather and disseminate data easily and should consider sending the MTCS data back to the housing agencies along with instructions for analyzing the data to help the agencies develop sound management practices and determine which programs their tenants need to become self-sufficient. Although prior GAO and HUD studies have questioned the reliability and accuracy of HUD’s data, interest group officials and researchers said that the more housing agencies use the data, the more they will demand that the current data problems be corrected. HUD has redirected several self-sufficiency programs to emphasize the importance of coordination for housing agencies and discussed the potential for using some of its other programs to promote welfare reform. HUD also operates four demonstration programs that are testing the impact of providing services on tenants’ ability to move toward self-sufficiency. However, most of the self-sufficiency programs are small, and the opportunities for housing agencies to receive funds are limited. While HUD’s CDBG program provides a steady stream of funding to over 4,000 communities nationwide, the bulk of this funding has historically been used for housing activities and public facilities that have not directly benefited the residents of public and assisted housing. To more closely align its self-sufficiency programs with the goals of welfare reform, HUD has redirected several programs to emphasize the importance for housing agencies of coordinating with local welfare efforts and has proposed new welfare-to-work vouchers. For example, applicants for the 1997 EDSS and TOP grants are required to explain how they will use their grant funds to coordinate programs with the local welfare offices. 
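To illustrate the kind of straightforward tabulation these policy analysts appear to have in mind, the sketch below summarizes a handful of tenant records of the sort collected at recertification. It is a hypothetical illustration only: the field names, record layout, and sample values are assumptions for this example and do not reflect HUD's actual MTCS format.

from dataclasses import dataclass

@dataclass
class TenantRecord:
    household_id: str                 # illustrative fields, not the MTCS layout
    monthly_adjusted_income: float
    receives_tanf: bool

def summarize(records):
    # Tally TANF households and the share of rental revenue they account for,
    # using the general rule that tenants pay 30 percent of adjusted income in rent.
    tanf = [r for r in records if r.receives_tanf]
    total_rent = sum(0.30 * r.monthly_adjusted_income for r in records)
    tanf_rent = sum(0.30 * r.monthly_adjusted_income for r in tanf)
    return {
        "households": len(records),
        "tanf_households": len(tanf),
        "share_of_rent_from_tanf_households": tanf_rent / total_rent if total_rent else 0.0,
    }

sample = [                            # hypothetical records for illustration
    TenantRecord("A-001", 700.0, True),
    TenantRecord("A-002", 1100.0, False),
    TenantRecord("A-003", 650.0, True),
]
print(summarize(sample))

A summary of this kind would show an agency how much of its rental revenue depends on households receiving TANF, a first step toward the kind of impact estimates discussed earlier in this report.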
HUD has broadened the applicability of the Drug Elimination grant so that the funding can be used to develop employment programs that are consistent with local welfare reform efforts. In addition, in its fiscal year 1999 budget, HUD requested 50,000 new welfare-to-work vouchers to help meet the housing needs of those moving from welfare to work. HUD has also discussed ways in which some of its other programs can be used to support welfare reform. For example, HUD said that its core economic development programs, such as the Empowerment Zones/Enterprise Communities, Economic Development Initiative, Section 108, and CDBG programs, have the dual purpose of restoring communities and providing funds for activities that may lead to the creation of jobs. In a December 1996 policy paper, HUD outlined the importance of CDBG—funded at about $4.5 billion for fiscal years 1996 and 1997—as a potential major contributor to employment and training programs that could be used to support welfare reform. In the paper, HUD discussed the flexibility that the CDBG program gives communities to tailor their local programs to fit their particular needs. The paper also emphasized the potential for using CDBG funds in strategies for creating jobs, providing public services, assisting microenterprises, and revitalizing neighborhoods. Finally, HUD operates demonstration programs that are examining how providing services will affect tenants’ ability to move toward self-sufficiency. For example, HUD’s Bridges to Work demonstration program is evaluating the utility of linking inner city jobs with a package of services, such as transportation and child care referrals. The Moving to Opportunity program moves tenants to low-poverty areas, and the Moving to Work and Jobs Plus demonstration programs are evaluating how work incentives or services affect tenants’ ability to move toward self-sufficiency. HUD has tried to refocus its programs targeted toward public and assisted housing to facilitate welfare reform; however, the programs are small and the opportunities for housing agencies to receive funds are limited. Although all 3,200 housing agencies are eligible to apply for the self-sufficiency programs, the grants are modest and very competitive. For example, the Drug Elimination program—funded at $310 million in fiscal year 1998—offers the best odds of receiving funding, since over half of the 889 applicants in fiscal year 1997 received funding. The grant awards ranged from $25,000 to $250,000. However, the FSS program—required for housing agencies that receive additional public housing units or Section 8 certificates and vouchers—provides no funding for services, but $25.2 million is available in fiscal year 1998 for FSS program coordinators. The EDSS program—with $43.6 million available in fiscal year 1998—and the HOPE VI program—with $550 million in fiscal year 1998 funding—were also competitive. For example, in 1997 HUD received 221 applications for EDSS and awarded 112 grants. For HOPE VI , 28 out of 127 applicants received grants in 1997. HUD acknowledges the small-scale nature of these programs but views the funding as a mechanism to leverage other resources. HUD’s demonstration programs serve few sites, as is consistent with their purpose, and are targeted toward large metropolitan areas. Bridges to Work is limited to 5 distressed neighborhoods in large metropolitan areas, and Jobs Plus is restricted to 7 large housing agencies in large metropolitan areas. 
Commonly, the selection criteria for demonstration programs are controlled, and not all housing agencies are eligible to participate. Consequently, given the small number of awards available and the restrictions on participating in the demonstration programs, three of the housing agencies we visited said they sometimes decide not to spend the time developing applications.

While HUD's CDBG program provides a steady stream of funding to over 4,000 communities nationwide, most of the funding has not been used for economic development and public services activities, and data are not available to determine whether the jobs that are created benefit those who formerly received cash assistance. For example, in fiscal year 1994, entitlement communities—which receive 70 percent of the funding—used 36 percent of their funds for housing activities and 23 percent for public works. These communities used only 8 percent of their funds for economic development activities and 13 percent for public service activities. Furthermore, data do not exist to determine whether the jobs created using CDBG funds help those with incomes as low as those of individuals receiving cash assistance. However, HUD's Deputy Assistant Secretary for Policy Development said that HUD is exploring ways to modify its data collection procedures to track jobs created for those receiving TANF benefits. According to the Executive Director of the Council of State Community Development Agencies, it is doubtful that CDBG funds are helping people on welfare get jobs because most jobs created probably go to individuals with incomes at about 80 percent of median income. He said that in most areas, welfare recipients' incomes would be less than 50 percent of median income. Finally, CDBG was mentioned as a source of income by only 2 of the 18 housing agencies we visited.

Internally, HUD department and office program managers meet periodically to coordinate and share information on welfare reform issues. Internal coordination is important for HUD because at least five of its departments and offices have responsibility for self-sufficiency and economic opportunity programs that it believes will support welfare reform. HUD program managers also meet with managers from other federal agencies. However, HUD has not developed a comprehensive strategy for bringing the needs of its tenants on cash assistance to the attention of the state offices that administer most of the funds available for welfare reform. HUD has resources that it could use to leverage benefits for its tenants.

Internally, HUD department and office program managers meet periodically to coordinate and share information on welfare reform issues. For example, the Deputy Assistant Secretary for Policy Development in HUD's Office of Policy Development and Research—who is responsible for coordinating welfare reform activities within HUD and with other federal agencies—and program directors in the Office of Public and Indian Housing and Community Planning and Development said that HUD does not follow a specific process for coordinating welfare reform efforts, but internal coordination takes place through the Department's Welfare Task Force, through the NOFA review process, or informally, in the course of administering programs. HUD's Welfare Reform Task Force met biweekly prior to the passage of welfare reform. After the passage of welfare reform, the group met less frequently but has recently begun to meet again.
Internal coordination also occurs when a NOFA is circulated, before its release, to the various assistant secretaries so they can review and comment on it and look for ways to maximize funding opportunities and provide additional services to support welfare reform. Finally, coordination occurs in administering programs, such as the Jobs Plus demonstration program, which is managed by the Office of Policy Development and Research and is a component of the Moving to Work initiative, administered by the Office of Public and Indian Housing. According to managers of both programs, representatives from the two offices met regularly to develop and share information on the selection and evaluation criteria for both programs. Internal coordination is particularly important for HUD because at least five of its departments and offices have programs that assist housing agencies and their residents. Together, these departments and offices administer 24 programs designed to move tenants toward self-sufficiency. The Office of Public and Indian Housing operates roughly 13 self-sufficiency programs, the Office of Community Planning and Development operates 6 programs, the Office of Housing manages 1 program, the Office of the Secretary’s Office of Labor Relations operates 1 program, and the Office of Policy Development and Research oversees 3 demonstration programs. HUD program managers also coordinate with managers from other federal agencies. For example, the Deputy Assistant Secretary for Policy Development in HUD’s Office of Policy Development and Research serves on the Department of Labor’s Welfare Reform Task Force. HUD’s Office of Labor Relations manager for Step-Up provides work experience through registered apprenticeships and works closely with the Environmental Protection Agency (EPA) to develop a mechanism for creating jobs through Step-Up and EPA’s brownfields cleanup program. In addition, several Office of Public and Indian Housing managers have developed relationships with officials in HHS. For example, HHS’ Office of Community Services and HUD’s Office of Public and Indian Housing developed a partnership between housing agencies and community development corporations to provide EDSS in six communities. In addition, representatives from HHS’ Administration for Children and Families and HUD officials said that they have met several times to discuss issues such as income verification, and HHS officials have provided information on welfare reform at HUD training sessions. The HHS officials said they saw HUD as a proactive agency and were impressed with the way its demonstration programs, such as Bridges to Work and Moving to Work, anticipated the reforms of the welfare system. However, the Branch Chief for the Office of Family Assistance within the Office for Children and Families said that better coordination is needed between all federal agencies and that his office within HHS had been directed to establish a federal welfare reform coordinating body. HUD has also collaborated with the Department of Labor, HHS, and a number of private foundations in the Jobs Plus demonstration program; coordinated efforts on the Bridges to Work demonstration program with the Department of Transportation; provided information to the Small Business Administration in support of its efforts to help women make the transition to work; and signed a memorandum of understanding with the Department of Agriculture that resulted in the delivery of Agriculture’s services at six public housing communities. 
According to HUD officials, the expiration of legislative authority to transfer funds from one federal agency to another has limited interagency coordination. The Joint Funding Simplification Act of 1974, which permitted interagency transfers, expired on February 3, 1985. HUD officials believe that without the ability to move funds from one agency to another, it is difficult for federal agencies to operate joint programs because agencies must separate funds and operate under two sets of federal rules. HUD believes that this requirement makes coordination more challenging at the federal and local levels. The devolution of decision-making authority for cash assistance programs to the states and sometimes to localities has created a new need for HUD and public housing agencies to interact with state and local decisionmakers. In the past, housing agencies carried out federal public and assisted housing programs—relying on dedicated funds from HUD—and seldom interacted with broader community development agencies. Today, as the states exercise greater control over welfare benefits and administer additional funds for employment and supportive services, HUD and the housing agencies have a greater stake in the results of state and local decision-making. To the extent that HUD and housing agencies can reach out and inform state and local decisionmakers of their tenants’ needs, they may be able to reduce the historical isolation of public housing residents from the community at large and help the tenants obtain needed services. Greater interaction between local housing professionals and welfare administrators could also streamline the delivery of services to assisted households and create mutually beneficial opportunities for collaboration. Despite the advantages of working more closely with state agencies, HUD has not developed a comprehensive strategy for bringing the needs of its tenants to the attention of these agencies. Although HUD has an organizational presence at both the national (headquarters) and state (field office) levels, it has not systematically taken advantage of its field structure to establish connections with state welfare offices and agencies that have more resources than it does to provide employment and supportive services. While HUD’s strategic plan and other management documents stress the importance of making welfare reform work and explain how HUD’s own programs can facilitate welfare reform, they do not recognize a role for HUD at the state level and do not include a formal strategy for increasing the states’ awareness of the assisted housing population and for improving coordination among HUD, the states, and the public housing agencies. Such a strategy is critical, for, as we reported in chapter 2, officials at three state welfare offices said they did not reach out to the public housing community for input into state welfare reform plans. In addition, we found little evidence that the states were targeting funds for services to public housing developments. HUD recognizes that it is in a unique position to assist people moving from welfare to work because it has a physical presence where the poor live. Nationwide, in 1996, about one-fourth of the households on AFDC also benefited from housing assistance provided by HUD. In the states we visited, the proportion ranged from a low of 12.1 percent in California to a high of 43.1 percent in Massachusetts. 
In Louisiana and Minnesota, 27.8 percent and 40.1 percent, respectively, of the households on welfare also received housing assistance. Especially in states such as Massachusetts and Minnesota, where many of the same households receive both types of assistance, public housing agencies could use place-based strategies to help welfare recipients move to work. Because HUD’s funding is limited and housing agencies vary in their ability to administer programs, public housing, local government, and interest group officials believe that HUD and housing agencies should establish partnerships with other social service providers to bring services to housing agencies. For example, the Assistant Director of the American Public Welfare Association said that HUD could play a valuable role by marketing housing agencies’ facilities, making them available to state and local providers for the delivery of supportive services. She said that HUD could also help educate service providers by sharing demographic data with them on TANF recipients who reside in public and assisted housing, together with findings from HUD’s demonstration programs. Moreover, according to the Research Director for the Council of Large Public Housing Authorities, the data and expertise HUD has acquired through its supportive service programs could help service providers understand how local housing agencies operate and what their tenants need. Although HUD has provided guidance on welfare reform, it has not ensured that all of the field offices and public housing agencies have received and understood the guidance. As a result, some offices and agencies are confused about HUD’s role under welfare reform. Without vertical as well as horizontal coordination within HUD, information available at the national level may not be reaching the field and local levels, and the field offices and local housing agencies may be missing opportunities to obtain funds or services for their tenants from the states, other federal agencies, or HUD itself. With greater emphasis on vertical coordination, field officials might also be encouraged to consolidate and clarify information and guidance from HUD’s multiple national program offices for the local housing agencies within each field office’s jurisdiction. Just as HUD has made guidance on welfare reform available to the field offices and housing agencies but not followed through to make sure they have received and understood the guidance, so the Department has made data available to the housing agencies but not followed through to make sure they are using the data. HUD has made summaries of the data that it collects from public housing agencies available electronically and in printed documents, but the agencies are not using the data. Many of the agencies, particularly smaller ones, lack experience in analyzing the data and in translating the results of analyses into actions—such as establishing appropriate self-sufficiency programs or rent policies, as discussed in chapter 2. Providing the agencies with data and guidance for analyzing the data could assist them in assessing the impact of welfare reform on their tenants’ incomes and their own rental revenue. While such an effort might take time in the short run, it could pay off in the long run by equipping the agencies to monitor, analyze, and respond to the needs of their tenants and thus to operate more independently and effectively in the future. 
In the states that we visited, public housing agencies’ historical lack of involvement in state and local decision-making has continued under welfare reform, as we learned from the agencies’ executive directors, most of whom did not help to develop their state’s welfare reforms. Now, as the states implement their welfare reforms, the agencies may remain on the sidelines unless HUD makes a comprehensive effort to let the state offices know that, in many locations, housing agencies could provide good places for delivering services. HUD can use its resources—data, expertise, and staff at the state level—and encourage housing agencies to use their physical facilities, to build links with the state offices and leverage federal and state funds for tenants. For example, HUD can rely on staff in its field offices to contact state offices and statewide service providers to market the benefits of using assisted housing developments as places to deliver services related to welfare reform. Similarly, HUD can systematically encourage housing agency officials to initiate such contacts at the local level. To assist public housing agencies in their efforts to help residents move from welfare to work, GAO recommends that the Secretary of Housing and Urban Development increase communications with field offices and housing agencies to clarify HUD’s role in welfare reform, explain how current programs can be used to complement welfare reform efforts, and identify sources of information about other federal welfare reform efforts; provide additional technical assistance and data on tenants’ characteristics along with guidance that would help housing agencies use the data to assist in managing the units and in determining what impact welfare reform might have on the agencies; and develop a comprehensive strategy that relies on each field office to promote the benefits of using assisted housing developments as places to deliver services related to welfare reform and to help link other field office and housing agency staff with federal, state and local welfare reform efforts. HUD commended us for the report’s overall conclusions and said that they reflect many of the agency’s own concerns. HUD was also pleased that the report recognized the Department’s commitment to making welfare reform work. According to HUD, it is important that our report recognizes the need for a great deal of coordination within HUD; between HUD and the housing agencies; and among HUD, the housing agencies, and the other players in the welfare reform effort. In addition, HUD said that all three of the report’s recommendations have a great deal of merit and that it plans to implement them. HUD did not believe that the draft report sufficiently acknowledged the initiatives undertaken by the Department to deal with welfare reform. For example, HUD said the report did not address (1) departmental legislative proposals containing a number of provisions related to welfare reform and (2) new program initiatives undertaken or planned by HUD’s Office of Public and Indian Housing and Office of Policy Development and Research. In addition, HUD said the report did not sufficiently acknowledge the numerous efforts taken by the Department to coordinate with other federal agencies and that references to “informal” coordination seemed inadequate. 
After reviewing HUD’s comments, we added additional references to HUD’s legislative proposals in the introductory chapter; however, EDSS and TOP, the welfare-to-work vouchers, and the expanded empowerment zones were already mentioned in this chapter. We considered the comments about HUD’s new program initiatives but concluded that the report already included the primary efforts that could be documented at the time of our review. In response to HUD’s comments about its coordination with other federal agencies, we expanded our description of HUD’s efforts to coordinate with other federal agencies and eliminated references to “informal” external coordination. We also added several additional examples of HUD’s external coordination efforts. Finally, HUD said that it has several efforts under way or about to begin that will result in information sharing and will make use of the lessons in our report. For example, HUD said that Policy Development and Research staff are preparing guidelines for housing agencies to help them look at the data they have in hand and the data they might need to gather to do their own assessments of the impact of welfare reform on their rental revenue. In addition, Public and Indian Housing staff are finalizing a best practices guidebook on welfare-to-work programs and techniques being used in public housing agencies. HUD said that it has also begun work on a book of welfare-to-work case studies that will expand the scope of the best practices guidebook to show how a variety of HUD funding sources are already being used to help families on welfare make the transition to work. | Pursuant to a legislative requirement, GAO reviewed the implications of welfare reform on public housing agencies and their tenants, focusing on the: (1) impact of welfare reform on the revenue, employment status of tenants, and roles of selected housing agencies; and (2) Department of Housing and Urban Development's (HUD) role in assisting housing agencies and their clients as they adapt to welfare reform. 
GAO noted that: (1) it is too early to be certain what impact welfare reform will have on the revenue of the housing agencies that GAO selected, the employment status of their tenants, and the roles of these housing agencies; (2) most of the agencies had not attempted to estimate welfare reform's impact on their revenue for multiple reasons, including a lack of resources to undertake detailed analyses of the impact of their state's welfare reform plan; (3) welfare rolls had declined in the states that GAO visited, and state officials described services being provided to help Temporary Assistance for Needy Families (TANF) recipients overcome obstacles to employment; (4) housing agency officials, residents, and others believed that tenants would face significant challenges in moving from welfare to work; (5) their concerns are supported by research, based on past behavior, which shows that welfare recipients with housing assistance tend to have longer stays on welfare than those without housing assistance; (6) executive directors recognized that the role of housing agencies increasingly includes providing social services as well as housing; (7) however, agencies' social service activities were generally operated separately from states' welfare reform efforts; (8) the agencies that GAO visited had limited involvement in their state's welfare reform efforts; (9) state and local government offices with welfare reform responsibilities rarely targeted funds and programs to public housing developments; however, TANF recipients with housing assistance are eligible for the same services as other TANF recipients; (10) HUD has a smaller role in welfare reform than the states or some other federal agencies; however, HUD said that it is committed to making welfare reform work; (11) HUD's role is driven, in part, by the large numbers of tenants who currently receive welfare benefits whose incomes will decline if they do not find jobs or other sources of income within the time limits; (12) HUD's own financial status depends, to some extent, on these tenants' success in replacing welfare benefits with earnings; (13) to date, HUD has emphasized the importance of welfare reform in at least two strategic planning documents, issued guidance on welfare reform, redirected some programs to focus on welfare reform, and begun to coordinate its welfare reform activities internally and externally; and (14) HUD's strategic plans do not include a comprehensive strategy for bringing together HUD's resources for welfare reform and the funds and programs available from the states and other federal agencies. |
In October 2004, Congress included a provision in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 that required the Secretary of Defense to develop a comprehensive policy for DOD on the prevention of and response to sexual assaults involving members of the armed forces. In part, the legislation required DOD to develop a uniform definition of “sexual assault” for all the armed forces and submit an annual report to Congress on reported sexual assault incidents involving members of the armed forces. The statute also required the Secretaries of the military departments to prescribe procedures for confidentially reporting sexual assault incidents. DOD issued its first annual report to Congress in May 2005, and in August 2008 we conducted a review that, among other things, evaluated the extent to which DOD had visibility and exercised oversight over reports of sexual assault involving servicemembers. We found that DOD’s annual reports to Congress may not effectively characterize incidents of sexual assault in the military services because the department had not clearly articulated a consistent methodology for reporting incidents and because the means of presentation for some of the data did not facilitate their comparison. Further, we found that while DOD’s annual reports to Congress included data on the total number of restricted and unrestricted reported incidents of sexual assault, meaningful comparisons of the data could not be made because the offices providing the data to DOD measured incidents of sexual assault differently. As a result, we recommended that DOD improve the usefulness of its annual report as an oversight tool by establishing baseline data to permit analysis of data over time. DOD concurred with this recommendation and has taken steps to develop baseline data through the development of DSAID. Also, in 2008, Congress mandated that DOD implement a centralized, case-level database for the collection and maintenance of information regarding sexual assault involving a member of the armed forces. Additional mandates have since required the DOD-wide collection of additional data, such as case disposition and military protective orders for annual reporting purposes. We conducted a review of DOD’s efforts to implement a centralized sexual assault database, and in 2010 we reported that while DOD had taken steps to begin acquiring a centralized sexual assault database it did not meet the statutory requirement to establish the database by January 2010. Moreover, we found that DOD’s acquisition and implementation of DSAID did not fully incorporate key information technology practices related to the following: economic justification, requirements development and management, risk management, and test management. DOD concurred with all of our findings and recommendations and has taken some actions to address them. Economic justification: We found that DOD’s cost estimate for DSAID ($12.6 million) did not include all costs over the system’s life cycle and had not been adjusted to account for program risks. In 2012, DOD reassessed DSAID’s costs to include additional expenses not included in the original estimate; however, as of November 2016, DOD has not been able to provide DSAID life cycle documentation that would demonstrate that DOD had taken steps to ensure that all costs and program risks were accounted for. 
Requirements development and management: We found that DOD had taken initial steps to engage some users in the development of DSAID requirements and in 2009 had developed its initial requirements management plan. DOD’s initial requirements management plan established processes and guidelines for requirements management activities. This plan has been updated three times, with the latest update in January 2016. DOD has some systematic methods in place for tracking user feedback, which is a key step in identifying system requirements. In addition, DOD has elicited feedback on users’ experience with DSAID since the database’s implementation. For example, in 2012, 2013, and 2015 DOD collected nongeneralizable feedback from DSAID users, including SARCs, SAPR program managers, and the military services’ legal officers. Risk management: We found that during development of the system, DOD had begun to identify key risks such as staffing shortages and competing priorities among the military services. In 2011, DOD developed a risk management plan that identified risks associated with DSAID. In the risk management plan, DOD assigned probability and impact ratings to some of the identified risks. In addition, DOD reported discussing program risks and technical risks to the database at its management meetings and has an issues tracker to track, among other things, risks to the database. However, as of August 2016, DOD had not demonstrated that it had established and implemented defined processes for mitigating risks identified in its risk management plan. Test management: At the time of our review in 2010, DOD officials told us that they were planning, but had not started, to work with a development contractor to establish an effective test management structure, develop test plans, and capture and resolve problems found during testing. As of October 2016, DOD had developed several test management plans. As of October 2013, DOD had implemented DSAID across the military services, and the military services were using it to track and collect data on sexual assault cases. DSAID has since been used to generate data included in DOD’s Annual Reports on Sexual Assault in the Military for Fiscal Years 2014 and 2015, DOD’s Fiscal Year 2014 Report to the President of the United States on Sexual Assault Prevention and Response, and DOD’s Annual Report on Sexual Harassment and Violence at the Military Service Academies for Academic Program Year 2014-15. DSAID captures DOD-wide data on reports of sexual assault that allow victims to receive treatment and services. Reports can be “restricted” (i.e., confidential reporting of alleged sexual assault without initiating an investigation) or “unrestricted” (i.e., nonconfidential reporting that may initiate an investigation). Reports of sexual assault included are those in which either the victim of the assault or the subject of the investigation is a member of the armed forces or, in some cases, in which the victim is a servicemember’s spouse or adult family member or is a DOD civilian or contractor. Data are input into DSAID through both manual and automated data entry processes, and include, as applicable, victim and referral support information; investigative and incident information; and case outcome data. DSAID cases are originated by SARCs, based on a report of sexual assault made by a victim to a SARC, a military service Sexual Assault Prevention and Response (SAPR) victim advocate, or military criminal investigative organization (MCIO) investigator. 
Generally, victim data are manually input into DSAID by SARCs and investigative data are collected by each military service’s MCIO and transferred into DSAID through an automated interface process. For details on DOD’s process for inputting data elements into DSAID, see figure 1. DSAID can be accessed only by authorized users, who are assigned different access rights depending on their roles and responsibilities pertaining to the collection of sexual assault data. SARCs with DSAID access are required to have a valid DOD Sexual Assault Advocate Certification, and all DSAID users must meet background check and Privacy Act/Personally Identifiable Information training requirements as well as complete user-role-specific system training. According to DOD officials, as of July 19, 2016, DSAID had 1,009 users, including 938 SARCs; 34 program managers; 11 SAPRO analysts; 25 military service legal officers; and 1 SAPRO super user. See table 1 for a description of the roles and access rights for each of these user groups. Since 2012, DOD has taken several steps to standardize the use of DSAID throughout the department, including the development of (1) policies, processes, and procedures for use of the system; (2) training for system users; and (3) processes for monitoring the completeness of data. Since its implementation, DOD has developed multiple policies, processes, and procedures to guide the use of DSAID. Specifically, DOD’s sexual assault prevention and response instruction requires that information about sexual assaults reported to DOD involving persons covered by the instruction be entered into DSAID, and also established rules for DSAID access and procedures for entering data. Similarly, three of the services have added, and officials from one of the services told us that they are in the process of adding, language to their military service-specific sexual assault guidance requiring the use of DSAID. To assist users, DOD also developed a DSAID user manual that is revised with each new system update. Further, DOD’s instruction on the investigation of adult sexual assault requires MCIOs to ensure that data obtained through unrestricted reports of sexual assault, such as the investigative case number, are available for incorporation into DSAID. According to DOD officials, this instruction is currently being reviewed to provide more specific instructions to investigators. Further, DOD has instituted formal processes to facilitate changes to DSAID. In 2011—prior to DSAID becoming fully operational—DOD established its DSAID Change Control Board, which provides a framework to formally manage updates or modifications to the system, and includes representation from each of the military services. The board has a formal charter, is to use established processes and procedures, and members are to meet monthly to discuss proposed changes to the system. In order for a change to be approved, a majority of members must agree to the modification unless there is a legislative or DOD mandate for modification. Through its change control processes, the board approves, prioritizes, and implements change requests. As of October 2016, there have been 135 change requests submitted since the system became operational in 2013—56 of which have been implemented through the change control process. DOD has developed and conducted several training courses for DSAID users. Initially, DSAID users were required to attend in-person training on DSAID prior to being granted access to the system. 
However, as of April 2013, DOD converted this required training from in-person to web-based, self-guided training that consists of simulations demonstrating DSAID’s capabilities. Further, a DOD official told us that SAPRO conducts in-person training for program managers as well as virtual training for military services’ legal officers. In addition, since June 2013, DOD has hosted a regular webinar series to inform and train users on a range of DSAID topics, including policy, new releases, and updates to DSAID. According to DOD officials, as of April 2016, DOD had implemented required annual refresher training for program managers and military service legal officers, and they are considering conducting required refresher training for SARCs. DOD and each of the military services have developed processes for monitoring the completeness of data input into DSAID. The primary tool used to monitor DSAID data is DOD’s DSAID quality assurance tool. This tool allows users to run point-in-time reports that identify missing data in DSAID; validate the accuracy of selected data fields; and perform cross-checks of selected data fields to identify potential conflicts of information. Officials from each of the services’ headquarters-level SAPR offices said that they distribute quality assurance reports monthly to their installations and request that SARCs correct any issues identified before the next monthly report is generated. According to DOD and SAPR officials for two of the military services, these reports allow them to identify trends in data quality issues. SAPR officials for two of the military services also told us that they use quality assurance reports to perform more targeted training to address installation-specific needs. According to DOD officials, data errors identified by the quality assurance tool provide DOD and the military services with the opportunity to fix or improve data entry quality and processes. For example, DOD recently used the quality assurance tool to identify some cases without any subject record. As a result, DOD officials have made plans to meet and develop solutions. Additionally, DOD officials conduct regular manual and automated data validation checks of DSAID to help ensure that sensitive information is protected as well as to help ensure the general integrity of the data. Officials from three of the military services’ SAPR offices also told us that they conduct military service-specific reviews of DSAID data on an ongoing basis to help ensure their completeness and accuracy, and to identify any systemic issues. DSAID users have identified a variety of technical challenges with the system, and DOD officials told us they have plans to spend approximately $8.5 million to address most of these issues in fiscal years 2017 and 2018. Some of the key technical challenges users have reported experiencing with the system are related to DSAID’s system speed and ease of use; interfaces with MCIO databases; utility as a case management tool; and users’ ability to query data and generate reports. DOD has plans in place to implement modifications to DSAID that are expected to alleviate these challenges; however, officials stated that they will not get approval to fund these modifications until they have conducted an analysis of alternatives in line with DOD’s acquisition policy framework. 
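The quality assurance reports described above are essentially completeness and cross-field consistency checks over case records. The following minimal sketch, written in Python, illustrates that kind of check; the field names and validation rules are hypothetical illustrations for this discussion, not DSAID’s actual schema or logic.

```python
# Illustrative sketch only: hypothetical field names and rules, not DSAID's actual schema or logic.
from typing import Dict, List

REQUIRED_FIELDS = ["case_id", "report_type", "incident_date", "victim_service"]

def quality_assurance_report(cases: List[Dict]) -> List[str]:
    """Return point-in-time findings: missing required data and cross-field conflicts."""
    findings = []
    for case in cases:
        case_id = case.get("case_id", "<no id>")
        # Completeness check: flag any required field that is missing or blank.
        for field in REQUIRED_FIELDS:
            if not case.get(field):
                findings.append(f"{case_id}: missing {field}")
        # Cross-check: an unrestricted report is expected to carry an investigation case number,
        # while a restricted (confidential) report is not.
        if case.get("report_type") == "unrestricted" and not case.get("mcio_case_number"):
            findings.append(f"{case_id}: unrestricted report lacks an investigation case number")
        if case.get("report_type") == "restricted" and case.get("mcio_case_number"):
            findings.append(f"{case_id}: restricted report carries an investigation case number")
    return findings

sample = [
    {"case_id": "A-001", "report_type": "unrestricted", "incident_date": "2016-01-15",
     "victim_service": "Army", "mcio_case_number": "CID-123"},
    {"case_id": "A-002", "report_type": "restricted", "incident_date": "",
     "victim_service": "Navy"},
]
for finding in quality_assurance_report(sample):
    print(finding)  # e.g., "A-002: missing incident_date"
```

Run monthly and distributed to installations, output of roughly this kind corresponds to the reports that SARCs are asked to correct before the next cycle.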
DOD’s acquisition policy framework and the GAO Cost Estimating and Assessment Guide outline key elements that should be included in such an analysis, such as relative lifecycle costs and benefits; the methods and rationale for quantifying the lifecycle costs and benefits; the effect and value of cost, schedule, and performance tradeoffs; the sensitivity to changes in assumptions; and risk factors for any proposed modifications. DOD plans to complete the first draft of this analysis by the end of November 2016. Based on our review of the nearly 600 DSAID help desk tickets that were generated from January 2015 through April 2016; DSAID change requests; user feedback reports; interviews with SARCs, program managers, and DOD officials; and our first-hand observations made during visits to selected installations, we identified technical challenges that users reported with DSAID that hinder its use across the military services. These challenges are related to DSAID’s system speed and ease of use; interfaces with MCIO databases; DSAID’s utility as a case management tool; and users’ ability to query data and generate reports. System speed: According to our review of DSAID help desk tickets and interviews with service SAPR officials and SARCs, DSAID’s slow system speed presented a challenge in efficient use of the database. Specifically, users reported that slow system speed caused them to spend an inordinate amount of time on data input, and limited their ability to save data and run reports because the system is programmed to time out after a certain period. In our review of the DSAID developer’s monthly system performance reports from December 2014 through January 2016, we learned that DSAID is rebooted on an almost daily basis to prevent or minimize system slowdown. Further, in our review of nearly 600 help desk tickets, we found that DSAID’s slow system speed was one of the challenges cited by users. Users reported that, due to issues with system speed, it was cumbersome to perform their required DSAID functions along with other job responsibilities. For example, SARCs we interviewed representing 7 of the 13 installations said that, in their estimation, DSAID’s slow system speed regularly resulted in data input taking up to two to three times longer than it should have. Additionally, according to interviews with military service officials and SARCs representing 8 of the 13 installations, computers would frequently time out during the lengthy period of time it took to input and save data in DSAID and, if all of the fields required to save a record were not complete, the time-out would result in the need to reenter the data. In addition, officials from the Department of the Army told us they were unable to run all-Army reports during the last half of 2015 because DSAID would time out before a full report could be generated. Therefore, according to DOD SAPRO officials, SAPRO ran the Army’s reports on the Army’s behalf on a regular basis, and in February 2016, SAPRO resolved this immediate issue by implementing a report scheduler capability in DSAID. According to DOD SAPRO officials, this capability allowed the Army and the other military services to run the full report without timing out. DOD SAPRO officials acknowledge the latency issue overall and are addressing it with software and server upgrades that are designed to reduce page load time and ease the burden of data entry on SARCs. 
DOD SAPRO officials stated that the software upgrades are scheduled for completion in December 2016 and server upgrades in early 2017, but DOD officials emphasized that when DSAID users experience slow system speeds, the cause can also be an issue with the user’s local network rather than with DSAID. Ease of use: DSAID users we interviewed and DOD documents that we reviewed cited the inability to easily navigate DSAID as a challenge. According to a DSAID user feedback report, in 2015 the biggest issue SARCs reported to their military service headquarters officials was that the DSAID user interface and navigation could be improved. This was supported by SARCs we met with from 7 of the 13 installations, who said that it is easy to miss or skip data fields and pages because the logic flow from one page to the next in DSAID is not intuitive, often leaving those SARCs unsure of how much progress they have made in completing a case record. For example, during our site visit to one installation, we observed instances in which the selection of certain data elements would trigger other data fields that needed to be completed, but the system did not prompt the user that additional data were required. Additionally, Army headquarters officials raised concerns with DSAID’s ease of use, stating that improvements to the system’s flow would increase data accuracy by ensuring that users enter relevant information when a case is initiated and would also decrease the frequency with which “relevant data missing” is noted in DOD’s annual report. However, DOD officials stated that they are limited in their ability to make some changes to DSAID’s workflow because DSAID is a commercial-off-the-shelf system, which does not allow for such customization. Automated interfaces with military service investigative databases: Based on DSAID quality assurance tool reports generated by the military services for the first three quarters of fiscal year 2016 and discussions we had with DSAID users, we found that DSAID data that are populated through interfaces with MCIO databases vary in completeness. According to military service officials and SARCs, this is because MCIO data systems are not required to capture the same information that is required for DSAID. For example, an official from one MCIO said that investigators are not required to complete the data field in their database for whether alcohol was used by the subject or victim; however, this same data field in DSAID is designed to be populated automatically with data from the MCIO databases. Further, in September 2014, DSAID was modified to address technical issues with the interface by allowing SARCs or program managers to manually enter these data in instances where MCIO data are regularly omitted. However, according to DOD officials, any additional data received from the MCIOs during the weekly interface will overwrite what the SARC or program manager has entered, as the MCIO is the authoritative source for such data. Utility as a case management tool: According to DOD documents, DSAID is intended to be a case management tool, and according to DOD’s Fiscal Year 2015 annual report, DSAID enhances a SARC’s ability to provide comprehensive and standardized victim case management; however, users of DSAID told us that the system is of limited usefulness for case management. According to the DSAID user manual, the system allows for case management in that it enables a victim’s incident and referral data to seamlessly transition between locations or SARCs. 
While DSAID is DOD’s system of record and the only system in which SARCs are permitted to maintain case data, according to headquarters-level officials from the Army, the Navy, and the Air Force, and according to Marine Corps SARCs, DSAID is of limited usefulness to the personnel working with victims. Specifically, officials stated that DSAID does not provide the requisite functionality, such as the ability to input case notes to manage individual cases. Officials from one service’s headquarters-level SAPR office said that this functionality would be helpful in ensuring continuity of victim case management. However, according to DOD SAPRO officials, as of December 2016, a change request to the change control board to add the functionality had not been submitted. SARCs we met with from 9 of the 13 installations similarly stated that DSAID is missing basic elements of standard case management systems, such as the ability to document victim outreach or record unique incident details that may inform referrals for care or other support services. Further, SARCs we interviewed from 8 of the 13 installations indicated they would, at a minimum, like the ability to document how and when they met with victims to track the level of service victims were provided. DOD officials told us that narrative information in DSAID was limited by design to protect the victim. According to DOD officials, there is concern that the phrasing of narrative information could inadvertently harm the victim if DSAID data were subpoenaed in the course of legal proceedings. Additionally, DOD officials noted that there have not been any change requests made to the change control board for the addition of case management elements to DSAID. Data query and reporting: According to DOD officials, DSAID has been used to provide data for congressionally mandated reports, produce ad-hoc queries, facilitate trend analysis, and support program planning analysis and management. However, officials from three services’ headquarters-level SAPR offices told us that DSAID’s reporting capabilities are limited. As a result, many users told us they have developed their own tools to track sexual assault cases outside of DSAID. For example, the Army’s and the Marine Corps’ SAPR offices have each developed “dashboard” systems that use raw data from DSAID to identify specific trends that are useful to their leadership and accessible by SARCs for their military service. Further, SARCs we met with from 11 of the 13 installations told us that they keep informal “databases” or hard copy documents on their cases in order to brief their leadership, because they cannot run ad-hoc queries and reports in DSAID of the cases for which they are responsible. According to the SARCs, these trackers duplicate key data points input into DSAID (e.g., victim demographics, type of assault, and involvement of alcohol), but do not include any personally identifiable information. According to DOD officials, they plan to spend $8.5 million in fiscal years 2017 and 2018 on DSAID modifications, which officials stated should alleviate most of the technical challenges identified by users. See table 2 for a complete list of DOD’s initiatives planned for fiscal years 2017 and 2018 and the purpose of these initiatives. 
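The dashboards and informal trackers described above amount to grouping and counting case records by a few non-identifying attributes. The sketch below, in Python, shows that kind of ad-hoc summary; the field names and values are hypothetical examples, not the services’ actual dashboard logic or DSAID’s data model.

```python
# Illustrative sketch only: hypothetical fields and values, not an actual service dashboard.
from collections import Counter
from typing import Dict, Iterable, Tuple

def summarize_cases(cases: Iterable[Dict], keys: Tuple[str, ...]) -> Counter:
    """Count cases grouped by a combination of attributes (no personally identifiable information)."""
    return Counter(tuple(case.get(key, "unknown") for key in keys) for case in cases)

cases = [
    {"installation": "Installation A", "report_type": "unrestricted", "alcohol_involved": True},
    {"installation": "Installation A", "report_type": "restricted", "alcohol_involved": False},
    {"installation": "Installation B", "report_type": "unrestricted", "alcohol_involved": True},
]

# A trend a SARC might brief to leadership: case counts by installation and report type.
for group, count in summarize_cases(cases, ("installation", "report_type")).items():
    print(group, count)
```

A query and reporting capability of roughly this shape, built into DSAID itself, is what users said they currently reproduce by hand in local trackers.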
DOD plans to spend $3.59 million in fiscal year 2017 and $4.916 million in fiscal year 2018 for specific modifications to DSAID to support the initiatives listed in table 2, and officials are beginning to conduct a formal analysis to identify the costs and benefits of alternative options for implementing each modification. (See app. I for a list of modifications to DSAID that DOD plans to implement to support the initiatives in fiscal year 2017.) For example, a DOD official stated that DOD plans to implement an encrypted file storage mechanism for DSAID, but officials have not yet determined how they plan to do this. Rather, this DOD official stated that the analysis of alternatives will establish options for this mechanism and weigh the costs and benefits. This DOD official also stated that the Defense Human Resources Activity—which is responsible for funding DSAID—will not approve DOD to spend resources on the individual modifications until an analysis of alternatives addressing each modification is conducted. According to this DOD official, the initial draft of this analysis will be completed by the end of November 2016, and all planned modifications can be implemented within the planned available budgetary resources. DOD officials based these budgets on rough-order-of-magnitude cost estimates that were derived from costs they have experienced in recent years. For example, costs for adding a module in DSAID to document reports involving retaliation were based on DOD’s costs for building DSAID’s legal officer module, which was purchased in 2013. The GAO Cost Estimating and Assessment Guide states that rough-order-of-magnitude estimates are useful to support “what-if” analyses and can be developed for a particular phase or portion of an estimate, but unlike an analysis of alternatives, they do not rise to the level of analysis recommended by best practices to support an investment decision and are not considered budget-quality estimates. In addition to DOD acquisition requirements, an analysis of alternatives is supported by the GAO Cost Estimating and Assessment Guide. These documents identify key elements that should be included in this analysis. For example, an organization should identify relative lifecycle costs and benefits; methods and the rationale for quantifying the lifecycle costs and benefits; the effect and value of cost, schedule, and performance tradeoffs; sensitivity to changes in assumptions; and risk factors. Further, according to GAO guidance, a comparative analysis of alternatives is essential for validating decisions to sustain or enhance a program. Because these elements are part of DOD’s acquisition requirements, if DOD’s analysis of alternatives complies with these requirements, it should incorporate these key elements. Conducting a comparative analysis of alternatives, including identifying and quantifying lifecycle costs and benefits and weighing the cost, schedule, and performance tradeoffs, is key to ensuring that DOD appropriately manages its modifications to DSAID. In 2010, we found that DOD had failed to demonstrate adherence to these key elements in the initial development and implementation of DSAID. By the end of fiscal year 2018, DOD spending on DSAID will exceed DOD’s revised 2012 cost estimate by over $13 million. In 2010, we reported that DOD estimated development and implementation of DSAID to cost $12.6 million, but DOD’s estimate did not include costs for program management or sustainment and for lifecycle costs such as operations and maintenance. 
DOD documentation shows that, in December 2012, DOD adjusted its estimate to $17.9 million to reflect research and development costs for fiscal years 2011 and 2012 and operations and maintenance costs for fiscal years 2013 through 2018. As of November 2016, DOD projected that it will have spent a total of approximately $31.5 million on implementing and maintaining DSAID through fiscal year 2018. This is approximately $13 million more than the revised 2012 estimate. If DOD conducts an accurate and complete analysis of alternatives, it should result in more precise cost estimates for planned enhancements. DOD’s plan to conduct an analysis of alternatives that adheres to the department’s acquisition framework and adequately considers key elements identified in the GAO Cost Estimating and Assessment Guide, as DOD officials have stated that their analysis will do, should position DOD to more accurately assess whether planned modifications to DSAID can be implemented within budget and with the desired outcome. DOD manages modifications to DSAID through its change management process, which we found, based on our review of DOD documentation, substantially aligns with the elements described in the project management and information technology industry standards that we reviewed. “Change management” is the process of controlling changes requested to work products to help ensure that project baselines are maintained. According to the PMBOK® Guide, the activity of change control allows for documented changes within the project to be considered in an integrated fashion while reducing project risk, which often arises from changes made without consideration of the overall project objectives or plans. Configuration management activities can be included as part of an organization’s change control process. While change control is focused on managing project change, such as identifying, documenting, and approving or rejecting changes to the project documents, deliverables, or baselines, configuration management is typically focused on managing changes to a configuration item or system. Industry standards include descriptions of the following elements of change and configuration management that are applicable to DOD’s efforts to manage DSAID: (1) managing change requests; (2) configuration status accounting, or tracking and communicating to stakeholders the changes made to the database; (3) interface control, or managing the database interfaces; and (4) release management, or managing the publication and communication of updates to users. (1) Managing change requests: According to the PMBOK® Guide, changes may be requested by any stakeholder involved with the project. Although changes may be initiated verbally, they should be recorded in written form and entered into the change management and/or configuration management systems. Every documented change request—which may include corrective actions, preventive actions, and defect repairs—needs to be either approved or rejected by a responsible individual who is identified in the project management plan or by organizational procedures, according to the PMBOK® Guide. When required, the change control process includes a change control board, which is a formally chartered group responsible for meeting and reviewing the change requests and approving, rejecting, or otherwise disposing of those changes and for recording and communicating such decisions. 
According to the PMBOK® Guide, the roles and responsibilities of this board are to be clearly defined and agreed upon by appropriate stakeholders and documented in the change management plan. Further, the disposition of all change requests, approved or not, is to be updated in the change log that is used to document changes that occur during a project. DOD has established a change request process in the DSAID change control management plan, and DOD has documented and formally chartered its Change Control Board. The board’s roles and responsibilities are defined in the DSAID Change Control Board charter, and board members include representation from SAPRO and each military service’s SAPR office. Change requests can be submitted only by board members or their designees. The board meets monthly to evaluate and vote on change requests. The DSAID Change Control Board charter outlines the requirement that change requests will be captured through a change request form, which will then be uploaded to the board’s website and made available to the DSAID community. Both the DSAID Change Control Board charter and the DSAID change control management plan outline DOD’s procedures for evaluating these change requests. During its evaluation of each proposed change request, DOD conducts an impact analysis that includes an assessment of the change’s potential impact on requirements, development, training, communications, policy, and testing. This impact analysis also assesses the expected level of effort to implement the request. Documentation from the Change Control Board meetings shows that DOD considers approximate costs and implementation time in change request discussions. DOD also documents and tracks testing and implementation of approved changes in a requirements log. The log includes the approval status, prioritization, and tracking notes for each change request as it moves through the approval process. Once a change request is implemented, DOD updates the requirements log to note which baseline requirement was affected and which system release included the change. In the requirements log, DOD also documents baseline requirement changes associated with the change requests that have been disapproved and closed. (2) Configuration status accounting: According to the IEEE standard on configuration management, the purpose of configuration status accounting is to track the status of configuration items. In this process, organizations track baseline requirements and total changes requested and implemented. This information should provide objective insights into a system’s performance over time and the status of the system as changes are implemented. DOD has documented and established baseline requirements for DSAID and, through the change request tracker, DOD tracks total changes requested, implemented, disapproved, deferred, and pending. As previously discussed, DOD conducts an impact analysis of each proposed change request as part of the evaluation process. DOD tracks DSAID’s requirements and change requests until release and monitors and documents identified defects at each stage until they are resolved, which allows DOD to monitor the system’s status as changes are implemented. (3) Interface control: According to the IEEE standard on configuration management, organizations use interface controls to manage the interfacing effects that hardware, system software, and other projects and deliverables have on the project. 
Interface control activities include identifying the product’s key interfaces and controlling the interface specifications. DOD is currently managing interfaces between DSAID and the MCIO databases to collect sexual assault case information, and DOD plans to incorporate additional interfaces with other DOD systems to collect more case information. DOD documentation shows that the department has identified DSAID’s key interfaces and specifications. Specifically, DOD SAPRO has established a memorandum of understanding with each service investigative agency that describes roles and responsibilities and data mapping parameters, including a technical description of the fields and types of data that will be interfaced between DSAID and each service investigative agency’s system. Through these mechanisms, DOD manages the parameters of these interfaces that provide key information to DSAID. While DOD has met industry standards for identifying key interfaces and controlling interface specifications, as discussed earlier in this report, some DOD users reported technical challenges with data from the MCIO database interfaces overwriting manually input DSAID data. According to DOD documentation, DOD is taking steps to mitigate these challenges through enhancement efforts, which include improving how investigative data are transferred into DSAID and adding database interfaces. (4) Release management: According to the IEEE standard on configuration management, release management allows an organization to ensure that the proper deliverables, such as changes and fixes to a system, are delivered to the designated receiving party, in the designated form, and to the designated location. Release management activities include delivering approved releases and defining the following: a release policy, release planning, release contents, release format and distribution, and release tracking. In line with defining release policy, DOD’s change control board charter defines board members as the authority for establishing DSAID release schedules and for prioritizing and assigning changes to a release. With respect to release planning, we found that DOD has defined the types of releases it delivers and the activities conducted during DSAID’s formal release process. DOD has also defined the content, format, and distribution materials to be included in each release. Communication of the release follows a defined process, starting with limited distribution to select users and then distribution to the full user community. DOD uses its master project schedule for DSAID to track and monitor release activities. We are not making recommendations in this report. We provided a draft of this report to DOD for review and comment. DOD provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Table 3 describes change requests to the Defense Sexual Assault Incident Database (DSAID) that the Department of Defense (DOD) has prioritized for implementation in fiscal year 2017. These change requests were approved through DOD’s change control process and determined to be priority modifications by the DSAID Change Control Board. In addition to the staff named above, key contributors to this report include Kim Mayo (Assistant Director); Michael Holland; Jim Houtz; Mae Jones; Anh Le; Amie Lesser; Oscar Mardis; Shahrzad Nikoo; Tida Reveley; Monica Savoy; Jasmine Senior; Maria Staunton; and Randall B. Williamson. Sexual Assault: Actions Needed to Improve DOD’s Prevention Strategy and to Help Ensure It Is Effectively Implemented. GAO-16-61. Washington, D.C.: November 4, 2015. Military Personnel: Actions Needed to Address Sexual Assaults of Male Servicemembers. GAO-15-284. Washington, D.C.: March 19, 2015. Military Personnel: DOD Needs to Take Further Actions to Prevent Sexual Assault during Initial Military Training. GAO-14-806. Washington, D.C.: September 9, 2014. Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. GAO-13-182. Washington, D.C.: January 29, 2013. Military Personnel: Prior GAO Work on DOD’s Actions to Prevent and Respond to Sexual Assault in the Military. GAO-12-571R. Washington, D.C.: March 30, 2012. Preventing Sexual Harassment: DOD Needs Greater Leadership Commitment and an Oversight Framework. GAO-11-809. Washington, D.C.: September 21, 2011. Military Justice: Oversight and Better Collaboration Needed for Sexual Assault Investigations and Adjudications. GAO-11-579. Washington, D.C.: June 22, 2011. Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Need to Be Further Strengthened. GAO-10-405T. Washington, D.C.: February 24, 2010. Military Personnel: Additional Actions Are Needed to Strengthen DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-10-215. Washington, D.C.: February 3, 2010. Military Personnel: Actions Needed to Strengthen Implementation and Oversight of DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-08-1146T. Washington, D.C.: September 10, 2008. Military Personnel: DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs Face Implementation and Oversight Challenges. GAO-08-924. Washington, D.C.: August 29, 2008. Military Personnel: Preliminary Observations on DOD’s and the Coast Guard’s Sexual Assault Prevention and Response Programs. GAO-08-1013T. Washington, D.C.: July 31, 2008. Military Personnel: The DOD and Coast Guard Academies Have Taken Steps to Address Incidents of Sexual Harassment and Assault, but Greater Federal Oversight Is Needed. GAO-08-296. Washington, D.C.: January 17, 2008. | GAO has reported that DOD has not collected uniform data on sexual assaults involving members of the armed forces. In 2008, Congress required DOD to implement a centralized, case-level database for the collection and maintenance of these data. In 2012, DSAID reached initial operational capability to capture sexual assault data. House Report 112-479 included a provision for GAO to review DSAID no sooner than 1 year after it was certified compliant with DOD standards by the Secretary of Defense. 
This report (1) describes the current status of DOD's implementation of DSAID and steps DOD has taken to help standardize DSAID's use, (2) assesses any technical challenges DSAID's users have identified and any DOD plans to address those challenges, and (3) assesses the extent to which DOD's change management process for modifying DSAID aligns with information technology and project management industry standards. GAO reviewed DOD documents, and interviewed DOD program officials as well as DSAID users. Specifically, GAO conducted site visits to 9 military installations and met with 42 DSAID users. Views obtained are nongeneralizable. Installations were selected based on their use of DSAID, number of users, geographic diversity, and other factors. GAO is not making recommendations in this report. DOD provided technical comments, which GAO incorporated as appropriate. As of October 2013, the Department of Defense's (DOD) Defense Sexual Assault Incident Database (DSAID) was fully implemented and in use across the military services, and DOD had taken several steps to standardize DSAID's use throughout the department. Sexual assault incident data are input into DSAID through both manual and automated data entry processes and include, as applicable, victim and referral support information, investigative and incident information, and case outcome data for certain incidents of sexual assault that involve a servicemember. Additionally, in some instances DSAID includes sexual assault cases involving a servicemember spouse, an adult family member, and DOD civilians and contractors. Further, DOD has taken several steps to standardize DSAID's use through the development of (1) policies, processes, and procedures for using the system; (2) training for system users; and (3) processes for monitoring the completeness of data. DSAID users have identified technical challenges with the system and DOD officials stated that they have plans to spend approximately $8.5 million to implement modifications to DSAID that address most of these challenges in fiscal years 2017 and 2018. Some of the key technical challenges users have identified experiencing with the system relate to DSAID's system speed and ease of use; interfaces with other external DOD databases; and users' ability to query data and generate reports. DOD has plans in place to implement modifications to DSAID that are expected to alleviate these challenges; however, officials stated that they will not be approved to fund these modifications until they have conducted an analysis of alternatives that is in line with DOD's acquisition policy framework. This framework, as well as the GAO Cost Estimating and Assessment Guide , outline key elements of this analysis, such as relative lifecycle costs and benefits and the effect and value of cost and schedule, among others. Conducting an analysis of alternatives including these elements is key to ensuring that DOD appropriately manages its modifications to DSAID. In 2010, GAO found that DOD had failed to demonstrate adherence to these key elements in the initial development and implementation of DSAID, and, DOD projects it will have spent a total of approximately $31.5 million on implementing and maintaining DSAID through fiscal year 2018. This is approximately $13 million more than the 2012 estimate. 
DOD's plan to conduct an analysis of alternatives that adequately considers key elements should position DOD to more accurately assess whether planned modifications to DSAID can be implemented within budget and result in the desired outcome. DOD manages modifications to DSAID through its change management process, which GAO found substantially aligns with key applicable elements established in the industry standards that GAO reviewed. Specifically, DOD has established processes for managing change requests, such as developing a process to evaluate requested changes to the database and establishing a board that approves, tracks, and controls changes to the database. DOD has also established processes for configuration management, including a process to track, communicate, and deliver changes to the database. |
The DI program was established in 1956 to provide monthly cash benefits to individuals unable to work because of severe long-term disability. In fiscal year 2010, the program’s average monthly benefit was about $922. To be eligible for benefits, workers with disabilities must have a specified number of recent work credits under Social Security when they acquired a disability. Individuals may also be able to qualify based on the work record of a deceased or retired parent with a disability or a deceased spouse. Benefits are financed by payroll taxes paid into the DI Trust Fund by covered workers and their employers and are based on a worker’s earnings history. To meet the definition of disability under the DI program, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in SGA. Individuals are engaged in SGA if they have earnings that average more than $1,000 per month in calendar year 2010, after applying any work incentives. Program guidelines direct DI beneficiaries to report their earnings to SSA in a timely manner to ensure they are still eligible for benefits and do not incur an overpayment. SSA has several programs that are designed to assist beneficiaries in returning to work. For example, the Ticket to Work and Work Incentives Improvement Act of 1999 provided for the establishment of the Ticket to Work and Self-Sufficiency Program to provide eligible DI and Supplemental Security Income (SSI) beneficiaries with employment services, vocational rehabilitation services, or other support services to help them obtain and retain employment and reduce their dependency on benefits. SSA provides each eligible beneficiary with a ticket to obtain services from SSA-approved public or private providers, referred to as employment networks, or from traditional state vocational rehabilitation agencies. SSA conducts work CDRs to determine if beneficiaries are still eligible or are working above the SGA level. Work CDRs are event-driven and initiated when there is an indication of work activity. While work CDRs can be prompted by several events, most are generated by SSA’s enforcement operation. This process involves periodic data matches between SSA’s MBR database and IRS earnings data. The enforcement operation generates alerts for cases that exceed specified earnings thresholds, which are then forwarded to one of eight processing centers for additional development by staff. In fiscal year 2010, the enforcement operation identified approximately 2 million records of which more than 531,000 were sent to SSA’s processing centers and field offices for review. Appendix I provides detailed information on the results of the enforcement operation for fiscal years 2008 to 2010, which used IRS earnings data from 2007 to 2009 (enforcement matches are typically conducted using the prior year’s earnings data). Work CDRs can also be triggered by other events, such as beneficiaries reporting their earnings to SSA. For example, SSA requires beneficiaries to undergo periodic medical reviews called medical continuing disability reviews, or medical CDRs, to assess whether they continue to have a disabling impairment. During such reviews, the disability examiner sometimes discovers evidence that a beneficiary is working and forwards the case to an SSA field office or processing center for earnings or work development. 
Third-party reports from state vocational rehabilitation agencies, federal agencies, or anonymous individuals may also trigger a work CDR. Finally, some DI beneficiaries report their earnings to SSA as directed under program guidelines by visiting an SSA field office or calling the agency's 800 number. While most work CDRs are initially sent to processing centers as a result of action by SSA's enforcement operation, some of these cases are later referred to one of SSA's more than 1,300 field offices. Field offices can be asked to assist processing centers in the development of cases when obtaining local information about beneficiaries or their employers would expedite case processing. Field offices also tend to be the focal points for work CDRs generated by medical CDRs, third-party reporting, and beneficiary self-reporting. Work CDRs entail processes that can be both labor-intensive and time-consuming. For each case, SSA staff must review electronic case files in SSA's eWork and associated data systems, conduct interviews, and contact beneficiaries and their employers to verify earnings and any applicable work incentives, such as subsidies or impairment-related work expenses. After the initial review of cases indicating a cessation of benefits, a "disability processing specialist" or "disability examiner" determines whether benefits should be discontinued and an overpayment assessed. (Fig. 1 provides an overview of the work CDR process.) When a DI work-related overpayment is identified, the beneficiary is notified of the overpayment and may request reconsideration or waiver of that overpayment. SSA may grant a waiver request if the agency finds the beneficiary was not at fault and recovery or adjustment would either defeat the purpose of the program or be against equity and good conscience, as defined by SSA. If SSA denies a reconsideration or waiver request, full repayment is requested. If the beneficiary is receiving DI or certain other SSA benefits, SSA may withhold partial payment of these benefits to recover the debt. However, if no SSA benefits are being received, or if the beneficiary asserts that the proposed withholding amount is too large, the agency generally requests repayment over 12 to 36 months. SSA policy requires a minimum monthly payment of $10. SSA may also attempt to recover payments due from the individual's estate or subsequent survivor's benefits. (Fig. 2 provides an overview of SSA's debt recovery system.) The agency uses the ROAR system to track DI overpayments and collections. When a debtor is no longer receiving benefits, SSA can also recover debt through several external collection tools. When a beneficiary is not repaying as agreed in the repayment plan, SSA terminates its collection activity and, after a due process period, the debt is referred for external collection. External debt collection tools include tax refund offset, which withholds or reduces federal tax refunds; federal salary offset, which withholds or reduces wages and payments to a federal employee; administrative offset (against other than SSA benefits), which withholds or reduces federal payments other than tax refunds or salary; administrative wage garnishment, which garnishes wages and payments paid by private employers or state and local governments; and credit bureau referral, which refers delinquent accounts to credit bureaus. Once a debt is referred to external collection, the debtor remains subject to offsets until the debt is repaid in full or the case is otherwise resolved.
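The repayment mechanics just described reduce to straightforward arithmetic. The sketch below is a hypothetical illustration of how the 12-to-36-month repayment target and the $10 monthly floor interact; the dollar amount, the function names, and the even split over the target window are assumptions for illustration only, not SSA's actual repayment-negotiation logic.

```python
import math

MINIMUM_MONTHLY_PAYMENT = 10.0   # policy floor described above
TARGET_MONTHS = 36               # upper end of the 12-to-36-month repayment window

def propose_installment(debt):
    """Hypothetical sketch: split the debt evenly over the 36-month target,
    but never request less than the $10 monthly minimum."""
    return max(debt / TARGET_MONTHS, MINIMUM_MONTHLY_PAYMENT)

def months_to_repay(debt, monthly_payment):
    """Whole months needed to retire the debt at a fixed installment."""
    return math.ceil(debt / monthly_payment)

debt = 15_000.00  # hypothetical overpayment amount
installment = propose_installment(debt)
print(f"Proposed installment: ${installment:,.2f} per month")
print(f"Months to repay at that rate: {months_to_repay(debt, installment)}")
print(f"Months to repay at the $10 floor: {months_to_repay(debt, MINIMUM_MONTHLY_PAYMENT)}")
```

At the $10 floor, even this modest hypothetical debt would take 1,500 months, or 125 years, to retire, which previews the repayment horizons discussed below.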
Medical and work-related overpayments in the DI program detected by SSA grew from about $860 million in fiscal year 2001 to about $1.4 billion in fiscal year 2010, and though the true extent of overpayments due to earnings is currently unknown, our review suggests that most of them are related to beneficiaries who work above SGA while receiving benefits. (Fig. 3 shows the increase in overpayments detected by SSA in recent years.) SSA officials estimate that from fiscal years 2005 through 2009, about 72 percent of all projected DI overpayments were work-related, meaning the overpayments were to beneficiaries who returned to work and were no longer eligible. This figure is higher than cited in past years by SSA. SSA officials attribute the increase in the percentage of overpayments that are work-related during this period to improved detection by its enforcement operation, and to changes in how the agency estimates the overpayment numbers. Agency officials also explained that approximately half of the increase in overpayment dollars during this period may be due to the increase in DI program benefit levels. We found that detected overpayments could be even larger than SSA’s data reflect because some overpayments have been accidentally removed from SSA records due to manual processing errors. In our review of 60 work CDR cases, we found two manual processing errors which resulted in overpayments totaling $53,097 being removed from agency records. In one case, staff entered a code to correct an overpayment amount but instead deleted the overpayment entirely. As a result of our detection, SSA officials reentered the overpayment debts into the system and indicated they would proceed with debtor notification and recovery. Because the results of our case review are not generalizable, the incidence of such occurrences is currently unknown and thus the potential impact on total DI overpayments owed by ineligible beneficiaries is not clear. SSA officials said that they do not have a mechanism for detecting, or a process of supervisory review to catch, such processing errors. In the 60 cases we reviewed, we also found that individual overpayment amounts varied widely, ranging from $1,126 to $53,436, with a median of $16,917 per individual. The size of individual overpayments can also be affected by SSA’s Automated Earnings Reappraisal Operations (AERO), which is a computer program that periodically screens a beneficiary’s earnings record for changes, and uses that information to adjust, as needed, the monthly benefit amount. Of the 60 cases, we found 54 in which AERO increases in benefit amount occurred even while SSA conducted a work CDR to determine if the beneficiary was still eligible for benefits at all. In addition, an individual overpayment can result in additional overpayments to dependents (family members who receive benefits). In 8 of the 60 cases, additional overpayments occurred because the primary beneficiary had dependents. For example, in one case, the overpayments to two dependent children added $6,580 to the primary beneficiary’s existing overpayment debt of $17,102, for a total of $23,682 owed. A beneficiary’s total DI overpayment debt can also increase because of multiple periods of employment. DI beneficiaries may reenter and leave the workforce based on their ability to perform SGA. As a result, a beneficiary could be subject to multiple periods of DI overpayments if he or she does not report increased earnings to SSA in a timely manner, as regulations instruct. 
In 49 of the 60 cases we randomly selected for review, there was no indication in the file that the individual had reported his or her earnings to SSA, and in 15 of the 60, SSA had detected two or more separate periods of earnings which resulted in overpayments. In one of these cases, the ineligible beneficiary owed SSA for multiple overpayments totaling $69,976. SSA does not currently have formal, agencywide performance goals for debt recovery. Specifically, the agency does not have goals for the percentage of DI overpayment debt recovered within the 36-month time frame as required by its own policy. Under the Government Performance and Results Act of 1993, federal agencies are required to establish performance goals to define the level of performance and establish performance indicators to be used in measuring relevant outputs, service levels, and outcomes for each program activity. Although SSA's policy manual, the Program Operations Manual System (POMS), requires staff to ask for full repayment within 36 months, the agency has not made this time frame a performance goal. SSA officials said they are currently working to develop debt recovery goals. In the meantime, without agencywide performance goals for debt recovery, SSA cannot adequately assess its performance or fully leverage and target its resources to recover overpayments from ineligible beneficiaries and reduce the total owed to SSA. Despite an increase in DI debt collections—$340 million to $839 million from fiscal year 2001 through fiscal year 2010—outstanding DI debt grew from $2.5 billion to $5.4 billion during this time, including a $225 million increase in fiscal year 2010. (Fig. 4 shows the growth in cumulative overpayment debt in recent years.) Cumulative overpayment debt is composed of existing debt carried forward from prior years, new debt, and reestablished debts (debts reactivated for collection due to re-entitlement or another event). Write-offs, including waivers and terminated collections, are not included in cumulative overpayment debt. Write-offs generally represent money the agency will never recover because it has either waived an overpayment or has terminated collection activities on the overpayment. Write-offs in the DI program totaled about $4 billion over the 10-year period, including about $460 million in fiscal year 2010. (See app. II for more specific information on overpayment debt in fiscal years 2001 to 2010.) In the 60 cases we reviewed, 20 contained a waiver request from the beneficiary. SSA approved 2 of these requests for a total of $36,413 waived, and denied 14 requests. Four other waiver requests were pending a decision at the time of our review. SSA may waive an overpayment if the agency determines that an individual is not at fault for the overpayment and recovery would defeat the purpose of the program or be against equity and good conscience. Most overpayment debt is collected by SSA through partial benefit withholding or the withholding of future DI benefits for which a beneficiary is still eligible. SSA attributes 77 percent of the approximately $839 million of debt collected in fiscal year 2010 to withholding of DI benefits. The amount withheld from benefits to recoup previous overpayments may be negotiated with the debtor and based on a monthly amount the debtor can afford.
The remainder of overpayment debt is collected in a variety of ways, including payments by the debtor and return of uncashed DI benefit checks; withholding of other SSA benefits, such as SSI; or through external collection including federal salary offset, administrative offset (other than against SSA benefits), tax refund offset, and administrative wage garnishment. SSA estimates that only about 11 percent of collections are through external means. In our 60 cases, 5 were referred for external collection at the time of our review, for a total owed of $79,950, but just $2,478 had been recovered through these methods. SSA does not require supervisory review of repayment plans prior to approval, including those in which repayment periods exceed the recommended 36 months. The agency reported that in fiscal year 2010, the average time to collect a DI overpayment debt in full was 48 months. However, in our review of 60 cases, we found that SSA agreed to some initial repayment plans which will take many decades. We analyzed the initial repayment plans established for individuals in these cases and found 42 of the 60 had a repayment plan in place, with a median repayment time for all 42 of approximately 34 months. While SSA's POMS requires that staff seek full repayment within 36 months, SSA officials reported that no supervisory approval is needed to exceed the 36 months. Of the 42 cases with a repayment plan, 19 had initial plans requiring more than 36 months for repayment in full and 7 of these required 20 years or more. Repayment time frames for the 42 cases ranged from less than 1 year to nearly 223 years for a case with a 60-year-old debtor who was paying $10 a month on $26,715 owed. (Fig. 5 shows the years that it would take SSA to recover each overpayment based on the initial repayment plan established.) SSA officials told us they are often unable to increase monthly repayment amounts and thus shorten repayment time frames because of a debtor's limited income. For instance, in a case we reviewed with an initial repayment plan of 148 years for $44,465 in overpayments owed to SSA, SSA records show the individual earned less than $100 in 2010. As this case illustrates, SSA's policy allowing for extended repayment plans means that some of these plans will likely extend beyond the beneficiary's anticipated retirement age and total lifespan, thereby reducing the prospect of ultimate collection. In the course of analyzing repayment plans, we also identified a system limitation that could further impede SSA's ability to collect overpayment debt in future years. More specifically, we found that the ROAR system cannot capture and track overpayment debt scheduled to be collected beyond the year 2049. This ROAR system limitation stems from a program modification used to address the century date change (Y2K) computer issue, which extended the debt recovery date in the ROAR system from "1999" to "2049." Under existing SSA policies and procedures, SSA staff manually remove from the ROAR system the portion of any debt that cannot be collected before the year 2050, and create a reminder in the system to recover that balance beginning in the year 2050. However, because this is a manual process, the intended recovery action could potentially be missed by staff responsible for processing the cases in the future.
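The arithmetic behind this limitation is simple to illustrate. The sketch below is a hypothetical illustration, not ROAR's actual logic; it uses the 223-year case described above and assumes, for illustration only, a fixed-installment schedule starting in January 2011.

```python
from datetime import date

def split_at_horizon(debt, monthly_payment, start=date(2011, 1, 1), horizon_year=2049):
    """Hypothetical sketch: split a fixed-installment debt into the portion
    scheduled for collection through the horizon year and the remainder that
    falls after it (the part that must be manually removed and tracked)."""
    months_to_horizon = (horizon_year - start.year + 1) * 12 - (start.month - 1)
    collectible = min(debt, monthly_payment * months_to_horizon)
    return collectible, debt - collectible

# The case cited above: $26,715 owed, repaid at $10 per month (~223 years).
through_2049, after_2049 = split_at_horizon(26_715, 10)
print(f"Scheduled for collection through 2049: ${through_2049:,.2f}")
print(f"Scheduled after 2049 (removed from ROAR today): ${after_2049:,.2f}")
```

Under these illustrative assumptions, most of the balance falls after 2049 and so would be removed from the system pending a manual reminder.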
As a result, the overpayment debt on the agency's books, and reported to the Department of the Treasury (Treasury) for the federal government's consolidated financial statements, is likely understated to an unknown extent. Equally important, we found that unless this problem is corrected, the amount of underreported overpayment debt will likely continue to grow as the years progress, and the impact on the DI Trust Fund may become more pronounced as time goes on. Overall, we found that 3 of the 60 cases reviewed had a total of $43,285 in overpayments removed from ROAR system records because collection of these payments will occur after the year 2049. However, because the results of our case review are not generalizable, we could not determine how many additional disability overpayment cases detected by SSA fell into this category. Since we brought this issue to their attention, SSA officials told us that the agency has begun to study this ROAR system limitation and that an agency working group will recommend a course of action to correct the problem. SSA officials reported that the agency is planning certain initiatives that could improve the recovery of overpayment debt. First, SSA officials told us that the agency is in the process of amending administrative offset and tax refund offset regulations to change the existing requirement regarding referrals of delinquent debt to Treasury. This change would allow SSA to refer debts that are delinquent for 10 years or more to Treasury for collection—something that it does not currently do. Second, Treasury has launched a pilot program providing for reciprocal offset of federal and state payments to individuals who are in debt to either federal or state programs. This program would allow a federal agency such as SSA to, for example, offset a beneficiary's state income tax refund to collect certain delinquent federal debts (such as DI program overpayments), while allowing for a reciprocal agreement with individual states. Four states—Kentucky, Maryland, New Jersey, and New York—are currently participating in the Treasury pilot program, and Treasury reports that it plans to expand the program to all states. On March 2, 2011, SSA officials published proposed amendments to regulations that would allow SSA to utilize these collection tools. SSA conducts periodic computer matches with wage data from the IRS to independently verify beneficiaries' earnings. However, earnings data provided through the IRS match are often more than a year old when SSA staff begin the work CDR prompted by the match. Managers and staff at the four processing centers we visited cited this delay as a major obstacle to limiting the occurrence and size of overpayments. Our work shows that this has delayed processing of work CDRs. In the 60 cases we reviewed, the earnings data were already between 6 and 26 months old by the time they were available to SSA staff for performing work CDRs. (Fig. 6 shows how old earnings data were for the 60 cases.) While DI beneficiaries are responsible for notifying SSA when they return to work as a condition of receiving benefits, they sometimes fail to make such notifications. Our review of 60 cases found no indication in 49 that the individual had reported his or her work and earnings to SSA as instructed by regulation. In the other 11 cases, beneficiaries had reported returning to work, including the name of their employer and the amount of their wages, at some point.
Yet 6 of these cases resulted in about $78,000 in total overpayments, even though these beneficiaries reported returning to work more than a year prior to initiation of the work CDR. In the remaining 5 cases, the beneficiary reported working only after the CDR was initiated. Earnings data from IRS or from beneficiaries may age further once received by SSA because no specific program guidelines exist that require immediate action on results of the IRS match, and consequently staff do not always begin a work CDR immediately. From the date of the initial IRS alert to the date staff begin work on the CDR, cases that have been identified by the IRS match and selected for a work CDR (but for which no action has yet been taken to process the work CDR) are categorized by SSA as "pending development." In the 60 cases we reviewed, the median time cases were pending development was 205 days, or about 7 months, and ranged from 2 to 466 days, or more than 15 months. For example, in the 466-day case, the IRS alert came to SSA in September 2007, when earnings (for 2006) were already 15 months old, then aged an additional 15 months until SSA staff began developing the work CDR. SSA officials could not explain what caused the delay in initiating development of this case or of several others we reviewed. (Fig. 7 shows the number of days that the 60 cases were pending development.) The delays that occur when staff do not act promptly to begin a work CDR, in combination with the initial delays in receiving beneficiary earnings data (whether from the IRS enforcement operation or from beneficiaries' failure to self-report earnings), can result in multiple DI overpayments which may continue to accrue for extended periods of time before they are addressed. For example, in the 60 cases we reviewed, delays which occurred after IRS alerts were delivered to SSA resulted in individual beneficiaries being overpaid for up to 38 months. Most received fewer than 12 months of overpayments, but 19 of the cases received 18 or more months of overpayments. According to an SSA official, staff shortages and the need to focus resources on competing workloads, such as initial DI claims and medical CDRs, are among the factors delaying development of work CDRs once earnings information is received. (Fig. 8 shows the amount of time that overpayments accrued pending development.) In 2004, we recommended that SSA seek to use large-scale batch matches with an alternative database of earnings, the National Directory of New Hires (NDNH), which was originally established to help states locate noncustodial parents for child support payments. The NDNH could provide SSA with quarterly wage information on employees within 4 months of the end of a calendar quarter. Several federal programs and agencies currently use the NDNH to verify program eligibility, detect and prevent potential fraud or abuse, and collect overpayments. In 2009, SSA conducted a cost-effectiveness study on use of the NDNH, which concluded that a match would generate a large number of alerts needing development that were not of high quality. SSA officials also said the study found a return on investment of only about $1.40 in savings for each $1 spent. However, based on the information provided, we believe that some of the assumptions used in the analysis were overly pessimistic.
For example, though SSA initially estimated savings of nearly $8 for every dollar spent by using the NDNH, the agency subsequently reduced this estimate by concluding that much of the anticipated savings from this match would ultimately be captured by the current enforcement operation. This assumption does not appear to fully take into account the fact that initial use of the NDNH would likely identify many potential overpayments earlier in the process, thus reducing the duration and size of those overpayments compared with the current enforcement operation. As a result, the actual savings could in fact be higher. The agency's actual experience with the NDNH in its SSI program suggests its application in the DI program may be more cost-effective than indicated by SSA's analysis. According to SSA officials, the NDNH match with the SSI program has resulted in an estimated $200 million in savings per year for the program. Furthermore, even if the savings resulting from use of the NDNH for identifying DI beneficiaries' earnings are only $1.40 for every $1 spent, as estimated by SSA, this still represents a 40 percent rate of return. In a 2010 report, we also recommended that SSA evaluate the feasibility of incorporating the twice-annual AERO process into the agency's work CDR process. SSA does not use AERO to identify beneficiaries who work. SSA officials told us they are in the early stages of evaluating this use and, as part of this effort, they said they may include an AERO alert as one of several screening factors to identify cases at high risk for DI overpayments. Agency officials also told us that they plan to match cases that have both an AERO alert and an IRS alert and delay any AERO-awarded changes to benefit amounts until the CDR has been completed to potentially reduce overpayments. Similar to our findings about the lack of agencywide performance goals for overpayment debt recovery, we also found that SSA does not have agencywide performance goals or a consistent approach for processing work CDRs across its processing centers. Specifically, the agency does not have performance goals for the number of days taken to completely process a work CDR. While SSA has established an agencywide goal for processing a certain number of medical CDRs in a fiscal year, and includes this goal in the agency's annual performance plan, SSA officials told us they have not established similar goals for work CDRs. Instead, they have established "targets" for the processing centers. For example, in fiscal year 2011, SSA set targets for the completion of 95 percent of IRS alerts on earnings generated in 2008 or earlier by September 24, 2010, and for processing centers to complete development of at least 99 percent of cases within 270 days by September 30, 2011. Overall, we found that while SSA's policies establish steps for work CDR processing to be followed across all processing centers, processing times across the four centers we visited varied widely once development was initiated. More specifically, we found that processing times for the 60 cases we reviewed ranged from 82 to 992 days (with a median of 396 days) and resulted in combined overpayments totaling more than $1 million. We also found the median processing time for the cases we reviewed from three centers ranged from 307 to 397 days, and the median processing time at the fourth center, which processes about 50 percent of all work CDRs, was 626 days. (Fig.
9 shows the variance in processing time across the four processing centers.) Within the last year, SSA has started work on several new initiatives to identify work CDR enforcement alerts from its IRS match that correspond to those cases which are more likely to result in large overpayments. First, the agency has started prioritizing IRS alerts with reported earnings that are greater than or equal to 12 times the current SGA level ($1,000 per month in calendar year 2011). Second, in response to a prior GAO recommendation, SSA is testing a "predictive model" intended to identify those cases with the highest probability of having large overpayments and prioritizing them for review by processing center staff. Using existing screening criteria, the model will assign cases a numeric score based on the beneficiary's age, type of disability, benefit amount, earnings, time on disability rolls, and number of denials for eligibility, among other factors. The highest-scoring alerts would be processed first because SSA believes they are most likely to result in cessation of DI benefit payments and thus be incurring overpayments. This initiative is currently being piloted in three processing centers. In a third initiative, SSA is piloting a comprehensive review of how all types of work CDRs are conducted using a sample of 1,000 recently completed CDRs. This review is intended to identify any aspects in the initial process that did not appear to be completed correctly. Because work CDRs are the most common type of CDR the agency encounters, SSA anticipates they will be a majority of the sample. Agency officials shared some of the review's early findings with us, including inadequate documentation of case development, inconsistencies and vague language in SSA's operating procedures, and errors in calculating SGA, which may contribute to overpayments. SSA's Office of Quality and Performance expects to issue its study results in fall 2011. Fourth, SSA staff are working to update and streamline work issue procedures regarding the initiation, follow-up time frames, and overall completion of work CDRs for processing center personnel. The agency anticipates releasing the updated POMS that will incorporate these new procedures in October 2011. Finally, SSA has revised the forms that beneficiaries file to document work activity and that SSA staff use to make SGA determinations. According to agency officials, the forms should be easier for beneficiaries to understand and complete, and should improve compliance with beneficiary reporting requirements. As part of this initiative, SSA officials told us that they have also reduced the number of follow-up requests for these forms before staff are allowed to make a decision concerning a beneficiary's continued eligibility for DI benefits. According to agency officials, the Office of Management and Budget approved both of the revised work activity reports. In addition, SSA staff are hopeful that the revised forms will result in more timely work CDR processing and reduced overpayments. While these initiatives represent promising steps, it is too early to assess what impact they may have on the prevalence and size of DI overpayments.
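To make the screening and prioritization ideas concrete, the sketch below shows one way an alert-scoring step of this kind could work. It is an invented illustration: the weights, the score formula, and the field names are assumptions, not SSA's predictive model.

```python
SGA_MONTHLY = 1_000  # substantial gainful activity level cited above (calendar year 2011)

def passes_earnings_screen(annual_earnings):
    """First screen described above: keep alerts with reported earnings at or
    above 12 times the monthly SGA level."""
    return annual_earnings >= 12 * SGA_MONTHLY

def priority_score(alert):
    """Illustrative weighted score over the kinds of factors the pilot model
    considers (earnings, benefit amount, time on the rolls, prior denials).
    The weights are invented for this sketch."""
    return (0.4 * alert["annual_earnings"] / (12 * SGA_MONTHLY)
            + 0.3 * alert["monthly_benefit"] / 1_000
            + 0.2 * alert["years_on_rolls"] / 10
            + 0.1 * alert["prior_denials"])

alerts = [  # hypothetical enforcement alerts
    {"id": "A", "annual_earnings": 30_000, "monthly_benefit": 1_200, "years_on_rolls": 6, "prior_denials": 1},
    {"id": "B", "annual_earnings": 11_000, "monthly_benefit": 900, "years_on_rolls": 2, "prior_denials": 0},
    {"id": "C", "annual_earnings": 18_000, "monthly_benefit": 1_500, "years_on_rolls": 9, "prior_denials": 0},
]

screened = [a for a in alerts if passes_earnings_screen(a["annual_earnings"])]
for alert in sorted(screened, key=priority_score, reverse=True):
    print(f"Alert {alert['id']}: score {priority_score(alert):.2f}")
```

Under this sketch, alert B falls below the 12-times-SGA screen and the remaining alerts are worked highest score first, which is the general effect the prioritization initiatives aim for.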
In particular, the existence of repayment plans that extend past a debtor's full retirement age or projected lifespan means that it is unlikely those debts will be repaid in full. In addition, the lack of both agencywide performance goals that specify debt collection time frames and policies that specify the review of individual debt repayment plans to ensure that they are meeting these goals hinders the agency's debt collection efforts. Compounding this problem, the inability of the ROAR system to process debt recovery actions past the year 2049 jeopardizes the agency's ability to accurately track and report on the recovery of all overpayments. In addition, SSA's continued reliance on outdated earnings information to identify beneficiaries who may no longer be eligible for benefits means that overpayments, including some large overpayments, are probably inevitable. SSA's dependence on such data also means that staff responsible for performing work CDRs are often dealing with cases that are already aged when they receive them, and thus more difficult and time-consuming to develop. As a result, SSA is often in a "pay and chase" mode that further stretches limited staff and budgetary resources. Absent more timely sources of earnings data to inform the work CDR process, this problem is likely to persist. Additionally, other weaknesses such as the lack of agencywide performance goals for processing centers and staff likely contribute to delays in processing this workload. These delays, in turn, can become a factor in the large size of some overpayments. Without performance goals specifying the time that cases should be pending development, or the number of days to completely process a work CDR, improvements to the agency's operations in this area are less likely. We recognize that ensuring the integrity of the DI program while also performing other work, such as processing medical CDRs and helping beneficiaries return to work, is a challenge for SSA. However, the continuing weaknesses we identified in SSA's existing work CDR processes and policies, as well as the mounting overpayment debt and its impact on the financial health of the DI Trust Fund, require sustained management attention and a more proactive stance by the agency. To enhance SSA's ability to recover debt and to improve the detection and, where possible, prevention of overpayments in the DI program, we recommend that the Commissioner of Social Security take the following actions:
1. Develop and adopt agencywide performance goals for the recovery of DI overpayment debt, such as the percent of outstanding debt collected annually.
2. Require supervisory review and approval of repayment plans which exceed SSA's target of 36 months.
3. Correct the ROAR 2049 system limitation so that debt scheduled for collection after 2049 is included in the system and available for SSA management, analysis, and reporting.
4. Explore options for obtaining more timely earnings information for DI program beneficiaries who may be working, and thus are more likely to incur overpayments. This would include developing data sharing agreements to access pertinent earnings-related databases, such as the NDNH.
5. Develop and adopt additional formal agencywide performance goals for work CDRs to measure the time that cases are pending development, and the number of days taken to process a work CDR.
We obtained written comments on a draft of this report from the Commissioner of the Social Security Administration. The comments are reproduced in appendix III.
SSA also provided additional technical comments, which have been incorporated in the report as appropriate. SSA agreed with four of five recommendations we made to the Commissioner to strengthen SSA's processes and management controls over the detection, prevention, and recovery of DI overpayments. SSA agreed with our first recommendation to develop and adopt additional agencywide performance goals for the recovery of DI overpayments, such as the percent of outstanding debt collected annually. The agency noted that the Office of Management and Budget (OMB) guidance for the Improper Payments Elimination and Recovery Act of 2010 requires agencies to establish targets that drive annual performance based on percentage of recovery (i.e., proportion of overpayment dollars identified which are recovered in a fiscal year). The agency said it is working with OMB to develop appropriate debt recovery goals. We agree this is a positive step, as would be including that targeted percentage in its annual agencywide performance goals. SSA disagreed with our second recommendation to require supervisory review and approval of DI repayment plans which exceed SSA's targeted 36 months. In particular, the agency noted that it negotiates these plans based on a beneficiary's income, ability to pay, and personal circumstances. SSA said this recommendation would not improve collection rates, and an additional approval step is unnecessary. However, the agency said it will issue guidance to its employees reminding them to make every effort to recover debts within 36 months. We agree that issuing guidance to staff to remind them about repayment time frames is a positive step, but based on recent experiences, we continue to believe supervisory review and approval of plans that exceed 36 months is needed to strengthen SSA's controls over the process and decrease the incidence of repayment plans that extend past a debtor's full retirement age or projected lifespan. SSA agreed with our third recommendation that SSA correct the ROAR 2049 system limitation so debt scheduled for collection after 2049 is included in the system and available for current SSA management, analysis, and reporting. SSA noted the agency is considering solutions for this limitation. SSA agreed with our fourth recommendation to explore options for obtaining more timely earnings information for DI program beneficiaries who may be working, and thus more likely to incur overpayments. These options include data sharing agreements to access pertinent earnings-related databases such as the NDNH. The agency noted it has plans in this area. Specifically, SSA said while it had previously determined it would not be cost-effective to use the NDNH for this purpose, it will reevaluate that decision. SSA also said it will continue to explore other options for obtaining timely earnings information from other sources. In addition, the agency said it has developed, based on a previous GAO recommendation, a computer match of Social Security employee payroll records and DI and SSI benefit rolls. We agree this internal SSA computer match should help to obtain more timely earnings information on DI beneficiaries who may be working. We also encourage SSA to develop data-sharing agreements to leverage other databases such as NDNH which allow for regular batch-file matches to verify beneficiaries' earnings.
SSA agreed with our fifth recommendation to develop and adopt additional formal, agencywide performance goals for the time work CDR cases are pending development and the number of days taken to process them. The agency commented that it already has internal targets for overall work CDR processing times. SSA said it will consider GAO’s recommendation as it works on its agency performance goals. We agree the agency’s internal targets for work CDR processing are a positive step. However, we continue to believe that formal, agencywide work CDR performance goals are needed to assist SSA in assessing its performance and fully leveraging its resources in this area. The agency should consider measuring the time cases are pending development, and the number of days taken to process a work CDR, against agencywide annual performance goals it develops. We are sending copies of this report to the Commissioner of Social Security, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Enforcement Cumulative Totals and Distribution for IRS Enforcement Matches Conducted in 2008-2010 (Data for Earnings Years 2007 through 2009) Jeremy Cox (Assistant Director) and Arthur T. Merriam Jr. (Analyst-in- Charge) managed all aspects of the assignment. Angela Jacobs, Joel Marus, and Katharine Kairys made significant contributions to this report, in all aspects of the work. In addition, Vanessa Taylor and Walter Vance provided technical support; Craig Winslow and Sheila McCoy provided legal support; Susan Aschoff assisted with the development of the message and report; David Forgosh, Monika Gomez, Cady Panetta, and Nyree Ryder Tee contributed to quality assurance for this product, and James Bennett provided graphics support. Status of Fiscal Year 2010 Federal Improper Payments Reporting. GAO-11-443R. Washington, D.C.: March 25, 2011. Social Security Disability: Ticket to Work Participation Has Improved, but Additional Oversight Needed. GAO-11-324. Washington, D.C.: May 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Social Security Administration: Cases of Federal Employees and Transportation Drivers and Owners Who Fraudulently and/or Improperly Received SSA Benefits. GAO-10-444. Washington, D.C.: June 2010. Social Security Administration: Cases of Federal Employees and Transportation Drivers and Owners Who Fraudulently and/or Improperly Received SSA Benefits. GAO-10-949T. Washington, D.C.: August 4, 2010. Disability Insurance: SSA Should Strengthen Its Efforts to Detect and Prevent Overpayments. GAO-04-929. Washington, D.C.: September 10, 2004. | The Social Security Administration's (SSA) Disability Insurance (DI) program paid almost $123 billion in benefits in fiscal year 2010 to more than 10 million workers and dependents. The program is poised to grow further as the baby boom generation ages. 
GAO examined (1) what is known about the extent to which SSA makes overpayments to, and recovers overpayments from, DI beneficiaries who exceed program earnings guidelines, and (2) potential DI program vulnerabilities that may contribute to overpayments to beneficiaries who have returned to work. To answer these questions, GAO reviewed work continuing disability review (work CDR) policies and procedures, interviewed SSA headquarters and processing center officials, visited 4 of 8 processing centers, and reviewed a random nongeneralizable sample of 60 CDR case files across those 4 centers (15 from each). Disability Insurance overpayments detected by SSA increased from about $860 million in fiscal year 2001 to about $1.4 billion in fiscal year 2010. SSA estimates about 72 percent of all projected DI overpayments were work-related during fiscal years 2005 through 2009. While the agency collected, or recovered, $839 million in overpayments in fiscal year 2010, monies still owed by beneficiaries grew by $225 million that same year, and cumulative DI overpayment debt reached $5.4 billion. SSA does not have agencywide performance goals for debt collection-- for example, the percent of outstanding debt collected annually. And while SSA has a policy for full repayment within 3 years, 19 of the 60 work CDR cases GAO reviewed had repayment plans exceeding 3 years. SSA officials said that lengthy repayment plans are often the result of an individual's limited income, but SSA does not review or approve repayment plans which exceed agency policy. During the course of the review, GAO also found a limitation in SSA's Recovery of Overpayments, Accounting and Reporting (ROAR) system. Used to track overpayments and collections, ROAR does not reflect debt due SSA past year 2049, so the total balance due the program is unknown and likely larger than the agency is reporting. SSA officials acknowledged this issue, but are unable to determine the extent of the problem at this time. They told GAO they have a work group which will recommend action to correct the problem. But until this issue is addressed, SSA officials said that the agency can only track and report on overpayments scheduled to be repaid through 2049. The amount owed after that year is unreflected in current totals even as it annually increases. SSA officials reported that the agency has ongoing initiatives to enhance debt collection. SSA has numerous policies and processes in place to perform work CDRs, though two key weaknesses have hindered SSA's ability to identify and review beneficiary earnings which affect eligibility for DI benefits. First, SSA lacks timely earnings data on beneficiaries who return to work. In 49 of the 60 CDR cases GAO reviewed, there was no evidence in the file that the beneficiary reported his or her earnings, as required by program guidelines. To identify unreported work and earnings, SSA primarily relies on data matching with the Internal Revenue Service (IRS), then sends these matches to staff for a work CDR. However, the IRS data may be more than a year old when received by SSA, and SSA says it is not cost-effective to gain access to and use other sources of earnings information, such as the National Directory of New Hires database. In addition, GAO found that cases may wait up to 15 additional months before SSA staff begin work on the CDRs. Second, SSA lacks formal, agencywide performance goals for work CDRs. 
While it targets 270 days to complete a case, actual processing time ranged from 82 to 992 days (with a median of 396 days) in the 60 cases GAO reviewed, and overpayments which accrued as a result topped $1 million total. SSA officials reported several initiatives to more effectively prioritize work CDR cases--for example, those with the largest potential overpayment amounts--but these efforts are in the early stages, and GAO could not yet assess their effectiveness as part of this review. GAO recommends that SSA develop and adopt agencywide performance goals for recovering DI overpayments and processing work CDRs, require supervisory review of certain repayment plans, address a system limitation which precludes an accurate record of debt owed SSA, and explore options for obtaining more timely earnings information. SSA agreed with four of five recommendations. It disagreed with the need for supervisory review of repayment plans while acknowledging the need for more related guidance to its staff. |
Under the Medicaid program's federal-state partnership, CMS is responsible for overseeing the program, while state Medicaid agencies are responsible for the day-to-day administration of the program. Although subject to federal requirements, each state develops its own Medicaid administrative structure for carrying out the program, including its approach to program integrity. To monitor program integrity in Medicaid, CMS estimates the national improper payment rate on an annual basis through the Payment Error Rate Measurement (PERM) program. The PERM involves reviews of sampled fee-for-service claims, payments to managed care entities, and beneficiary eligibility determinations in the states; the national improper payment rate is a weighted average of states' rates in each of these components. State Medicaid programs do not work in isolation on program integrity; instead, there are a large number of federal agencies, other state entities, and contractors with which states must coordinate. (See fig. 1.) Recognizing the importance of federal-state collaboration on program integrity issues, in November 2016, along with the Office of Management and Budget, we convened a meeting with state auditors, CMS, and other federal officials to discuss ways to strengthen collaboration between the federal government and the states. In recent years, Medicaid expenditures and enrollment grew under PPACA. Growth in enrollment is primarily due to more than half of the states choosing to expand their Medicaid programs by covering certain low-income adults not historically eligible for Medicaid coverage, as authorized under PPACA. In addition to expanding Medicaid eligibility, PPACA required the establishment of health insurance exchanges in all states, and provided for federal subsidies to assist qualifying low-income individuals in paying for exchange coverage. States may elect to establish and operate an exchange, known as a state-based exchange, or allow CMS—which is responsible for overseeing the exchanges—to do so within the state, known as a federally facilitated exchange (FFE). As of March 2015, CMS operated an FFE in 34 states, and 17 states were approved to operate state-based exchanges. CMS has taken steps to improve Medicaid program integrity and reduce improper payments; however, additional actions should be taken to help further prevent improper payments. Specifically, our work has identified four key program integrity issues for the Medicaid program—enrollment verification, managed care, provider screening, and coordination between Medicaid and the exchanges—along with CMS's progress in addressing them, and additional necessary actions. Since 2011, CMS has taken steps to make the Medicaid enrollment-verification process more data-driven to improve the accuracy of eligibility determinations. For example, in response to PPACA, CMS established a more rigorous approach to verifying financial and nonfinancial information needed to determine Medicaid beneficiary eligibility. CMS created a tool called the Data Services Hub that was implemented in fiscal year 2014 to help verify beneficiary applicant information used to determine eligibility for enrollment in qualified health plans and insurance-affordability programs, including Medicaid. The hub routes application information to, and verifies it against, various external data sources, such as the Social Security Administration and the Department of Homeland Security.
According to CMS, the hub can verify key application information, including household income and size, citizenship, state residency, incarceration status, and immigration status. Despite CMS’s efforts, there continue to be gaps in the agency’s efforts to ensure that only eligible individuals are enrolled into Medicaid. In particular, our work found that federal and selected state-based marketplaces approved health insurance coverage and subsidies for 9 of 12 fictitious applications made during the 2016 special enrollment period. In another study, we found that CMS also had gaps in ensuring that Medicaid expenditures for enrollees—including enrollees eligible as a result of the PPACA expansion—are matched appropriately by the federal government. Specifically, we found that CMS had excluded from review federal Medicaid eligibility determinations in the states that have delegated authority to the federal government to make Medicaid eligibility determinations through the federally facilitated exchange. To address this gap in oversight of eligibility determinations, we recommended that CMS conduct reviews of federal Medicaid eligibility determinations to ascertain the accuracy of these determinations and institute corrective action plans where necessary. In October 2016, HHS provided additional information indicating that the department is relying upon operational controls within federal marketplaces to ensure accurate eligibility determinations as well as new processes that would identify duplicate coverage. However, we continue to believe that without a systematic review of federal eligibility determinations, the agency lacks a mechanism to identify and correct errors and associated payments. Lastly, CMS requires all states to participate annually in the Eligibility Review Pilots to test different approaches to measuring the accuracy of eligibility determinations under the new beneficiary enrollment processes. Oversight of beneficiary eligibility is important to program integrity. Our prior work has identified thousands of Medicaid beneficiaries involved in potential improper or fraudulent payments. Some of the concerns that we identified included beneficiaries having payments made on their behalf concurrently by two or more states, and payments made for claims that were dated after a beneficiary’s death. CMS has taken steps to provide states with additional guidance on their oversight of Medicaid managed care organizations. In October 2014, CMS made available on its website the managed care plan compliance toolkit to provide further guidance to states and managed care plans on identifying improper payments to providers. In May 2016, CMS issued a final rule on Medicaid managed care, which requires states to conduct periodic audits of financial data submitted by, or on behalf of each Medicaid managed care plan. The final rule takes additional steps to improve oversight of Medicaid managed care, with some provisions applying after 2018. CMS has also taken action in response to recommendations that we made with regard to increasing guidance for states, requiring states to audit managed care organizations, and providing states with additional audit support. Oversight of Medicaid managed care is increasing in importance as states’ use of managed care plans to deliver services has been growing. More than half of all Medicaid beneficiaries are now enrolled in managed care plans, and nearly 40 percent of Medicaid expenditures are for health care services delivered through managed care. 
The estimated improper payment rate for managed care is currently less than one percent; however, this estimate is based on a review of the payments made to managed care organizations and does not include a review of any underlying medical documentation. Additional actions on the part of CMS and the states are critical to improving program integrity in Medicaid. In particular, we and the HHS Office of Inspector General have identified incomplete and untimely managed care encounter data. Encounter data are data that managed care organizations are expected to report to state Medicaid programs, allowing states to track the services received by beneficiaries enrolled in managed care. Our work found that encounter data for 11 states were not available in a timely manner, and that 6 states had encounter data that we deemed unreliable. PPACA included multiple provisions aimed at strengthening the screening of providers who enroll to participate in Medicaid. While the act requires that all providers and suppliers be subject to licensure checks, it gave CMS discretion to establish a risk-based application of other screening procedures. According to CMS's risk-based screening, moderate- and high-risk providers and suppliers additionally must undergo pre-enrollment and post-enrollment site visits, while high-risk providers and suppliers also will be subject to fingerprint-based criminal-background checks. These requirements may help prevent some potentially fraudulent or improper payments. Additionally, CMS regulations now require that the state Medicaid agency enroll all Medicaid managed care providers, which has the potential to improve oversight of providers in managed care. Prior to PPACA, if one state terminated a provider from its Medicaid program, a provider could potentially enroll in or continue participation in another state's Medicaid program, leaving the latter state's program vulnerable to potential fraud, waste, and abuse. Our prior work has identified hundreds of Medicaid providers who were potentially improperly receiving Medicaid payments. Potential improper behavior included payments to providers with suspended or revoked licenses or improper mailing addresses, as well as payments to deceased providers. Actions to ensure appropriate oversight of Medicaid providers, however, continue to require additional action on the part of CMS and the states. Our work, which was based on 2 states and 16 health plans, found that these states and health plans used information that was fragmented across 22 databases managed by 15 different federal agencies to screen providers—and that these databases did not always have unique identifiers. Our work resulted in a recommendation that CMS identify databases best suited for oversight of provider eligibility and coordinate with other agencies to explore the use of a unique identifier. However, CMS has not yet evaluated whether the additional databases merit further action or considered ways to ensure that a unique identifier is available so that providers can be accurately identified. We also found that the 10 selected states that we reviewed used inconsistent practices to make data on ineligible providers publicly available, which could result in provider screening efforts that do not identify ineligible providers.
CMS has also taken action responsive to another recommendation, on providing guidance to state Medicaid programs that establishes expectations and best practices for sharing provider screening data among states and managed care plans. In addition, the recently enacted 21st Century Cures Act takes important steps to address this recommendation, including requiring CMS to establish a provider termination notification database by July 2018 and requiring the agency to establish uniform terminology for reasons for provider terminations. Regarding coordination between Medicaid and the exchanges, CMS implemented policies and procedures to ensure that individuals do not have duplicate coverage (enrollment in both Medicaid and subsidized exchange coverage). Due to changes in income and other factors, it is likely that under PPACA many low-income individuals will transition between Medicaid and subsidized exchange coverage. Our prior work found that despite CMS policies and procedures designed to prevent duplicate coverage, it was occurring. In response, CMS has conducted three checks to identify individuals with duplicate coverage. CMS has also reported that the agency intends to complete these checks at least two times per coverage year, which has the potential to save federal—as well as beneficiary—dollars. While CMS has made progress by implementing checks for duplicate coverage, weaknesses remain. CMS has not developed a plan for assessing whether the checks and other procedures are sufficient to prevent and detect duplicate coverage. In March 2016, CMS reported that it was reviewing data on the number of people identified as having duplicate coverage through the first CMS check who subsequently disenrolled from subsidized exchange coverage. CMS reported reviewing these data as a means of assessing the effectiveness of the checks for duplicate coverage. We are continuing to monitor CMS's efforts in this area, particularly whether CMS develops a plan, including thresholds for the level of duplicate coverage it deems acceptable, to routinely monitor the effectiveness of the checks and other planned procedures to prevent and detect duplicate coverage. In closing, Medicaid represents significant expenditures for the federal government and states, and is the source of health care for tens of millions of Americans. Its long-term sustainability is critical, and will require, among other things, effective federal and state oversight. Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you might have. If you or your staff have any questions about this testimony, please contact Carolyn L. Yocom, Director, Health Care, at (202) 512-7114 or YocomC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Ann Tynan (Assistant Director), Susan Barnidge, Leslie Gordon, Drew Long, Andrea E. Richardson, and Jennifer Whitworth. Patient Protection and Affordable Care Act: Results of Enrollment Testing for the 2016 Special Enrollment Period. GAO-17-78. Washington, D.C.: November 17, 2016. Medicaid Fee-For-Service: State Resources Vary for Helping Beneficiaries Find Providers. GAO-16-809. Washington, D.C.: August 29, 2016. Patient Protection and Affordable Care Act: CMS Should Act to Strengthen Enrollment Controls and Manage Fraud Risk. GAO-16-506T.
Washington, D.C.: March 17, 2016. Medicaid Managed Care: Trends in Federal Spending and State Oversight of Costs and Enrollment. GAO-16-77. Washington, D.C.: December 17, 2015. Medicaid: Additional Efforts Needed to Ensure that State Spending is Appropriately Matched with Federal Funds. GAO-16-53. Washington, D.C.: October 16, 2015. Medicaid: CMS Could Take Additional Actions to Help Improve Provider and Beneficiary Fraud Controls. GAO-15-665T. Washington, D.C.: June 2, 2015. Medicaid: Service Utilization Patterns for Beneficiaries in Managed Care. GAO-15-481. Washington, D.C.: May 29, 2015. Medicaid: Additional Actions Needed to Help Improve Provider and Beneficiary Fraud Controls. GAO-15-313. Washington, D.C.: May 14, 2015. Medicaid Program Integrity: Increased Oversight Needed to Ensure Integrity of Growing Managed Care Expenditures. GAO-14-341. Washington, D.C.: May 19, 2014. The following table lists selected recommendations GAO has made to the Department of Health and Human Services regarding Medicaid program integrity. The agency has implemented 3 of these recommendations. The agency has either not taken or has not completed steps to implement the remaining 8 recommendations, as of January 2017.

Medicaid, a joint federal-state health care program, is a significant component of federal and state budgets, with estimated outlays of $576 billion in fiscal year 2016. The program's size and diversity make it particularly vulnerable to improper payments. In fiscal year 2016, improper payments were an estimated 10.5 percent ($36 billion) of federal Medicaid expenditures, an increase from an estimated 9.8 percent ($29 billion) in fiscal year 2015. States, which are responsible for the day-to-day administration of the Medicaid program, are the first line of defense against improper payments. Specifically, states must implement federal requirements to ensure the qualifications of the providers who bill the program, detect improper payments, recover overpayments, and refer suspected cases of fraud and abuse to law enforcement authorities. At the federal level, CMS is responsible for supporting and overseeing states' Medicaid program integrity activities. This testimony highlights key program integrity issues in Medicaid, the progress CMS has made in improving its oversight of program integrity, and the related challenges the agency and states continue to face. This testimony is based on 10 products and 11 recommendations. Of these 11 recommendations, 3 have been implemented based on agency action. GAO's prior work has identified four Medicaid program integrity issues—where the program is vulnerable to improper payments such as those made for services that were not covered, were not medically necessary, or were not provided—as well as actions taken by the Centers for Medicare & Medicaid Services (CMS) to address the issues and additional actions that should be taken.

Enrollment Verification: In response to the Patient Protection and Affordable Care Act (PPACA), CMS established a more rigorous approach for verifying financial and nonfinancial information needed to determine Medicaid beneficiary eligibility. Despite CMS's efforts, however, there continue to be gaps in efforts to ensure that only eligible individuals are enrolled in Medicaid, and that Medicaid expenditures for enrollees—particularly those eligible as a result of the PPACA expansion—are matched appropriately by the federal government. 
Oversight of Medicaid Managed Care: CMS has provided states with additional guidance on their oversight of Medicaid managed care. Oversight of managed care is increasing in importance, and improvements in measuring the improper payment rate are needed. For example, the estimated improper payment rate for managed care is based on a review of payments made to managed care organizations and does not include a review of any underlying medical documentation. GAO and the Department of Health and Human Services (HHS) Office of Inspector General have identified incomplete and untimely managed care encounter data—data that managed care organizations are expected to report to state Medicaid programs, allowing states to track the services received by beneficiaries enrolled in managed care.

Provider Eligibility: PPACA included multiple provisions aimed at strengthening the screening of providers who enroll to participate in Medicaid. While the act requires that all providers and suppliers be subject to licensure checks, it gave CMS discretion to establish a risk-based application of other screening procedures, such as fingerprint-based criminal background checks for high-risk providers. Also, CMS regulations now require that all Medicaid managed care providers enroll with the state Medicaid agency, which has the potential to improve oversight of providers in managed care. However, GAO's work based on 2 states and 16 health plans identified challenges in screening providers for eligibility, partly due to fragmented information.

Coordination between Medicaid and the Exchange: CMS implemented a number of policies and procedures to ensure that individuals do not have duplicate coverage (enrollment in both Medicaid and subsidized coverage through an exchange, a marketplace where eligible individuals may compare and purchase private health insurance). CMS has conducted checks to identify individuals with duplicate coverage, and plans to complete these checks at least two times per coverage year, which has the potential to save federal—as well as beneficiary—dollars. However, CMS has not developed a plan for assessing whether the checks and other procedures—such as thresholds for the level of duplicate coverage deemed acceptable—are sufficient to prevent and detect duplicate coverage. 
By the late 1970s, the First City Bancorporation of Texas, Inc., through its subsidiary banks, had a high concentration of loans to the energy industry in the Southwest United States and was regarded as a principal lender in that industry. In the early and mid-1980s, when the energy industry experienced financial difficulties, so did First City. By 1986, First City was reporting operating losses. First City, its regulators, and FDIC recognized that many of the subsidiary banks could not survive without major infusions of capital. These parties agreed that the capital needed for long-term viability could not come wholly from the private sector due to the financially strained condition of the Southwest’s economy and banking industry. A chronology of events leading to FDIC’s open bank assistance in 1988 and the final resolution of the First City banks in 1993 appears below. A description of changes in various legal authorities over the same period, 1987 to 1993, is contained in appendix II. After considering available alternatives, FDIC and First City entered into a recapitalization agreement—commonly referred to as open bank assistance—that called for First City to reduce its subsidiary banks from nearly 60 to about 20. The agreement also required the creation of a “collecting bank” to dispose of certain troubled assets held by the subsidiary banks. The open bank assistance included a $970 million capital infusion from FDIC along with $500 million of private capital raised by the new bank management to restore First City’s financial health. As part of the agreement, FDIC received $970 million in preferred stock of the collecting bank. FDIC also received a guarantee, from both First City Bancorporation and the subsidiary banks, for $100 million payable in 1998 toward the retirement of the collecting bank preferred stock. The recapitalized First City banks embarked on a short-lived aggressive growth policy that resulted in First City banks’ assets increasing from about $10.9 billion as of April 19, 1988, to about $13.9 billion as of September 30, 1990. First City banks’ loan portfolios included high-risk loans, such as loans to finance highly leveraged transactions, international loans, and out-of-territory lending. During this period, First City reported $183 million in profits and paid $122 million in cash dividends. In part, the earnings used to justify the cash dividends were profits that depended on income from nontraditional and onetime sources, such as the sale of its credit card operations. By September 1990, problems with the quality of its loan portfolios not only caused operating losses but also started to erode First City’s capital. A 1990 OCC examination report strongly criticized the lending practices of First City’s lead bank, First City-Houston. Some of its loan losses resulted from continued deterioration in loans made before April 1988. However, other losses were attributed to new loans associated with an aggressive risk-taking posture by new management combined with poor underwriting practices. During and immediately after OCC’s 1990 examination, First City made changes in the lead bank’s senior executive management, and OCC entered into formal supervisory agreements with First City’s Houston, Austin, and San Antonio banks. The agreements required each of the banks to achieve and maintain adequate levels of capital. 
They also required improvements in (1) underwriting standards, (2) bank management and board oversight, (3) strategic planning, (4) budgeting, (5) capital and dividend policies, (6) management of troubled assets, (7) internal loan review, (8) allowance for loan and lease losses (ALLL), (9) lending activities, and (10) loan administration and appraisals. According to OCC, First City bank management complied with substantially all of the provisions of the formal agreements, except the capital maintenance provisions. While First City significantly strengthened its underwriting criteria, reduced its aggressive high-risk lending practices, and initiated actions to recapitalize, these efforts did not prevent the First City banks from failing. Between September 30, 1990, and October 30, 1992, problems in the loan portfolios continued to mount. First City bank assets decreased from about $13.9 billion to about $8 billion, and First City incurred total losses of about $625 million. Most of the post-recapitalization losses were from loans at First City's lead bank in Houston and its second-largest bank in Dallas. Among the primary reasons for the banks' financial difficulties were the continued decline in the Texas economy, weaker-than-anticipated loan portfolios in the recapitalized banks, questionable lending activity by First City management within the first 2 years of the recapitalization, and high bank operating expenses. OCC, as the primary federal bank regulator for the lead bank, projected in early 1991 that operating losses would deplete the capital of this bank by year-end 1992. Later, on the basis of First City's operating results, OCC projected that by the end of 1992 bank losses would either (1) deplete the capital at the Houston bank and cause its insolvency or (2) erode the bank's capital to less than 2 percent of its assets, in which case OCC had the authority to close the bank effective December 19, 1992, in accordance with the prompt corrective action provisions of the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA). The Federal Reserve System (FRS)—the primary federal bank regulator for the Dallas bank—also projected its likely insolvency by the end of 1992. Under the cross-guarantee provisions of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), FDIC could require the 18 otherwise solvent First City banks to reimburse FDIC for any anticipated losses resulting from the failures of the Houston and Dallas banks. FDIC staff advised the FDIC Board that the capital of the 18 banks would not be sufficient to cover the projected losses from the 2 insolvent banks, and that the application of the cross-guarantee provision could result in the insolvency of all 20 First City banks. OCC, FDIC, and the Texas Banking Commissioner closely monitored the First City banks following the recapitalization, and along with FRS, shared information concerning the Houston and Dallas banks' deteriorating financial conditions. Following its September 1991 and January 1992 examinations of First City-Houston, OCC advised First City that its future viability could not be ensured without a significant capital infusion. Similarly, FRS's examination of the holding company in 1991 found that First City lacked adequate capital to support its network of subsidiary banks. Beginning in June 1991, representatives of OCC began working with their counterparts at FDIC on a plan for early intervention and resolution of First City-Houston. 
These efforts were intense and ongoing throughout 1991 and into 1992. During 1991 and 1992, First City, OCC, and FDIC considered a number of alternative resolution plans. The resolution plans considered by First City involved three types of transactions: (1) Type A—Acquisition of First City banks by a stronger, well-capitalized banking company. (2) Type B—Major capital infusion by an outside investor or investor groups. (3) Type C—A self-rescue by the banking organization through some combination of consolidation or sale of subsidiary banks, new capital, and FDIC concessions and financial support. Initially, First City favored a Type B transaction and identified a number of potential investors as possible sources of significant new capital. However, anticipating that raising new capital would be extremely difficult given the banking organization's precarious financial position and continuing OCC concerns with bank management, First City did not actively pursue a Type B transaction. Thereafter, it pursued a Type A transaction almost exclusively. In October 1991, OCC, FDIC, and First City developed a proposal (Type A transaction) that called for the banks' acquisition by a stronger institution, with the possible need for FDIC financial assistance. In late 1991, FDIC's Division of Resolutions (DOR) staff contacted a number of banking organizations to assess their interest in acquiring the First City banks. While a number of institutions expressed considerable interest in the First City banks and conducted in-depth reviews of bank operations, only one institution ultimately submitted a bid. DOR staff recommended that the bid be rejected for a number of reasons, including its estimated $240 million cost to FDIC, which was higher than FDIC's estimate of $179 million to liquidate the banks at that time. DOR staff also asserted that the proposal was not in the best interest of FDIC because it contained several items that were difficult to quantify and would require costly negotiations with the acquirer. DOR staff asserted that these negotiations could significantly delay completion of any open bank assistance until after December 1992, when FDIC projected the banks would become insolvent. After the Type A transaction for open bank assistance was rejected by FDIC, First City developed two new self-rescue proposals (Type C) to recapitalize the troubled banks. In July 1992, First City submitted its first self-rescue plan, which called for the closure and immediate reopening of four of the largest First City banks under the control of an acquiring bank. Under this proposal, the acquiring bank would purchase about $7.5 billion of First City banks' performing and fixed assets, and FDIC would enter into a loss-sharing agreement with the acquiring bank for the remaining $1.2 billion of troubled assets. DOR staff recommended this alternative be pursued because they estimated no losses to the Bank Insurance Fund (BIF). According to the staff's projections, a combination of the financial commitments made by the acquiring bank and the ALLL previously established by First City banks could absorb additional deterioration that might occur in the quality of the loan portfolios. However, the FDIC Board was concerned about the ability of First City bank management to execute the proposal. In August 1992, the FDIC Board rejected the proposal mainly because of a condition in the plan that required FDIC to guarantee payment in full for all deposits, including uninsured deposits. 
In August 1992, First City management submitted another self-rescue proposal to OCC to recapitalize the banks. This proposal called for First City to merge its four largest banks—Houston, Dallas, Austin, and San Antonio. This plan also called for 13 of the remaining 16 First City banks to be sold for an estimated $200 million. An additional $100 million in new equity capital would be raised through a stock offering to new investors and current shareholders. Approximately $96 million would be raised through cost savings from proposed renegotiation of long-term leases. Finally, the proposal would have required FDIC to make concessions totaling over $100 million—stemming largely from the 1988 open bank assistance. The plan projected that the reconstituted and recapitalized First City banks would work their way back to profitability. According to First City documents, it had received commitments from potential investors and landlords needed to raise more than $300 million in capital. OCC’s analysis of First City’s August self-rescue proposal concluded that the plan lacked viability due to an estimated $200 million capital shortfall at the reconstituted banks. OCC concluded that the plan did not provide sufficient incoming capital to cover asset quality problems and to provide the capital base required to reestablish the banks for long-term viability. OCC documents also showed that the planned lease renegotiations would not result in the projected savings. Finally, OCC also believed that First City would not be able to raise sufficient capital through stock issuances. Shortly after receipt of First City’s August 1992 self-rescue plan, OCC determined that an up-to-date examination was necessary to evaluate the likelihood that the plan would result in long-term viability for First City. The examination of the Houston bank, which began in late August 1992, focused on problem loans. OCC noted significant deterioration in several large loans since its last examination. On the basis of the results of its August examination, OCC determined that the bank had underestimated its ALLL by about $67 million. This amount exceeded the Houston bank’s existing equity capital of about $28 million, thus making the Houston bank insolvent and requiring OCC to close it. The Examiner-In-Charge (EIC) and other OCC officials told us that their adjustment of ALLL was based on both objective and subjective considerations. They said they gave consideration to First City-Houston’s history relating to its management’s inadequate recognition of loan quality problems and provision for ALLL. The OCC officials said they were also concerned about deteriorating financial conditions at the bank as reflected in dangerous classification trends within its loan portfolio, whereby a higher percentage of loans were recognized as troubled loans and the bank had not experienced the same recovery pattern as experienced by most banks. Further, OCC officials said they were concerned about the bank’s financial condition relative to other comparable institutions. In comparing First City’s ALLL to that of peer institutions, OCC said that it found that First City had maintained an ALLL level far below that of its peers. OCC said that given First City’s asset problems, it believed that First City’s ALLL should have been far higher than the peer average. OCC officials said they were also concerned about the weakening economic conditions in Texas and First City’s ability to overcome its problems in this environment. 
Finally, OCC officials said that, by this time, they had lost confidence in First City’s management and its processes for establishing proper reserve levels. On October 16, 1992, OCC advised FDIC of its latest examination findings and its plans to close First City-Houston as soon as practicable so that FDIC could resolve it in an orderly manner. FDIC advised OCC that FDIC could accelerate its projected December 1992 resolution to October 30, 1992, in light of the OCC examination findings. Accordingly, on October 30, 1992, OCC declared the First City-Houston bank insolvent and appointed FDIC receiver. On that same day, the Texas Banking Commissioner closed First City-Dallas on the grounds of imminent insolvency, and FDIC exercised its statutory authority to issue immediately payable cross-guarantee demands on the remaining 18 First City banks. This resulted in the closure of the entire First City banking organization on October 30, 1992. After being advised of OCC’s examination findings, FDIC considered two basic alternatives to provide for the orderly resolution of the First City banks: (1) liquidate them immediately or (2) place them under FDIC control and operate them as bridge banks until a sale could be arranged. FDIC chose the latter alternative, which would provide time for FDIC to compare the cost of liquidation to the cost of selling the banks based on bids it planned to solicit after the banks failed. FDIC assumed potential acquirers would be interested in purchasing the banks only if FDIC removed certain risks associated with asset quality problems, potential litigation liabilities, and costly contractual obligations. The January 1993 sale attracted bids from 30 potential acquirers and resulted in the sale of all 20 of the bridge banks. At the time of sale, FDIC estimated the sale would result in a gain, or surplus, of about $60 million—substantially different from the $500 million loss that FDIC had estimated 3 months earlier. FDIC officials said they were astonished by the proceeds. After resolution and liquidation expenses are paid, FDIC is to return any surplus to First City creditors and shareholders. Shortly after the First City banks were closed, the holding company filed lawsuits on behalf of the shareholders. The lawsuits asserted, among other things, that federal and state banking regulators acted without regard to due process and illegally and unnecessarily closed a solvent banking organization. More specifically, the lawsuits allege that OCC wrongfully closed the lead national bank and that the Texas Banking Commissioner wrongfully closed First City-Dallas. The lawsuits also asserted that FDIC, as the insurer, was responsible for the inappropriate closure of the financially sound First City banks. According to the suit, FDIC used its cross-guarantee authorities to execute the agency’s preconceived plan to gain control of the First City banking organization. The holding company asserted that FDIC’s use of its cross-guarantee provisions was both inappropriate and unnecessary, and violated the Fifth Amendment of the Constitution. The suit also noted that on numerous occasions during the summer of 1992, First City Bancorporation offered to merge all the First City banks and restore the capital at the troubled banks. The holding company asserted that if the regulators had approved such an action, their plans to close the First City banks could not have been carried out. 
In 1988, FDIC could have waited until the First City banks were insolvent and either liquidated them or sold them to interested potential acquirers. However, FDIC determined that providing $970 million in assistance to the First City banks was the best alternative available. When FDIC approved First City’s open bank assistance, FDIC’s resolution alternatives were limited by both regulatory requirements and economic conditions. In April 1988, OCC could not have closed First City banks for insolvency because, at that time, OCC could close a bank for insolvency only when a bank’s primary capital was negative. At the time, a bank’s primary capital was defined by OCC as the sum of the bank’s retained earnings and the bank’s ALLL. Although First City had negative retained earnings of $625 million, it also had $730 million in ALLL; hence, it had positive primary capital of $105 million. Additionally, in the mid-1980s, the Texas banking industry was experiencing its worst economic performance since the Great Depression, which limited FDIC’s resolution alternatives. According to FDIC, the economic conditions increased the cost to liquidate troubled banks and reduced the number of potential acquirers. Consequently, FDIC considered two resolution alternatives in August 1987. One was to allow First City losses to continue to mount until the banks’ primary capital was depleted, then either liquidate or operate First City banks as bridge banks until potential acquirers could be found. Under the other alternative, FDIC could have provided open bank assistance to willing acquirers of the First City banks—as long as the estimated cost of assistance was less than the estimated cost of liquidation to the insurance fund. FDIC decided against the first alternative for three reasons. First, FDIC believed that allowing First City banks to continue to deteriorate could jeopardize the stability of the regional banking industry. FDIC also was unsure about operating First City as a bridge bank because bridge banks were new to FDIC (the agency had received bridge bank authority in August 1987). Second, the First City banks were far too large and complex to be the agency’s first bridge banks, in FDIC’s opinion. And third, FDIC rejected liquidation because estimated liquidation costs were determined to be higher than the estimated cost to the fund for open bank assistance. FDIC approved $970 million of open bank assistance as the best resolution alternative available. A total of eight parties expressed interest in acquiring the troubled banks, and three submitted bids. FDIC’s estimates of potential insurance fund commitments based on those bids ranged from the $970 million for open bank assistance to $1.8 billion for the bid most costly to the insurance fund. According to FDIC records, one of the bids led to estimated fund costs as low as $603 million, but FDIC found that the bidder had used overly optimistic assumptions in the offer. When adjusted, the insurance fund cost of that bid was nearly $1.3 billion. The Federal Reserve Board (FRB) approved the change of control of these recapitalized banks to the new First City bank management with reservations. FRB’s memo approving the change of control warned the new management that assumptions agreed upon by FDIC and First City and used to forecast the banks’ road to recovery were optimistic. It also warned that if regional economic conditions did not drastically improve, the recapitalization effort was not likely to succeed. 
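The closure constraint described above can be illustrated with a short worked equation. The following is a minimal sketch that uses only the figures cited in this report and the pre-1989 definition of primary capital as retained earnings plus the ALLL; it is illustrative arithmetic, not a reconstruction of OCC's actual capital computation.

\begin{align*}
\text{Primary capital} &= \text{retained earnings} + \text{ALLL} \\
 &= -\$625 \text{ million} + \$730 \text{ million} \\
 &= \$105 \text{ million} > 0.
\end{align*}

Because this measure remained positive in April 1988, OCC could not have closed the First City banks for insolvency under the definition then in effect.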
We reviewed the First City banks' performance following the recapitalization to identify the factors that contributed to the October 1992 failures. We found that the failures resulted from a combination of factors, including the payment of dividends to shareholders, deteriorating loan portfolios, and relatively high operating costs. These findings are described in appendix III. On October 28, 1992, the FDIC Board determined that placing the failed First City banks into interim bridge banks constituted the least costly and most orderly resolution of First City's financial difficulties. On that date, the FDIC Board considered three alternatives. Two involved bridge bank resolutions and the third called for a liquidation of First City banks' assets. The difference between the two bridge bank alternatives was that one alternative contained a loss-sharing agreement on a selected pool of troubled assets. Under this agreement, the acquirer would manage and dispose of the asset pool, and FDIC would reimburse the acquirer for a portion of the losses incurred when selling those assets. The other bridge bank alternative did not provide for loss sharing. The purpose of the two bridge bank alternatives was to provide for an orderly resolution by continuing the business of the banks until acceptable acquirers could be found. FDIC's belief was that the bridge banks would preserve the First City banks' value as going concerns while FDIC marketed them. FDIC estimated that a bridge bank resolution would minimize BIF's financial exposure. FDIC was aware of various parties' interest in acquiring the banks. However, FDIC believed that the potential acquirers would be interested in the banks only after they were placed in receivership, since, after closure, new bank management could renegotiate contractual and deposit arrangements with bank servicers and customers. FDIC staff estimated resolution costs to BIF ranging from a low of about $700 million (bridge bank with loss sharing) to a high of over $1 billion (FDIC liquidation). FDIC estimated both bridge bank alternatives to be less costly than a liquidation primarily because of the likelihood that FDIC would be able to obtain a premium, or a cash payment, from potential acquirers who would be assuming the deposits of the bridge banks. In a liquidation, no such premium would be paid because FDIC pays the depositors directly instead of selling the right to assume the deposits to an acquirer. FDIC also estimated that it could minimize the losses to the insurance fund if it provided loss sharing. While the FDIC Board believed that a bridge bank with a loss-sharing arrangement was the most orderly and least costly alternative presented by DOR, the ultimate cost of resolving the First City banks was uncertain. DOR staff's initial cost model, which was based on the estimated proceeds and costs of each resolution alternative, estimated that a bridge bank resolution with loss sharing would cost about $700 million. This estimate was based largely on an asset valuation review performed for DOR by an outside contractor. Representatives from FDIC's Division of Liquidations (DOL), which was responsible for disposing of assets assumed by FDIC, said that liquidating the First City banks would likely cost more than $1 billion. Other FDIC officials—including senior-level DOR officials—said that because of the considerable market interest in the banks on a closed-bank basis, the cost to resolve First City banks would likely be about $300 million. 
The Board determined that placing First City banks into interim bridge banks would cost the insurance fund about $500 million. The then DOR Director told us that the fact that the Board did not rely solely on the initial DOR cost model was not a deviation from the normal resolution process. He explained to us that the resolution process is dynamic and takes into account FDIC Board deliberations. He noted that it was his responsibility to advise the Board regarding the merits and shortfalls associated with the DOR asset valuation process. He pointed out that DOR’s asset valuations estimated the net realizable value for failed bank assets disposed of by FDIC through a liquidation. The methodology determining net realizable value of assets may not always reflect the market value of assets disposed of through such resolution alternatives as an interim bridge bank. Typically, a going concern (including a bridge bank) establishes asset values that attempt to maximize the return to the investor regardless of the period the assets may be held. Net realizable asset valuation in a liquidation, on the other hand, attempts to maximize the return to the investor given a limited holding period, often less than 2 years. According to FDIC documents used in its Board’s deliberations, the October 1992 decision to place the First City banks in bridge banks and commit about $500 million to resolve First City was the least costly of the three alternatives the FDIC Board formally considered when the banks were closed. During the year preceding the failure, FDIC and OCC considered and rejected a number of alternatives to resolve the First City banks because the alternatives were considered too costly, did not ensure the banks’ long-term viability, or included provisions that were unacceptable from a policy perspective. As previously discussed, OCC had projected that operating losses, caused by imbedded loan portfolio problems, would render First City banks insolvent by December 1992. However, OCC’s determination that the Houston bank was insolvent in October 1992 accelerated First City banks’ closure by about 2 months. FDIC officials believed the earlier than projected closure unintentionally but effectively precluded either previous or new potential acquirers from doing due diligence, i.e., determining the value of the bank assets, deposits, and other liabilities necessary to ascertain their interest in bidding on the First City banks at the time of closure. Although FDIC initially estimated the cost to BIF of the October 1992 resolution of First City banks to be $500 million, the agency has since projected that this resolution will result in no cost to BIF. When it announced the sale of the First City banks in January 1993, FDIC estimated the proceeds generated from the sale would amount to a surplus of about $60 million. In June 1994, FDIC estimated that the surplus may exceed $200 million. As mentioned earlier, any surplus remaining after payment of FDIC’s resolution expenses is to be returned to First City’s creditors and shareholders. According to FDIC’s analysis of the resolution, sales proceeds were higher than FDIC expected largely because acquirers paid a 17-percent premium for the banks—substantially more than the 1-percent premium on deposits that FDIC had estimated in arriving at the $500 million loss estimate. According to FDIC officials, a deposit premium of 1 percent was typical for failed bank resolutions contemporaneous with the 1992 First City bank resolution. 
Some FDIC officials, however, told us that at least part of the premium paid by the acquirers should be attributed to the value the acquirers placed on First City bank assets. Since acquirers do not specify in their bids how much they are willing to pay for assets or deposits, neither we nor FDIC can determine the exact basis for the premium. As of June 1994, FDIC projected that settlement of the lawsuits by First City Bancorporation would result in no cost to BIF. FDIC's projection was based on the assumption that the estimated surplus from the bridge bank sale would exceed its costs to resolve and liquidate the bridge banks, with any excess ultimately to be paid to the holding company. On June 22, 1994, FDIC and the holding company signed a settlement agreement under which First City would immediately receive in excess of $200 million. The settlement would allow First City Bancorporation to pay its creditors and permit a distribution to its shareholders sooner rather than later. Basic tenets of this proposed settlement are (1) BIF will incur no loss in connection with the 1992 resolution of the First City banks and (2) FDIC will not receive more than its out-of-pocket costs to resolve the banks. Consistent with these tenets, the proposed settlement provides for FDIC to receive the net present value of over $100 million, largely based on First City's guarantee to pay in 1998 toward the retirement of the collecting bank preferred stock FDIC received in return for the 1988 open bank assistance. Any settlement between the two parties cannot be consummated until it is approved by the bankruptcy court. FDIC officials anticipate a decision on the settlement in early 1995. Generally, the processes used in providing financial assistance, closing banks, and resolving troubled banks should always include adequate safeguards for BIF. The events surrounding the First City resolutions offer valuable lessons for FDIC as the insurer and for all of the primary bank regulators. These lessons relate to how to better assist, close, or otherwise resolve troubled institutions in the future. Consultation between regulatory agencies might have led FDIC to adopt more realistic assumptions concerning the likelihood of success of the $970 million open bank assistance provided First City in 1988. When FDIC and the new First City management forecast the First City banks' success in 1988, a key economic assumption was that the economies of Texas and the Southwest would reverse their recessionary trend and grow at about 3 percent per year to mirror the growth rate of the national economy during the mid-1980s. However, the Texas economy grew an average of only 2.2 percent per year between 1989 and 1991. Furthermore, by the late 1980s and early 1990s, the national economy, which had been growing at about 3 percent per year, started to weaken and experience its own recessionary conditions. While approving the change of control to the new First City bank management, FRB raised a concern that these economic assumptions were too optimistic and, if not realized, could jeopardize the success of the recapitalized banks. If FDIC had consulted with its regulatory counterparts in FRS and OCC on economic and financial assumptions for the economy and market in which the assisted bank would operate, it would have had a broader base for, and greater confidence in, the economic assumptions used as a basis to approve the open bank assistance. 
Such consultation might have produced more realistic assumptions and a better understanding of the likelihood that the financial assistance that FDIC provided could be successful. The financial assistance agreement could have included safeguards to better ensure that First City undertook only those operations that were within its capabilities and capacities. At the time of the open bank assistance, the new management of the First City bank projected relatively modest growth, primarily in traditional consumer lending activities. However, under pressure to generate a return for its investors through earnings and dividends, the management pursued much riskier lending and investment activities than it had described in its reorganization prospectus. In addition to taking more risks, the new bank management did not have the expertise, policies, or procedures in place to adequately control these activities. Further, the new bank management entered into contractual arrangements based on projected growth that, when not realized, resulted in higher operating expenses than the bank could sustain. FDIC’s assistance agreement did not include sufficient safeguards to ensure that the new bank management actually pursued a business strategy comparable to the one agreed upon as being prudent, or that the bank’s activities were in line with management’s capabilities or the bank’s capacities. In retrospect, such safeguards could have been specified in the agreement. For example, according to the reorganization prospectus, First City projected that it would expand its overall loan portfolio an average of about 10 percent per year for the first 3 years after the recapitalization. The new bank management projected that consumer, credit card, and energy loans would grow at significantly higher rates than the overall loan portfolio. Management also projected little growth in the riskier areas of real estate and international lending. Contrary to those projections, overall lending activity grew by only about 3 percent in 1989 and actually declined by about 3 percent in 1990 and by over 31 percent in 1991. First City sold its credit card portfolio in early 1990. In addition, First City’s actual real estate and international loans accounted for far greater percentages of its total loan portfolio than projected in the 1988 prospectus. FDIC’s financial assistance agreement with First City did not contain provisions requiring First City’s management to develop specific business strategies reflecting safe and sound banking practices and internal control mechanisms safeguarding FDIC’s investment in the First City banks. Shortly after the recapitalization, OCC examiners criticized the management of First City’s Houston bank for not having established policies and procedures to manage the risk associated with the bank’s highly leveraged transaction loans. Consequently, OCC directed the bank to establish policies and procedures to minimize the risks of those transactions. OCC similarly directed the bank management to establish policies and procedures related to the Houston bank’s international lending activities. In the meantime, First City bank management paid dividends based on income derived from its lending activity as well as from extraordinary events, such as the sale of its credit card operations. While such payments were permissible under the law at the time, they did not help the bank retain needed capital. 
Consequently, First City banks lacked sufficient capital to absorb the losses stemming from their lending activities. Further, First City-Houston entered into long-term contractual arrangements for buildings and services, such as data processing, that were based on overly optimistic projections of future growth. When that growth was not realized, the overhead costs related to these arrangements proved to be a drain on earnings and contributed to the bank’s failure. FDIC would have been in a better position to avoid the risks associated with these banking practices if it had strengthened the open assistance agreement by including provisions to (1) require bank management to develop business strategies relative to its market, expertise, and operational capabilities; and (2) control the flow of funds out of the bank through dividends, contractual arrangements, and other activities, such as management fees paid to the holding company or affiliates. The provisions could have been structured so that the primary regulator held bank management accountable for compliance with them. Such a structure could have involved having bank management stipulate that it would comply with specific assistance agreement provisions. Such a stipulation would have allowed the primary regulator to monitor the bank’s adherence to the key provisions of the assistance agreement, including the development of specific business strategies and lending policies and procedures. The primary regulator would then have had the information and authority necessary to take the appropriate enforcement action to ensure compliance with the key provisions of the agreement. Banks are required to follow statutory limitations on dividend payments provided in 12 U.S.C. §§ 56 and 60. While the regulations implementing the statutes and governing the payment of dividends have been tightened since 1988, banks are still authorized to pay dividends, as long as they satisfy the FDICIA minimum capital requirements. FDIC could have better controlled the flow of funds from the assisted banks by either limiting dividend payments or requiring regulatory approval based on the source of dividends. Such controls are typically used by FDIC and other regulators in enforcement actions when they have reason to be concerned about the safety and soundness of a bank’s practices or condition, and they could have been used in a similar manner in the First City assistance agreement. OCC could have better documented the bases for its closure decision had its examination reports and workpapers been clear, complete, and self-explanatory. Congress authorized the Comptroller of the Currency, as the charterer of national banks, to close a national bank whenever one or more statutorily prescribed grounds are found to exist, including insolvency. It is generally agreed in the regulatory community that closure decisions should be supported by clear, well-documented evidence of the grounds for closure. Thus, OCC and other primary regulators’ bank examination reports and underlying workpapers supporting closure decisions need to be complete, current, and accurate and provide documentation of the bases for closure decisions that is self-explanatory. However, we were unable to determine the basis for the OCC examiners’ finding that First City-Houston’s ALLL was insufficient solely from our review of the examination report or workpapers. 
Specifically, the examination report that OCC conveyed to Houston bank management did not fully articulate the basis for OCC's finding that the bank's ALLL was inadequate. From our review of OCC's workpapers, we were unable to reconstruct the analysis performed to arrive at the need to increase the Houston bank's ALLL. We had to supplement the information in the workpapers with additional information obtained through discussions with the EIC and senior-level OCC officials in order to determine how OCC arrived at its decision to require First City-Houston to increase its ALLL by $67 million. OCC officials were able to provide additional clarifying information on the basis for this finding. Although some information regarding these concerns was included in the examination workpapers, it was not sufficient for us to independently follow how OCC's examiners arrived at the basis for their conclusion that First City-Houston's ALLL was insufficient. Thoroughly documented workpapers would also have provided OCC and FDIC with a clear trail of the examiners' methodology, analytical bases of evidentiary support, and mathematical calculations. This would have precluded the need for resource expenditures to reconstruct or verify the basis for examiners' conclusions. Workpapers are important as support for the information and conclusions contained in the related report of examination. As described in OCC's examination guidance, the primary purposes of the workpapers include (1) organizing the material assembled during an examination to facilitate review and future reference, (2) documenting the results of testing and formalizing the examiner's conclusions, and (3) substantiating the assertions of fact or opinions contained in the report of examination. When examination reports and workpapers are clear and concise, independent reviewers, including those affected by the results, should be able to understand the basis for the conclusions reached by the examiner. OCC officials agreed that the First City examination workpapers should have included a comprehensive summary of the factors considered in reaching the final examination conclusions, especially regarding such a critical issue as a determination of bank insolvency. FDIC's DOR could have considered information from the primary regulator relative to asset quality in making its resolution decisions. In situations like First City, where the primary regulator had just extensively reviewed a high proportion of the loan portfolio as part of a comprehensive examination and found deficiencies in the bank's loan classification and reserving processes, FDIC resolutions officials should have been able to utilize the examination findings, at least as a secondary source, to test their asset valuation assumptions. This would have been particularly useful because the failure came on short notice and some FDIC officials had reservations about some of the underlying assumptions. OCC examination officials were apparently communicating with their FDIC examination counterparts about the accelerated First City-Houston bank examination. Even so, FDIC's DOR officials could have benefited from earlier information on OCC's preliminary findings, which indicated that First City-Houston would be insolvent before December 1992, the date by which all affected parties had anticipated insolvency. This information would have provided DOR more lead time to consider a wider range of resolution alternatives, including soliciting bids from parties it knew to be interested in acquiring the banks. 
FDIC officials, however, did not believe the interested parties would submit bids since neither they nor FDIC had an opportunity to perform due diligence on the First City bank assets on such short notice. DOR officials could have used the OCC examiners’ assessment of asset quality as a means of verifying the asset valuations estimated through its own techniques. This would have been similar to the way FDIC uses its research model on smaller resolutions, i.e., as an independent check against the valuations. Also, the FDIC Board could have used such information since it was not confident that the more traditional resolution estimating techniques provided reliable results for the circumstances relative to the failing bank. The going concern valuation used by OCC examiners may even have been more relevant than the net realizable valuation used by DOR because FDIC expected a bridge bank or open bank assistance resolution to be the most orderly and least costly resolution alternative. FDIC and OCC provided written comments on a draft of this report, which are described below and reprinted in appendixes IV and V. FRS also reviewed a draft, generally agreed with the information as presented, but provided no written comments. FDIC described the report as being well researched and an overall accurate recording of the events that led up to and through the 1988 and 1992 transactions. FDIC offered further information and explanation related to the two transactions, including reasons why some of the lessons to be learned could not have been used by FDIC in 1988 and 1992 or would not have altered the outcomes of these transactions. FDIC further stated, however, that it will consider the lessons enumerated in the report and, where appropriate, incorporate them into future resolution decisions. We believe FDIC’s elaborations about the 1988 and 1992 transactions provide meaningful insights about its assistance and resolutions processes. The Executive Director, in later discussions about FDIC’s written comments, assured us that FDIC is open and receptive to the lessons to be learned, and his elaborations were intended to explain the bases for FDIC’s decisions and why other positions were not considered or taken at the time of the transactions. OCC raised concerns that the report might create an inference that we were questioning OCC’s basis to close the First City banks and about our suggestion that OCC needs to improve the quality of its examination reporting and workpaper documentation. OCC believes its basic standards for examiner documentation are appropriate for supervisory oversight and examiner decisionmaking purposes. While OCC believes its basic approach to be sound, including its documentation practices, it will consider our views in reviewing current examination guidance for potential revision to provide clarity, ensure consistency, and reduce burden. Our study was basically intended to provide an accurate accounting of the events, involving both the banks and regulators, that led to the 1988 and 1992 transactions to resolve First City. In compiling this account, we identified lessons to be learned from the First City experience that could potentially improve the insurer’s and regulators’ open bank assistance, bank closure, and bank resolution processes. 
We did not question the bases used by the insurer or regulators in making decisions relative to First City, but instead we looked for opportunities to improve those processes to ensure the insurer's and regulators' interests are adequately protected in making future decisions. The insurer and regulators, including FRS, generally agreed to consider the lessons to be learned from the First City experience to improve their processes. We will provide copies of this report to the Chairman, Federal Deposit Insurance Corporation; the Comptroller of the Currency; the Chairman of the Federal Reserve Board; and the Acting Director of the Office of Thrift Supervision. We will also provide copies to other interested congressional committees and members, federal agencies, and the public. This review was done under the direction of Mark J. Gillen, Assistant Director, Financial Institutions and Markets Issues. Other major contributors to this review are listed in appendix VI. If you have any questions about the report, please call me at (202) 512-8678. Concerned with FDIC's provision of $970 million in financial assistance to First City banks in 1988 and their ultimate failure less than 5 years later, the former Chairman of the Senate Committee on Banking, Housing, and Urban Affairs asked us to review the events surrounding First City Bancorporation of Texas' 1988 and 1992 resolutions and to use our review to reflect on FDIC's use of open bank assistance. As agreed with the Committee, we focused our review on First City's largest bank in Houston and its second-largest bank in Dallas, because the financial difficulties of these two banks resulted in the insolvency of First City's 18 other banks. Our objectives were to review the events leading up to First City's 1988 open bank assistance and its 1992 bank failures to determine (1) why FDIC provided open bank assistance in 1988 rather than close the First City banks; (2) why the 1992 resolution estimate differed so much from the estimate resulting from the 1993 sale of the banks; (3) whether the First City banks' failures in 1992 are expected to result in additional costs to BIF; and (4) whether the First City experience provides lessons relevant to the assistance, closure, and/or resolution of failing banks. To achieve our objectives, we reviewed examination reports and related available examination documents and workpapers relative to First City's Houston and Dallas banks and other subsidiary banks for 1983 through 1992. We began our review of examination reports with the 1983 examination because OCC officials told us that was when they first identified safety and soundness deficiencies in First City banks. The 1983 examination also precipitated the first supervisory agreement between First City management and the bank regulatory agencies. In reviewing the examination reports, we sought to obtain information on the condition of the banks at the time of each examination and the significance of deficiencies as identified by the regulators. We reviewed examination workpapers, correspondence files, and management reports to gain a broader understanding of the problems identified, the approach and methodology used to assess the conditions of the First City banks, and the regulatory actions taken to promote or compel bank management to address deficient conditions found by regulators. 
We also used the examination workpapers to compile lists of loans that caused significant losses to the banks, to try to compare the loan quality problems arising from loans made before the recapitalization with those arising from loans made by the new bank management. We interviewed the OCC examiners-in-charge of the 1989 examinations and all subsequent examinations to obtain their perspectives on the conditions found at the First City banks. We also interviewed OCC National Office officials to obtain their views on the adequacy of OCC's oversight of the banks. We reviewed all relevant examination reports, workpapers, and supporting documentation to assess their adequacy in explaining the positions taken by OCC relative to First City-Houston and the Collecting Bank. When we were unable to gain adequate information from the examination records, we sought further explanations from OCC examination officials and assessed those explanations when received. We also reviewed FDIC and FRS records of examinations and supporting documents, particularly those related to First City-Dallas. We also discussed issues relating to First City banks with FDIC and FRS officials. Further, we reviewed First City Bancorporation financial records and supporting documents and discussed issues relating to OCC, FDIC, and FRS oversight with First City officials. Finally, we reviewed FDIC records relating to First City's 1988 recapitalization and FDIC's 1992 and 1993 bridge bank decisions. We discussed issues relating to these actions with FDIC, OCC, FRS, and First City officials to obtain their viewpoints on the actions taken. We also reviewed FDIC, OCC, and FRS records assessing the economy and the conditions of Texas financial institutions from the mid-1980s to the early 1990s. FDIC and OCC provided written comments on a draft of this report. FRS also reviewed a draft, generally agreed with the information as presented, but provided no written comments. The agencies' written comments are presented and evaluated on page 21 of the letter and reprinted in appendixes IV and V. We did our work between January 1993 and June 1994 at FDIC, OCC, and FRS in Washington, D.C.; at FDIC, OCC, and FRS in Dallas; and at the First City banks in Houston and Dallas. We did our work in accordance with generally accepted government auditing standards. The 1980s and the early 1990s were tumultuous times for the banking industry, especially in the Southwest. During this time, the banking industry experienced record profits followed by record losses, and a number of legislative and regulatory changes altered both the way banks did business and the way banks were regulated. The responsibility for regulating federally insured banks is divided among three federal agencies. OCC is the primary regulator for nationally chartered banks. FRS regulates all bank holding companies and state-chartered banks that are members of FRS. FDIC regulates state-chartered banks that are not members of FRS. FDIC is also the insurer of all federally insured banks and thrifts, which gives it the dual role of being both the regulator and insurer for many banks. The primary role of federal regulators is to monitor the safety and soundness of the operations of both individual banks and the banking system as a whole. The regulators' major means of monitoring the banks is through the examination process. 
Examinations are intended to evaluate the overall safety and soundness of a bank’s operations, compliance with banking laws and regulations, and the quality of a bank’s management and directors. Examinations are also to identify those areas where bank management needs to take corrective actions to strengthen performance. When a regulator identifies an area where the bank needs to improve, it can require the bank to initiate corrective action through either formal or informal measures. These measures can be as informal as a comment in the examination report or as severe as the regulator ordering the bank to cease and desist from a particular activity or actually ordering the closure of the bank. The role of the insurer is to protect insured depositors in the nation’s banks, help maintain confidence in the banking system, and promote safe and sound banking practices. As the insurer of bank deposits, FDIC may provide financial assistance for troubled banks. The assistance may be granted directly to the bank or to a company that controls or will control it. FDIC may also grant assistance to facilitate the merger of banks. When a chartering authority closes a bank, it typically appoints FDIC as receiver for the bank. FDIC then arranges for insured depositors to be paid directly by FDIC or the acquiring bank and liquidates the assets and liabilities not assumed by the acquiring bank. Many banks, including First City’s, are owned or controlled by a bank holding company and have one or more subsidiary banks. Typically, in a bank holding company arrangement, the largest subsidiary bank is referred to as the lead bank. The subsidiary banks may or may not have the same types of banking charters, i.e., either national or state charters. Consequently, different regulators may be responsible for overseeing the lead bank and the other subsidiary banks in the organization, with FRS responsible for overseeing all bank holding companies. First City Bancorporation of Texas typified this structure. It consisted of a holding company, a nationally chartered lead bank, 11 other nationally chartered subsidiary banks, 5 state-chartered banks that were members of FRS, and 3 state-chartered banks that were not members of FRS. Hence, the First City organization was supervised and examined by all three federal bank regulators. Between the time FDIC first announced open bank assistance for First City in 1987 and its closure in 1992, a number of regulatory and legislative initiatives gave the federal government greater authority to deal with troubled financial institutions. Passage of the Competitive Equality Banking Act of 1987 (CEBA), the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), and the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) provided both regulators and the insurer greater authorities in dealing with troubled financial institutions. Their passage also provided the impetus for regulatory changes that granted regulators and the insurer greater authorities to close and resolve troubled financial institutions. The regulators’ expanded authority to close a bank is possibly one of the most significant changes that has occurred in the federal government’s oversight of banks. At the time of the 1988 First City reorganization, OCC had the authority to appoint FDIC as receiver for a national bank whenever OCC, through its examination of the bank, determined that the bank was insolvent. 
The National Bank Act did not define insolvency, and the courts afforded OCC considerable discretion in determining the standard for measuring insolvency. OCC used two standards to measure insolvency—a net worth standard and a liquidity standard. Basically, a bank becomes net worth or equity insolvent when its capital has been depleted. Similarly, a bank becomes liquidity insolvent when it does not have sufficient liquid assets—i.e., cash—to meet its obligations as they become due, regardless of its net worth. Following the 1988 First City reorganization, OCC promulgated a regulation that allowed it to find a national bank insolvent at an earlier stage than before. Under the new rule, OCC redefined primary capital to exclude a bank’s allowance for loan and lease losses. Prior to this change, OCC considered a national bank’s regulatory capital to include not only its retained earnings and paid-in capital but also the allowance a bank had set up for loan and lease losses; i.e., for uncollectible or partially collectible loans. According to OCC, the change brought OCC’s measurement of a bank’s equity more closely in line with generally accepted accounting principles’ measurement of equity. While this action was not specifically required by FIRREA, OCC stated the change was within the spirit of the 1989 amendments to the federal banking laws. The cross-guarantee provisions of FIRREA also granted FDIC authority to recoup from commonly controlled depository institutions any losses incurred or reasonably anticipated to be incurred by FDIC due to the failure of a commonly controlled insured depository institution. As in the case of the First City banks, the cross-guarantee assessment may result in the failure of an otherwise healthy affiliated institution if the institution is unable to pay the amount of the assessed liability. This provision imposes a liability on commonly controlled institutions for the losses of their affiliates at the time of failure, thereby reducing BIF losses. The law gives FDIC discretion in determining when to require reimbursement and to exempt any institution from the cross-guarantee provisions if FDIC determines that the exemption is in the best interest of the applicable insurance fund. The manner in which FDIC can resolve troubled banks involves another significant set of changes that has occurred since FDIC announced First City’s first resolution in 1987. More specifically, FDICIA now requires FDIC to evaluate all possible methods for resolving a troubled bank and resolve it in a manner that results in the least cost to the insurance fund. Prior to FDICIA’s least-cost test, FDIC was required to choose a resolution method that was no more costly than the cost of a liquidation. Currently, the only exception to the least-cost determination is when the Secretary of the Treasury determines that such a selection would have a serious adverse effect on the economic conditions of the community or the nation and that a more costly alternative would mitigate the adverse effects. To date, the systemic risk exception has not been used. FDIC’s ability to provide open bank assistance has also undergone significant changes since FDIC assisted First City in 1988. At that time, FDIC was authorized to provide assistance to prevent the closure of a federally insured bank. FDIC was permitted to provide the assistance either directly to the troubled bank or to an acquirer of the bank. 
Before providing the assistance, FDIC had to determine that the amount of assistance was less than the cost of liquidation, or that the continued operation of the bank was essential to provide adequate banking services in the community. To implement these provisions, FDIC adopted guidelines that open assistance had to meet. The key guidelines are summarized below: The assistance had to be less costly to FDIC than other available alternatives. The assistance agreement had to provide for adequate managerial and capital resources (from both FDIC and non-FDIC sources) to reasonably ensure the bank’s future viability. The agreement had to provide for the assistance to benefit the bank and FDIC and had to include safeguards to ensure that FDIC’s assistance was not used for other purposes. The financial effect on the debt and equity holders of the bank, including the impact on management, shareholders, and creditors of the holding company, had to approximate what would have happened if the bank had failed. If possible, the agreement had to provide for the repayment of FDIC’s assistance. FDICIA placed additional limits on FDIC’s use of open bank assistance. FDICIA added a new precondition to FDIC’s authority to provide open assistance under section 13(c), which is summarized below. Under FDICIA, FDIC may consider providing financial assistance to an operating financial institution only if the following criteria can be met: (a) Grounds for the appointment of a conservator or receiver exist or likely will exist in the future if the institution’s capital levels are not increased and it is unlikely that the institution will meet capital standards without assistance. (b) FDIC determines that the institution’s management has been competent and has complied with laws, directives, and orders and did not engage in any insider dealing, speculative practice, or other abusive activity. In addition to the previously discussed statutory changes, FDICIA contained a resolution by Congress that encourages banking agencies to pursue early resolution strategies provided they are consistent with the new least-cost provisions and contain specific guidelines for such early resolution strategies. Since FDICIA, a further statutory limitation has been placed on open assistance transactions. Section 11 of the Resolution Trust Corporation Completion Act of 1993 prohibits the use of BIF and Savings Association Insurance Fund (SAIF) funds in any manner that would benefit the shareholders of any failed or failing depository institution. In FDIC’s view, as set forth in its report to Congress on early resolutions of troubled insured depository institutions, this provision “largely eliminates the possibility of open assistance, except where a systemic risk finding” is made pursuant to the least-cost provisions. Another change to FDIC’s resolution alternatives occurred when CEBA provided FDIC the authority to organize a bridge bank in connection with the resolution of one or more insured banks. Essentially, a bridge bank is a nationally chartered bank that assumes the deposits and other liabilities of a failed bank. The bridge bank also purchases the assets of a failed institution and temporarily performs the daily functions of the failed bank until a decision regarding a suitable acquirer or other resolution alternative can be made.
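The cost tests described above lend themselves to a simple illustration. The following sketch is purely illustrative, using hypothetical cost estimates rather than FDIC's actual valuation methods; it contrasts the pre-FDICIA standard (assistance no more costly than a liquidation) with FDICIA's least-cost selection among available alternatives, absent a systemic risk determination.

```python
# Purely illustrative sketch with hypothetical cost estimates;
# this is not FDIC's actual resolution cost model.

def pre_fdicia_assistance_permitted(assistance_cost, liquidation_cost):
    """Pre-FDICIA standard: open bank assistance could be provided if it was
    estimated to cost the insurance fund no more than a liquidation."""
    return assistance_cost <= liquidation_cost

def fdicia_least_cost_choice(cost_estimates):
    """FDICIA standard: absent a systemic risk determination, FDIC must select
    the resolution alternative with the lowest estimated cost to the fund."""
    return min(cost_estimates, key=cost_estimates.get)

# Hypothetical estimates, in millions of dollars.
print(pre_fdicia_assistance_permitted(assistance_cost=970, liquidation_cost=1740))  # True

alternatives = {
    "open bank assistance": 970,
    "purchase and assumption": 820,
    "liquidation (payoff)": 1740,
}
print(fdicia_least_cost_choice(alternatives))  # "purchase and assumption"
```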
To better understand some of the factors that contributed to the ultimate failure of the 1988 recapitalized First City banks, we reviewed First City’s activities from 1988 to 1990 as reflected in examination reports and workpapers. The results of that review are summarized in this appendix. First City Bancorporation banks’ reported profits in 1988, 1989, and 1990 depended on nontraditional sources of income that were not sustainable. These profits were then used to justify the payment of cash dividends during 1989 and 1990 that significantly reduced the banks’ retained earnings. Income from the Collecting Bank nearly equaled First City’s net income during 1988 and 1989, First City’s only profitable years. Furthermore, we found that if it were not for the $73 million in interest and fee income the Collecting Bank paid First City in 1988, the latter would have lost about $7 million that year. While First City’s 1989 net income did not completely depend upon the Collecting Bank’s interest and fees, we found that such income accounted for nearly $100 million of the $112 million in net income earned by First City during 1989. Another nontraditional source of First City income was generated in the first quarter of 1990 when First City sold its credit card portfolio for a $139 million profit. This sale enabled First City to turn an otherwise $49 million loss from operations into a $90 million net profit during the quarter that ended March 31, 1990. These nontraditional sources of income accounted for nearly all of First City’s net income during the first 2 years of operations after recapitalization and did not necessarily indicate a significant problem with First City’s operations. Nor is this reliance necessarily a basis for criticizing First City’s management. First City’s reliance on income from nontraditional sources could be explained as the result of initial start-up problems associated with taking over a large regional multibank holding company during a period of economic instability. What is noteworthy is that First City used the profits from nontraditional and onetime sources of income to pay $122 million in cash dividends, thereby decreasing the bank’s retained earnings. The assistance agreement’s only limitation on the payment of dividends was that common stock dividends could not exceed 50 percent of the period’s earnings. The anticipated success of the recapitalized First City was at least partially based upon the assumption that First City Bancorporation, including the Collecting Bank, would not experience further loan portfolio deterioration. This assumption proved to be incorrect. Problems with both pre- and post-recapitalization loan portfolios resulted in significant loan charge-offs and the depletion of bank equity. For example, we found that about $270 million in assets that originated prior to the recapitalization at First City’s Houston and Dallas banks resulted in nearly $75 million in losses. Furthermore, problems with pre-recapitalization assets also plagued the Collecting Bank. These problems forced First City to charge off nearly $200 million of Collecting Bank notes by the time the banks were closed in October 1992. First City also experienced significant problems with loans originated after the 1988 reorganization. We found that First City suffered about $300 million in losses on such loans.
Some of these losses occurred as a result of First City’s aggressive loan growth policy that increased its portfolio of loans to finance inherently risky, highly leveraged transactions. First City’s highly leveraged transaction lending peaked in 1989 at more than $700 million. Other significant losses resulted from First City’s international and other nonregional lending practices. Still other losses resulted from poor underwriting practices or adverse economic conditions. First City’s recapitalization prospectus predicted that the banks would realize savings of more than $100 million per year by reducing operating expenses to a level commensurate with industry standards. While First City realized at least some of the anticipated savings during its first 2 years of operations, it was unable to sustain these cost-cutting efforts. According to both OCC and FDIC, high operating expenses contributed to First City’s 1992 failure. As shown in table III.1, First City’s operating expenses did not decrease as First City’s net income, gross profits, and total assets decreased. Rather, First City’s operating expenses were the lowest during 1988 and 1989, when it reported year-end profits, and highest during 1990 and 1991, when it lost more than $380 million. Our review of First City’s escalating operating expenses showed that during 1990 and 1991—a period when the banks’ revenues and assets were decreasing—its data processing and professional services expenses increased because of the way in which payments for these services were structured in related long-term contracts. Furthermore, First City’s operating expenses were already high due to above-market long-term building leases negotiated before the recapitalization. The following are GAO’s comments on the Federal Deposit Insurance Corporation’s letter dated October 24, 1994. 1. We agree with FDIC that it received bridge bank authority in 1987, prior to the 1988 First City resolution, but did not receive cross-guarantee authority until later, in 1989. We do not dispute the FDIC scenario regarding what may have happened had it exercised its bridge bank authority on the two troubled First City banks in 1988 without having the authority to recover the losses from the other affiliated First City banks. Under the circumstances, FDIC alternatives for resolving the First City banks in 1988 were to either provide open bank assistance for the two troubled banks, or wait until they failed and consider the other resolution methods, including bridge banks. 2. We do not dispute the FDIC position that regulatory agencies were invited to all important meetings or that its Board of Directors was aware of the regulators’ opinions prior to making the 1988 open bank assistance decision. Our suggestion, however, is for FDIC to actively consult with its regulatory counterparts about key assumptions used in resolution alternatives recommended to the Board. We believe FDIC could take better advantage of its regulatory counterparts’ expertise through greater consultation when making economic projections. The Federal Reserve, for example, has developed considerable expertise in this area. In later discussions with the Executive Director, he agreed with us that such consultation with regulatory counterparts would be of value, although he noted that the accountability for the resolution decision, along with its assumptions, resides with FDIC. 3. In later discussions with the Executive Director, he told us that he does not disagree with our suggestion that FDIC include safeguards in open bank assistance agreements.
His only concern would be if the safeguards were so stringent as to discourage potential private investors, thereby potentially costing FDIC more to resolve a troubled bank. He agrees with us that FDIC’s responsibility is to protect the Bank Insurance Fund and FDIC should include safeguards in its assistance agreements. 4. We agree that FDIC could realistically enforce the assistance agreement conditions only if FDIC determined that the bank breached the contractual conditions. The Executive Director told us that he is receptive to including provisions in future FDIC assistance agreements that authorize primary regulators to take enforcement actions if they find noncompliance with safeguards contained in future FDIC assistance agreements. His primary concern involves the potential of discouraging private investors, although he also believes there may be some practical problems in agreeing on conditions that serve the interests of the acquirer, the insurer, and the primary regulator. The Executive Director understands that such provisions would enable the primary regulator to gather the necessary information and have the requisite authority to take the appropriate enforcement action to ensure compliance with the relevant provisions of the assistance agreement. 5. We agree that in 1992, earlier FDIC notification of OCC’s finding that First City-Houston was insolvent may not have provided FDIC with a broader range of resolution alternatives because First City management was still convinced that it could raise sufficient capital to make the bank financially viable. Consequently, while some potential investors or acquirers had performed due diligence relative to earlier First City self-rescue proposals, FDIC did not believe sufficient due diligence had been performed by potential acquirers or that First City management would permit those interested to perform due diligence. Therefore, FDIC believed bridge banks would provide for the most orderly resolution, which FDIC also determined to be the least costly resolution alternative available at that time. While earlier notification may not have affected the First City resolution, the Executive Director agreed with us that early notification of insolvency is critically important for FDIC to consider the full range of resolution alternatives. He also said that FDIC is in regular contact with primary regulators to ensure early warning of potential insolvencies to maximize its resolution options. 6. We agree that examiners typically value assets on a going concern basis, and resolvers value the assets on a net realizable value presuming that they will be liquidated. The Executive Director, however, agreed with us that in unique situations like First City—where a high percentage of the assets were just assessed by examiners and market interest in the troubled banks suggests the assets will be acquired by a healthy bank—FDIC could use the examiners’ assessments as a secondary source to check on the validity of its asset valuation review results. Such a use would be comparable to how FDIC generally uses its research model, the results of which the FDIC Board of Directors may consider in its deliberations in making its resolution decisions. The following are GAO’s comments on the Comptroller of the Currency’s letter dated January 5, 1995. 1. Our objectives in the First City study included a review of the processes used by regulators to assist, close, or otherwise resolve failing financial institutions. 
We reviewed the adequacy of those processes, including the bases for the related decisions made by federal regulators for First City. While we found some deficiencies in the processes as applied in the First City decisions and suggested opportunities to improve those processes from the First City experience, it was not our objective to evaluate the regulators’ decisions, nor did we take a position on them. 2. We agree with OCC that its basic standards for examination reporting and workpaper documentation are adequate based on this study and on other GAO studies of OCC’s examination process. In our report entitled Bank and Thrift Regulation: Improvements Needed in Examination Quality and Regulatory Structure (GAO/AFMD-93-15), dated February 16, 1993, we found that OCC generally documented its examination results adequately. Although our study of First City did not find adequate documentation of the examination results, OCC officials assured us that our concerns are being considered in their efforts to improve OCC examination processes, including the documentation of examination results. Pursuant to a congressional request, GAO reviewed the Federal Deposit Insurance Corporation's (FDIC) resolution of the First City Bancorporation of Texas, focusing on: (1) why FDIC decided to resolve the corporation by providing financial assistance instead of using other available resolution alternatives; and (2) the additional cost to the Bank Insurance Fund (BIF) as a result of the resolution.
GAO found that: (1) in 1988, FDIC provided $970 million in financial assistance to recapitalize and restructure the banking organization; (2) FDIC chose this method of resolution because it was less costly than liquidating the banks in the event of insolvency; (3) FDIC estimated BIF costs to liquidate the banks to be about $1.74 billion, as opposed to the $970 million estimated for open bank assistance; (4) FDIC did not opt to sell the banks because it did not believe that it would be able to find acceptable buyers with sufficient capital to restore the banks to long-term viability; (5) FDIC placed the banks under its control for about 3 months and operated them as bridge banks to facilitate the orderly resolution of the banks; (6) FDIC relied on its best business judgment in estimating BIF costs at the time of the banks' failures; (7) FDIC considered loss estimates that ranged from $300 million to over $1 billion in making its least-cost resolution determination; (8) the Office of the Comptroller of the Currency could have better supported its decision to close the largest bank by ensuring that its examination reports and underlying workpapers were clear, well documented, and self-explanatory; and (9) FDIC resolution officials could have used OCC examination findings as a means of verifying its valuation of the banks' assets. |
DHS’s acquisition management policy, commonly referred to as MD-102, as implemented by the DHS Instruction Manual, establishes two overarching categories of acquisitions: acquisitions of capital assets—such as information technology (IT) systems or aircraft—and acquisitions of services—such as those provided by security guards and emergency responders. For each acquisition type, acquisitions are further categorized as major or non-major based on expected cost. An acquisition’s major or non-major status determines who acts as the Acquisition Decision Authority (ADA), the individual responsible for management and oversight of the acquisition. DHS policy established the DHS Chief Acquisition Officer as the ADA for major acquisitions and the Component Acquisition Executive (CAE)—the senior acquisition official within the component—as the ADA for all non-major acquisitions. CAEs have overarching responsibility for the acquisition cost, schedule, risk, and system performance of the component’s acquisition portfolio and are responsible for ensuring that appropriate acquisition planning takes place. According to the DHS Instruction Manual, the CAEs are required to establish component-specific non-major acquisition policies and guidance that support the “spirit and intent” of department acquisition policies. CAEs establish unique processes for managing their components’ non-major acquisitions. Components that do not have CAE-approved policies for non-major acquisition management are required to follow MD-102 and the Instruction until those policies are developed. Figure 1 illustrates the decision authority and thresholds for major and non-major acquisitions. DHS acquisition policy establishes an acquisition lifecycle framework that includes a series of five acquisition decision events. These acquisition decision events provide the ADA an opportunity to assess whether an acquisition meets certain requirements and is ready to proceed through the lifecycle phases. Figure 2 depicts the five acquisition decision events and the four phases of the acquisition lifecycle. As part of an acquisition decision event for a major acquisition, the ADA reviews and approves key acquisition documents, such as an acquisition program baseline. An acquisition program baseline establishes an acquisition’s critical cost, schedule, and performance parameters. Baselines are useful management tools that can help leadership (1) understand the scope of an acquisition, (2) assess how well the acquisition is being executed, and (3) secure adequate funding. For non-major acquisitions, each CAE has flexibility in deciding how his or her component will apply the acquisition lifecycle framework and the types of documentation that will be required at each acquisition decision event. The Instruction grants the components flexibility when managing non-major acquisitions. Within DHS headquarters, PARM is the lead office responsible for overseeing the department’s acquisition processes. PARM has a direct management role with major acquisitions and less oversight of non-major acquisitions. For non-major acquisitions, PARM’s role is to ensure CAEs are overseeing their components’ acquisitions appropriately, and facilitate component efforts to report acquisition information using DHS’s INVEST system, among other things. The INVEST system is a central repository for data on DHS acquisitions and investments, such as budget, schedule, and performance information.
INVEST data are used to oversee both major and non-major acquisitions and to satisfy internal and external reporting requirements. DHS’s component agencies lack the information needed to effectively oversee their non-major acquisitions because they cannot confidently identify all of them. They identified over $6 billion in non-major acquisitions; however, we found that 8 of the 11 components could not identify all of their non-major acquisitions and that the data 9 components provided for these acquisitions were unreliable. Several officials indicated that their focus had been on major acquisitions historically, and they had not turned their attention to non-major acquisitions until more recently. Many component officials said they were still in the process of identifying all of their non-major acquisitions, but it was unclear when they would complete these efforts. DHS headquarters had not established time frames for components to do so, which may have resulted in components losing traction in their efforts. Federal internal control standards establish that management should obtain relevant data from reliable sources in a timely manner. Another key challenge involves the use of baselines, which establish a program’s critical cost, schedule, and performance parameters. Component officials identified 38 non-major acquisitions that were active at the start of fiscal year 2017 (as opposed to acquisitions that have been delivered to end users and are considered to be non-active). We found that most of the active non-major acquisitions (23 of 38) did not have approved baselines, and that the value of the acquisitions without baselines constituted nearly half of the total value of the active acquisitions. At the beginning of fiscal year 2017, some components did not require approved baselines. However, in response to our preliminary findings, in February 2017, DHS required that component leadership approve baselines for non-major acquisitions, which should help components oversee them more effectively. Component officials identified 38 non-major acquisitions, valued at greater than $6 billion, that were active as of the start of fiscal year 2017. We define acquisitions to be active when they have entered the obtain phase of the acquisition lifecycle and have not yet achieved FOC. Of the reported 38 active acquisitions, 36 were capital asset acquisitions with a total value exceeding $6 billion. The remaining two active acquisitions were services acquisitions with combined annual expenditures of $19 million in 2016. Across DHS, components identified a total of 255 non-major acquisitions in all phases of the acquisition lifecycle. DHS’s non-major acquisitions encompass diverse systems and capabilities that address critical mission needs, including immigration services, law enforcement, and disaster response. For example, non-major acquisitions include USCG response boats that perform law enforcement and search and rescue missions; CBP’s Mobile Video Surveillance System, which identifies and detects illegal incursions into areas that have gaps in coverage from other surveillance systems; and DNDO’s Human Portable Tripwire, a small, wearable system that can detect radiological threats. Figure 3 depicts these three acquisitions. Acquisition management efforts have the greatest impact on active acquisitions.
When an acquisition is considered active, managers develop, test, and evaluate the extent to which the acquired capability can meet DHS mission needs, and adhere to critical cost, schedule, and performance parameters. By comparison, acquisition management activities have less impact on acquisitions that have reached FOC because these acquisitions have passed key decision events in the acquisition lifecycle. Meanwhile, we define acquisitions that are very early in the acquisition life cycle, i.e., in the need or analyze/select phases, to be pre-active. Pre-active acquisitions do not yet have critical cost, schedule, and performance parameters because component officials have not yet agreed on what they want, when they want it, or how much they want to spend. Figure 4 depicts the number of non-major acquisitions component officials identified by acquisition lifecycle phase. About 74 percent of the non-major acquisitions that component officials identified (189 of 255) had reached FOC at the start of fiscal year 2017, and components were operating and maintaining them until disposal. Although officials indicated these acquisitions were no longer active and had already passed all of their major acquisition decision events, it is still important for DHS to understand the scope of these acquisitions because up to 70 percent of an acquisition’s total life cycle costs can occur after FOC. About 11 percent of the non-major acquisitions that component officials identified (28 of 255) are in the pre-active phases of the acquisition lifecycle. During these phases, program managers identify a mission need that justifies investment in a new acquisition and evaluate alternative options to meet that need. DHS component officials identified 255 non-major acquisitions across DHS, but officials from most of the components (8 of 11) also reported to us that they were not confident that they had accounted for all of their non-major acquisitions. However, these officials also told us they were working to improve their ability to identify their non-major acquisitions going forward. In the view of some officials, the problems lie primarily in tracking the non-active acquisitions, rather than those that are still active. Even when component officials could identify these acquisitions, we found that the data they provided were often unreliable. The data reliability issues often involved the type of information included in acquisition baselines, specifically cost, schedule, and capability information. We spoke with officials from 11 components to get their perspective on whether they were able to accurately identify the full scope of their non-major acquisitions. Officials from 8 of the 11 DHS components told us they could not identify all of their non-major acquisitions. These officials were able to provide data on some acquisitions, but they were not confident that they had identified all of them. Officials from 3 of the 8 components stated that they were more confident in their ability to identify all active acquisitions but were less sure that the full scope of post-FOC acquisitions had been identified. Component officials offered two reasons for the lack of confidence in the data: 1. Historically, managing non-major acquisitions has been a relatively low priority when compared to managing major acquisitions or other component activities. 2. The components lack effective procedures for identifying those acquisitions.
Officials from all of the components we reviewed indicated that they are working to improve their management of non-major acquisitions for a variety of reasons, including to improve their ability to monitor acquisition cost growth and other acquisition performance metrics. Competing priorities: Officials from 6 components indicated that managing non-major acquisitions has historically been a lower priority than managing major acquisitions or other component activities. For example, CBP officials reported that since 2011, CBP’s CAE staff has focused on bringing major acquisitions into compliance with DHS acquisition policy. It took CBP until 2016 to baseline all of its Level 1 acquisitions. In 2014, CBP officials turned their attention to non-major acquisitions. They began to identify their non-major acquisitions, and worked to understand their purpose, status, and the CBP offices they support. According to CBP officials, these efforts are ongoing. Similarly, following the issuance of MD-102 in 2008, FEMA focused on managing major acquisitions before placing an emphasis on non-major acquisition management in 2015. According to component officials, FEMA is now developing a more robust management process for non-major acquisitions. They said that the first step toward increasing the management rigor for these acquisitions is to accurately identify them. Ineffective procedures: Officials from 2 components stated that they lack effective procedures for identifying non-major acquisitions. USCG and USCIS officials acknowledged that their procedures for identifying these acquisitions need improvement. Specifically, a USCG official told us that USCG procedures do not always successfully distinguish IT acquisitions from non-acquisition activities. According to the official, many non-major IT acquisition activities may be occurring, but the USCG acquisition support staff may not be aware of them. USCG officials said that the component has approximately 400-500 IT investments to assess to determine whether they should be identified as acquisitions. As a result, USCG may be underreporting the dollar value of non-major acquisitions. According to USCG officials, the process of identifying all such acquisitions is underway. Additionally, a senior USCIS official stated that his component’s method for identifying its smaller non-major acquisitions needs improvement. USCIS combines its smallest acquisitions—those valued at less than $50 million—into a single acquisition, aligns each combined acquisition to specific offices within USCIS, such as the Office of Information Technology, and tracks each combined acquisition as a single acquisition. Using this approach, USCIS may be underreporting the number of non-major acquisitions, as multiple acquisitions may be counted as one, and the CAE may be missing opportunities to influence the acquisitions at key decision events. To improve tracking of these acquisitions, the official told us that USCIS is evaluating each individual acquisition to determine if it is active, and if it should be managed as a stand-alone acquisition. In addition, according to USCIS officials, USCIS has recently revised its non-major acquisition policy, which will change the acquisition tracking requirement. This revision is expected to be finalized in 2017. DHS component officials told us that they were working to improve their ability to identify non-major acquisitions.
For example, officials from 4 components stated that they were using new guidance provided in a 2016 update to the DHS Instruction Manual to more consistently categorize all acquisitions as (1) capital acquisitions, (2) service acquisitions, or (3) simple procurements. The guidance includes a series of yes-or-no questions that acquisition officials answer to categorize a particular acquisition. Component officials said these categorizations are helping them identify all of the non-major acquisitions in their respective portfolios. For example, FEMA officials said they have used the new guidance to determine whether acquisitions considered procurements should actually be managed as non-major acquisitions. Officials from the other components reported efforts such as updating component policies and performing ongoing reviews to identify which activities are acquisitions. However, it was unclear when these various efforts to identify the full scope of all non-major acquisitions would be complete because no timelines had been established by DHS headquarters, which may have resulted in components losing traction in their efforts. Federal internal controls standards establish that management should obtain relevant data from reliable sources in a timely manner. Until components have identified all of their non-major acquisitions, they cannot effectively manage their acquisition portfolios or apply the level and type of oversight that complies with department policy. Having an established time frame should help ensure that the actions underway are seen to completion. In addition to the components’ inability to identify all non-major acquisitions, our analysis and information received from component officials identified a number of data reliability issues. Specifically, our analysis found that the data provided by 8 of the 11 components were not complete. For example, several life cycle cost estimates did not include government personnel costs. In addition, most of the components that reported active acquisitions could not provide approved baselines supporting the data they provided, in part because some components did not require approved baselines. Officials from 6 components also acknowledged they have issues with data reliability, specifically with accuracy and completeness, which could hinder their CAEs’ ability to manage non-major acquisitions in accordance with DHS acquisition policy. Our analysis of all non-major acquisition data provided by the DHS components found that the data were complete for over 60 percent of the acquisitions reported and that data for active acquisitions had fewer issues with incomplete data than acquisitions that were post-FOC. In responding to our requests for information, 5 components did not provide a complete FOC date or cost information for at least one of their active non-major acquisitions. Although the components have different requirements for documenting such information, key acquisition management best practices recommend that all acquisitions have well defined requirements and establish realistic cost and schedule estimates. Table 1 describes the data reliability issues that we identified in the component-reported non-major acquisition data. Cost, schedule, and capability information are basic acquisition data that would be included in an acquisition baseline, and we have previously found that these types of information help senior leadership manage acquisitions more effectively. 
However, most of the components that reported active acquisitions could not provide approved baselines for all of their non-major acquisitions since not all of them were required to provide approved baselines. In addition to the completeness issues we identified, officials from 6 components acknowledged that they have issues with data reliability, specifically with accuracy and completeness. Accuracy: Officials from 5 components reported issues with the accuracy of the non-major acquisition data they provided. Data accuracy refers to the extent that recorded data reflect the actual underlying information. For example, DNDO officials stated they did not have full confidence in 20 percent of the acquisition data they reported because most DNDO non-major acquisitions were not required to have program baselines. Instead, DNDO is using rough order of magnitude cost estimates for these acquisitions, which they acknowledged are inherently inaccurate. Completeness: Officials from 3 of the 6 components also told us that their non-major acquisitions data were incomplete. Data completeness refers to the extent that relevant records are present—an issue addressed in the scope discussion above—and that the fields in each record are populated appropriately. For example, FEMA officials said that they had limited cost value data for non-major acquisitions because many of these acquisitions have not had formal life cycle cost estimates, and that improvements to their non-major acquisitions data are required in order to provide such estimates. Table 2 lists the components we reviewed, whether we identified data reliability issues—specifically incomplete data—in the non-major acquisition data reported by the components, and whether component officials self-identified data reliability issues in that data. For 9 components, we identified data reliability issues through our analysis, component officials identified data reliability issues themselves, or both. In accordance with the CAE responsibilities outlined in the DHS Instruction Manual, CAEs have developed a variety of processes to maintain and report data on non-major acquisitions. For example, USCG officials reported using the INVEST system and three of their own systems to track and report data on non-major acquisitions. They told us their non-major acquisition data and corresponding documentation are regularly compiled and CAE staff review them every month. Meanwhile, CBP officials reported using a less centralized approach. CBP officials track non-major acquisition data in two department-level systems and multiple component-level systems, including several Microsoft Excel spreadsheets and Microsoft Access databases. CBP officials reported that their component lacks a systematic data review process, and that they had to manually aggregate their data to respond to our queries. Additionally, components’ policies for managing non-major acquisitions vary. For example, at the start of fiscal year 2017, 7 components had policies in place requiring component leadership to approve program baselines for active non-major acquisitions. However, the 4 other components did not. Our prior work and DHS acquisition policy emphasize the importance of the critical cost, schedule, and performance parameters that a baseline provides. As our work showed in 2012, the baseline is a critical tool for managing an acquisition.
First, it is an agreement between program-, component-, and department-level officials establishing what the capabilities being acquired should cost, when they should be delivered, and how they should perform. DHS acquisition policy for major acquisitions requires that the ADA approve a program’s baseline before it initiates design and development activities, and this baseline then serves as a performance management tool to monitor and measure an acquisition’s execution. Second, baselines can help acquisition leaders secure funding needed for programs to meet critical cost, schedule, and performance parameters. If a program is not fully funded, a baseline can help leaders identify the trade-offs needed to fund the program with existing resources. Our prior work has demonstrated that resources, including time and funding, should be consistent with performance requirements. For major programs, the ADA confirms the program is fully resourced through the next 5 years when the ADA approves the program’s baseline. At the start of fiscal year 2017, the majority of DHS’s active non-major acquisitions did not have component-approved baselines, though 3 components—NPPD, USCG, and USSS—did have baselines for all of the active non-major acquisitions they identified. Across the 11 components, over half of the reported active non-major acquisitions (23 of 38) did not have approved baselines, including both of the active services acquisitions. The baselines provided by the components for the remaining 15 acquisitions varied in length and detail, but each included the cost, schedule, and performance parameters needed to monitor a program over time. Some components provided other types of acquisition documentation, such as an Operational Requirements Document or Test and Evaluation Master Plan. However, in each case, we determined that the documents submitted did not effectively define the acquisitions by linking their cost, schedule, and performance parameters. As such, we did not consider these documents to represent a baseline. Figure 5 shows the number of active non-major acquisitions and the number of component-approved baselines at each component. We found that the 21 capital acquisitions without CAE-approved baselines constituted a 47 percent share ($3.0 billion) of the approximately $6.4 billion components reported as the total value of their non-major capital acquisitions. Figure 6 shows the value of non-major acquisitions with CAE-approved baselines and the value of those without CAE-approved baselines. Component officials offered a variety of reasons why their CAEs had not approved baselines for non-major acquisitions. For example, they said that (1) the baselines for these acquisitions have been a relatively low priority and were therefore still pending development or approval; (2) their components chose not to require program baselines for non-major acquisitions; or (3) while the component requires baselines for future non-major acquisitions, the acquisitions were initiated prior to the establishment of that requirement. However, this situation is likely to change, given that, in February 2017, DHS issued a new policy specifically focused on managing non-major acquisitions. In response to our preliminary findings, during the course of our audit, the Under Secretary for Management included in this policy a requirement that component leadership approve baselines for these acquisitions. This new requirement should help components execute their acquisitions more effectively.
Specifically, as identified above, CAEs are likely to: (1) accurately understand the size of their portfolio; (2) have adequate knowledge about execution against cost, schedule and performance parameters when making acquisition decisions; and (3) secure the funding the acquisition needs to meet those parameters. Establishing a program baseline need not be a significant program burden. Baselines should reflect basic, existing acquisition information in a format that is effective for component management. DHS headquarters officials have increased their focus on non-major acquisitions, and a new policy may help DHS’s component agencies establish effective management controls, particularly by helping ensure the new baseline policy is implemented. In 2015, DHS headquarters officials established an annual review process for non-major acquisitions with life cycle cost estimates greater than $50 million, and they now plan to use this process to ensure that components are assessing acquisition performance against approved cost, schedule and performance baselines. DHS leadership has also added new reporting requirements for these acquisitions, and, in response, all components have started entering non-major acquisition data into INVEST, DHS’s central acquisition information system. The data component officials entered into INVEST during 2016 were unreliable, but headquarters officials are taking steps to improve the reliability of this data. Further, DHS headquarters officials have defined roles and responsibilities for managing non-major acquisitions, hired an oversight official specifically responsible for these acquisitions, and elevated selected non-major acquisitions for department-level oversight. MD-102 establishes that the Executive Director of PARM should ensure CAEs are overseeing their components’ non-major acquisitions appropriately. Specifically, the policy states that the Executive Director shall review CAE governance activities and monitor the performance of non-major acquisitions. To this end, PARM has implemented a series of annual reviews of components’ non-major acquisitions with a life cycle cost greater than $50 million that have not yet achieved FOC. According to our analysis, the $50 million threshold provides PARM insight into the bulk of the resources components plan to allot to these acquisitions. Components valued 21 of their 38 reported active non-major acquisitions at more than $50 million, and these acquisitions account for approximately 95 percent—$6.1 billion—of the roughly $6.4 billion components reported as the total value of their active non-major acquisitions. PARM initiated these reviews in 2015, and, during the first round, officials said they reviewed the components’ non-major acquisition policies in an effort to determine whether these policies aligned with departmental guidance. PARM officials also said the components provided updates on the acquisitions’ costs, key milestones, and capabilities. For the second series of annual reviews in 2016, the DHS Under Secretary for Management issued a memorandum intended to increase the rigor of PARM’s non-major acquisition reviews. The memorandum stated that the components must provide PARM “evidence of sufficient acquisition documentation as tailored by the CAE” and report cost, schedule, and performance metrics with associated milestones. 
However, the memorandum did not identify (1) any minimum requirements for sufficient acquisition documentation; or (2) the specific cost, schedule, and performance metrics the components should report to PARM. To clarify the requirements, later in 2016, PARM developed more detailed instructions for the 2016 annual review. PARM instructed the components to identify whether they had an overarching policy or set of policies that were specific to non-major acquisitions, and whether these policies aligned with departmental guidance. PARM instructed the components to report the annual costs associated with each acquisition, provide a high-level schedule, include a description of the capability or service being acquired, and discuss issues such as the CAE’s confidence in the program’s meeting the metrics in the program baseline, if a baseline was indeed in place. Additionally, in response to our preliminary findings, a new policy that DHS finalized in February 2017 includes a requirement that component leadership approve baselines for non-major acquisitions that have not yet achieved FOC, and states that PARM’s Executive Director will leverage its reviews to assess whether CAEs are (a) baselining these acquisitions in accordance with the requirement; and (b) tracking the acquisitions’ progress against cost, schedule, and performance parameters from approved baselines. These reviews would help PARM’s Executive Director determine whether CAEs are overseeing their non-major acquisitions in accordance with MD-102. Federal internal control standards state that management should monitor program results and evaluate these results against a previously established baseline. As part of its efforts to increase oversight of non-major acquisitions, DHS leadership now requires components to enter data into the INVEST system for all non-major acquisitions valued at greater than $50 million that have not yet reached FOC. INVEST, DHS’s central system for acquisition information, is used by program-, component-, and department-level officials to enter and obtain information for monthly reporting and monitoring. In the past, components were required to enter non-major acquisition data into INVEST only for IT acquisitions valued at greater than $50 million as part of the DHS IT Capital Planning and Investment Control process. In March 2016, DHS’s Under Secretary for Management issued a memo requiring components to enter data into INVEST for both IT and non-IT non-major acquisitions valued at greater than $50 million. However, we found that the data component officials had entered into INVEST for non-major acquisitions through 2016 were unreliable. Only about half of the acquisitions eligible for entry into INVEST had been entered into the system. As of January 2017, component officials had entered data into INVEST for 10 of the 21 programs that were eligible for entry into the system—i.e., active non-major acquisitions valued at greater than $50 million. For the majority of the non-major acquisitions in INVEST, we found few source documents—particularly baselines—that CAEs could use to validate the data. A December 2016 requirement states that CAEs must validate the accuracy of these data in INVEST twice a year, mirroring the current certification requirement for major acquisitions. This recent requirement, combined with the February 2017 baselining requirement, will likely improve the reliability of the data.
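The reporting requirement and the eligibility count described above can be expressed as a simple filter: non-major acquisitions valued at greater than $50 million that have not yet reached FOC, together with a tally of how many of those records are actually in the system. The sketch below is a hypothetical illustration only; the field names and sample records are invented and are not drawn from INVEST.

```python
# Hypothetical illustration of the INVEST entry rule and a simple coverage tally.
# Field names and sample records are invented for this sketch; they are not INVEST fields.

THRESHOLD = 50_000_000  # dollars

def invest_entry_required(acq):
    """Entry is required for non-major acquisitions valued at greater than
    $50 million that have not yet reached full operational capability (FOC)."""
    return acq["value"] > THRESHOLD and not acq["reached_foc"]

portfolio = [
    {"name": "Acquisition A", "value": 120_000_000, "reached_foc": False, "in_invest": True},
    {"name": "Acquisition B", "value": 75_000_000,  "reached_foc": False, "in_invest": False},
    {"name": "Acquisition C", "value": 200_000_000, "reached_foc": True,  "in_invest": False},
    {"name": "Acquisition D", "value": 30_000_000,  "reached_foc": False, "in_invest": False},
]

eligible = [a for a in portfolio if invest_entry_required(a)]
entered = [a for a in eligible if a["in_invest"]]
print(f"{len(entered)} of {len(eligible)} eligible acquisitions entered into INVEST")  # 1 of 2
```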
Additionally, component officials have reported challenges when trying to enter non-major acquisition data into INVEST. PARM did not initially provide specific guidance on entering the data, and officials from some components said they were confused about the amount and type of information that should be entered into INVEST. In response to this confusion, PARM officials created a new guidebook in October 2016 and issued additional instructions in a December 2016 memo intended to clarify the INVEST data-entry process. In addition to PARM’s implementation of annual non-major acquisition reviews and the requirement to enter non-major acquisition data into INVEST, DHS headquarters has taken several additional actions to help improve the components’ management of these acquisitions. These actions include the following: Defining roles and responsibilities for managing non-major acquisitions. In February 2017, the DHS Under Secretary for Management more clearly defined CAEs’ roles and responsibilities for managing non-major acquisitions. PARM also issued new non-major acquisition management guidance specific to the DHS Management Directorate in February 2017. The new policy consolidates the Management Directorate’s CAE authority by designating the Deputy Director of PARM the CAE for all of the Management Directorate’s non-major acquisitions. Previously, offices within the Management Directorate, such as the OCIO and the Office of the Chief Readiness Support Officer, had their own CAEs. One such office in the scope of our review—OCIO—did not use the acquisition lifecycle process for its acquisition management. This new policy could help to ensure the various offices take a consistent approach. Hiring an oversight official specifically responsible for non-major acquisitions. In light of the billions of dollars the department is spending on these acquisitions, in May 2016, PARM hired an official to focus solely on DHS’s non-major acquisitions. This official’s responsibilities and goals include working with the DHS components to develop and improve their policies and processes for managing non-major acquisitions, in part by ensuring that they align with departmental guidance. Formalizing the process for identifying/categorizing acquisitions. DHS’s Master Acquisition Oversight List identifies the department’s acquisitions and categorizes them by component, major or non-major status, and acquisition type, in order to help DHS’s acquisition managers apply the appropriate oversight requirements. In 2015, PARM established a DHS Master Acquisition Oversight List Governance Board. This body reviews and approves major and non-major acquisition additions, removals, and other updates to the department’s Master Acquisition Oversight List. The board members consist of representatives from the department’s lines of business, including the Office of the Chief Financial Officer, OCIO, and the Office of the Chief Procurement Officer. Elevating non-major acquisitions for department-level oversight. Finally, in some circumstances, the DHS Under Secretary for Management has elevated selected non-major acquisitions to major acquisition status, and, as a result, these acquisitions have received greater department oversight. For example, in April 2016, the Under Secretary for Management elevated CBP’s Remote Video Surveillance System to major acquisition status in response to an expansion in the acquisition’s scope that increased its value above the non-major acquisition dollar threshold.
Similarly, at FEMA’s request, the Under Secretary for Management elevated FEMA’s Integrated Public Alert and Warning System acquisition to major status because of its complexity, cross-component impact, and high visibility outside of the department. DHS officials stated that the Under Secretary for Management may elevate non-major acquisitions for other reasons, including external events such as congressional and media interest, if a program’s importance to DHS’s strategic and performance plans is disproportionate to its size, and if an acquisition has significant program or policy implications. These actions reflect DHS leadership’s increased focus on non-major acquisitions as the department continues to work to mature its acquisition management processes across all of its component agencies. Over the past 8 years, DHS leadership has taken several steps to mature its acquisition management processes. More recently, DHS leadership has increasingly focused on its non-major acquisitions, which is fitting, given the billions of dollars going to these programs. Primary responsibility for managing these acquisitions rests, appropriately, with component officials. However, the fact that officials from few components could confidently identify the full scope of their non-major acquisitions is problematic. Understandably, the focus to date has been on active acquisitions, but it is also important that components understand the extent of their non-major acquisitions that have been fielded but are still receiving taxpayer funds to operate. Without an established time frame for components to identify the full picture of their non-major acquisitions— particularly given their acknowledged resource constraints and competing priorities—progress may not have been sustained. To improve the management of DHS’s non-major acquisitions, we recommended that the Secretary of Homeland Security direct the Under Secretary for Management to establish a time frame for components to identify all of their non-major acquisitions. We provided a draft of this product to DHS for comment. In its written comments, reproduced in appendix II, DHS concurred with our recommendation and indicated that the Under Secretary for Management has directed Component Acquisition Executives to identify all Level 3 acquisitions across DHS by no later than October 31, 2017. We reviewed the supporting documentation provided by DHS, reproduced in appendix III, and determined that this direction addressed the recommendation. DHS also provided technical comments that we addressed as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or mackinm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this audit were designed to examine the Department of Homeland Security’s (DHS) management of non-major acquisitions. This review addresses both the components’ management of non-major acquisitions as well as the department’s oversight role. 
Specifically, this report assesses (1) the extent to which component leadership is effectively overseeing non-major acquisitions and (2) the extent to which DHS headquarters has helped components establish effective management controls for non-major acquisitions. To identify the extent to which component leadership is effectively overseeing non-major acquisitions, we attempted to identify all non-major acquisitions within DHS. We asked officials from DHS’s Office of Program Accountability and Risk Management (PARM), which is responsible for overseeing the department’s acquisitions, to identify the components that manage non-major acquisitions. PARM identified 14 DHS components in response. We requested data from the 14 DHS components and obtained non-major acquisition data from 11 components. Officials from the 3 remaining components stated that they did not manage non-major acquisitions. For example, officials from the DHS Science and Technology Directorate stated that their component does not identify any acquisition valued at less than $50 million as a non-major acquisition and, based on that definition, the component does not have any non-major acquisitions to report. For this reason, we removed the Science and Technology Directorate from our scope. The 11 DHS component offices and agencies we reviewed are: Customs and Border Protection, Domestic Nuclear Detection Office, Federal Emergency Management Agency, Federal Law Enforcement Training Centers, Immigration and Customs Enforcement, National Protection and Programs Directorate, Office of the Chief Information Officer, Transportation Security Administration, U.S. Citizenship and Immigration Services, U.S. Coast Guard, and U.S. Secret Service. We developed a data collection instrument, sent it to each component, and requested the acquisition name, capability description, total acquisition cost, acquisition type, most recent acquisition decision events, full operational capability (FOC) date, and acquisition lifecycle phase for all of the component’s non-major acquisitions. We used a data collection instrument to obtain these data because preliminary discussions with DHS and component officials indicated we should work directly with the components to collect this information. To assess the reliability of the data provided by components, we reviewed the data to identify outliers, missing data, and other potential errors, and compared the data to source documents when available. In interviews and via e-mail correspondence, we provided component officials an opportunity to review, discuss, and, where applicable, correct any completeness and accuracy issues. In addition, we requested that each component update its non-major acquisition data to be accurate as of October 1, 2016. We also requested and reviewed information on how the components enter, store, access, update, and review non-major acquisition data, as well as component officials’ comments on the reliability of the data they provided. Based on this assessment, we determined that the population of current non-major acquisitions and their associated acquisition costs could not be reliably determined. However, we determined that the data were sufficiently reliable to identify the minimum number of non-major acquisitions, and the general magnitude of the minimum acquisition costs associated with active non-major acquisitions.
For those acquisitions that components could identify, we found that the data components provided for these non-major acquisitions were generally unreliable as further discussed in the report. In addition to the non-major acquisitions at the components that PARM identified, with the assistance of component officials we also identified one non-major acquisition at the DHS Office of the Chief Readiness Support Officer and one non-major acquisition at the DHS Office of the Chief Security Officer. We included these acquisitions in our scope when working to identify the universe of non-major acquisitions at DHS. As a final quality assurance step, we returned the collected and updated data collection instruments to the respective components, and requested officials verify and, when applicable, correct the data. To designate each acquisition active, pre-active, or post-FOC, we reviewed the FOC and acquisition phase data that the component officials provided. We designated an acquisition active if it had reached Acquisition Decision Event 2A but had not reached FOC by October 1, 2016. If any increment, project, or segment of an acquisition was active, we designated the entire acquisition active. We designated acquisitions with an FOC date on or before October 1, 2016 post-FOC, and those that had not yet reached Acquisition Decision Event 2A by October 1, 2016 pre-active. For acquisitions with conflicting data, we confirmed our designation with component officials. In addition, we collected acquisition cost information for each acquisition, specifically, life cycle cost estimates for capital asset acquisitions and annual expenditure data for services acquisitions. For active acquisitions that did not have final cost estimates in place, we accepted and reported the preliminary information that was available, such as rough order of magnitude estimates or life cycle cost estimates with lower confidence levels. To understand the processes components use to manage non-major acquisitions and assess the extent to which components were consistently baselining these acquisitions, we reviewed draft and final DHS acquisition policy, and component-level non-major acquisition policies and guidance. We also requested and reviewed acquisition decision memos and component-approved acquisition program baselines—or any equivalent documents containing cost, schedule, and performance parameters—for all acquisitions that were active as of October 1, 2016. We then reviewed each baseline document to determine whether it actually contained cost, schedule, and performance parameters in accordance with key acquisition management practices we established in previous reports. We also interviewed department and component officials to expand our understanding of component management processes, and determine why the components were or were not approving baselines for non-major acquisitions. To identify the extent to which DHS headquarters has helped components establish effective management controls for non-major acquisitions, we reviewed draft and final department acquisition policy, guidance, and memos to identify how PARM and other headquarters entities contribute to non-major acquisition management. We also reviewed this documentation to identify existing and planned oversight mechanisms for non-major acquisitions. 
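To make the active, pre-active, and post-FOC designation rule described above concrete, the following is a minimal illustrative sketch of how such a rule could be applied to the collected data. It is not GAO’s actual tooling; the field names and the handling of conflicting records are assumptions made only for illustration.

```python
# Illustrative sketch of the designation rule described in this appendix.
# Field names and the "conflicting" fallback are hypothetical assumptions.
from datetime import date
from typing import List, Optional, Tuple

CUTOFF = date(2016, 10, 1)  # status date used for all designations

def designate_increment(reached_ade_2a: bool, foc_date: Optional[date]) -> str:
    """Classify one increment, project, or segment of an acquisition."""
    if foc_date is not None and foc_date <= CUTOFF:
        return "post-FOC"      # reached full operational capability by the cutoff
    if reached_ade_2a:
        return "active"        # reached Acquisition Decision Event 2A but not FOC
    return "pre-active"        # had not yet reached Acquisition Decision Event 2A

def designate_acquisition(increments: List[Tuple[bool, Optional[date]]]) -> str:
    """If any increment is active, the entire acquisition is designated active."""
    designations = {designate_increment(ade, foc) for ade, foc in increments}
    if "active" in designations:
        return "active"
    if designations == {"post-FOC"}:
        return "post-FOC"
    if designations == {"pre-active"}:
        return "pre-active"
    return "conflicting"       # conflicting data were confirmed with component officials
```

Under this rule, for example, an acquisition with one post-FOC segment and one segment that had passed Acquisition Decision Event 2A but not yet reached FOC would be designated active.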
We requested information from officials at nine DHS headquarters entities and the 11 DHS components in our scope to identify how, if at all, DHS headquarters entities other than PARM monitor or interact with non-major acquisitions during the acquisition process. We interviewed officials from PARM to discuss PARM’s annual review process, DHS’s Master Acquisition Oversight List, and other efforts to address non-major acquisitions. Finally, we interviewed officials from PARM and the components to better understand ongoing efforts to enter non-major acquisition data into DHS’s Investment Evaluation, Submission, & Tracking (INVEST) system, which is the department’s central system for information on its acquisitions. We also interviewed officials to better understand the circumstances under which DHS headquarters elevates non-major acquisitions to major acquisition status, increasing headquarters oversight. To assess the reliability of the data in the INVEST system, we traced non- major acquisition data from INVEST to available source documents. We collected INVEST reports for active non-major acquisitions and compared cost information in those reports to available source documents, such as acquisition program baselines and life cycle cost estimates. To assess relevant internal controls, we reviewed the DHS INVEST User Guide and Training Manual and identified the purpose and structure of the INVEST system. We subsequently evaluated INVEST reports and identified the forms each component used to enter data into INVEST, as well as the level of completeness of the forms and any system-generated errors. Finally, we interviewed component and PARM officials to understand what, if any, internal controls headquarters and components were using for the non-major acquisition data before, during, and after entering that data into INVEST. We found the data in INVEST not to be sufficiently reliable for our purposes of reporting on the universe of non-major acquisitions. We conducted this performance audit from February 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact listed above, Nathan Tranquilli (Assistant Director), Katherine Trimble (Assistant Director), Betsy Gregory-Hosler (Analyst-in-Charge), Andrew Fisher, Javier Irizarry, Kirsten Leikem, and John Rastler made significant contributions to this report. Peter Anderson, Christopher Businsky, Lorraine Ettaro, and Sylvia Schatz also made key contributions to this report. | Each year, DHS acquires a wide array of systems intended to help its component agencies execute their many critical missions. GAO has previously reported that DHS's process for managing its major acquisitions is maturing. However, non-major acquisitions (generally those with cost estimates of less than $300 million) are managed by DHS's component agencies and have not received as much oversight. Recently GAO reported on a non-major acquisition that was executed poorly, limiting DHS's ability to address human capital weaknesses. GAO was asked to examine DHS's management of non-major acquisitions. 
This report assesses: (1) the extent to which component leadership is effectively overseeing non-major acquisitions; and (2) the extent to which DHS headquarters has helped components establish effective management controls for non-major acquisitions. GAO reviewed policy and component guidance, and interviewed officials from DHS headquarters and 11 components responsible for managing non-major acquisitions. GAO also traced non-major acquisition data from DHS's central acquisition data system to source documents to assess data reliability. The Department of Homeland Security's (DHS) component agencies—such as the U.S. Coast Guard and Customs and Border Protection—lack the information needed to effectively oversee their non-major acquisitions because they cannot confidently identify all of them. They identified over $6 billion in non-major acquisitions; however, GAO found 8 of the 11 components could not identify them all. Several officials indicated that their focus had been on major acquisitions historically, and they had not turned their attention to non-major acquisitions until more recently. Many component officials said they were still in the process of identifying all of these acquisitions, but it was unclear when they would complete these efforts. DHS headquarters had not established time frames for components to do so, which may have resulted in components losing traction in their efforts. Federal internal controls standards establish that management should obtain relevant data from reliable sources in a timely manner. Another key challenge involves the use of baselines, which establish a program's critical cost, schedule, and performance parameters. Component officials identified 38 non-major acquisitions that were active at the start of fiscal year 2017 (as opposed to acquisitions that have been delivered to end users and are considered to be non-active). GAO found that most of the active non-major acquisitions (23 of 38) did not have approved baselines, and that the value of the acquisitions without baselines constituted nearly half of the total value of the active acquisitions. At the beginning of fiscal year 2017, some components did not require approved baselines. However, in response to GAO's preliminary findings, in February 2017, DHS required component leadership to approve baselines for non-major acquisitions, which should help components oversee them more effectively. DHS headquarters is taking steps to help components establish more effective management controls for non-major acquisitions. In 2015, DHS headquarters officials established a process to review them annually. In February 2017, in response to GAO's preliminary findings, DHS established that components shall use the annual reviews to assess the extent to which non-major acquisitions are on track to meet cost, schedule, and performance parameters from approved baselines. DHS leadership has also established ongoing reporting requirements for non-major acquisitions. All components have started entering non-major acquisition data into DHS's central acquisition information system, and headquarters officials are taking steps to improve the reliability of these data. GAO recommended that DHS headquarters establish time frames for components to identify all non-major acquisitions. DHS concurred with GAO's recommendation and directed components to identify all non-major acquisitions by October 31, 2017. |
The purpose of the PRA is to (1) minimize the federal paperwork burden for individuals, small businesses, state and local governments, and other persons; (2) minimize the cost to the federal government of collecting, maintaining, using, and disseminating information; and (3) maximize the usefulness of information collected by the federal government. The PRA also aims to provide for timely and equitable dissemination of federal information; improve the quality and use of information to increase government accountability at a minimized cost; and manage information technology to improve performance and reduce burden, while improving the responsibility and accountability of OMB and the federal agencies to Congress and the public. To achieve these purposes, the PRA prohibits federal agencies from conducting or sponsoring an information collection unless they have prior approval from OMB. The PRA requires that information collections be approved by OMB when facts or opinions are solicited from 10 or more people. Under the law, OMB is required to determine that an agency information collection is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility. The PRA requires every agency to establish a process for its chief information officer (CIO) to review program offices’ proposed information collections, such as certifying that each proposed collection complies with the PRA, including ensuring that it is not unnecessarily duplicative. The agency is to provide two public notice periods—an initial 60-day notice period and a 30-day notice period after the information collection is submitted to OMB for approval. Agencies are responsible for consulting with members of the public and other affected agencies to solicit comments on, among other things, ways to minimize the burden on respondents, including through the use of automated collection techniques or other forms of information technology. According to an OMB official, this could include asking for comments on a proposal to use administrative data instead of survey data. Following satisfaction of these requirements, an agency is to submit its proposed information collection for OMB review, whether for new information collections or re-approval of existing information collections. Before an agency submits a proposed information collection for approval, an agency may invest substantial resources to prepare to conduct an information collection. An agency may undertake, among other things, designing the information collection, testing, and consulting with users. For example, over the last 8 years, BLS has led an interagency effort designed to develop a measure of the employment rate of adults with disabilities pursuant to Executive Order 13078 signed by President Clinton in 1998. This effort has entailed planning, developing, and testing disability questions to add to the CPS. OMB is responsible for determining whether each information collection is necessary for the proper performance of the agency’s functions. According to the Statistical Programs of the United States Government: Fiscal Year 2006, an estimated $5.4 billion in fiscal year 2006 was requested for statistical activities. The PRA also requires the establishment of the Interagency Council on Statistical Policy (ICSP). 
According to the Statistical Programs of the United States Government: Fiscal Year 2006, the ICSP is a vehicle for coordinating statistical work, particularly when activities and issues cut across agencies; for exchanging information about agency programs and activities; and for providing advice and counsel to OMB on statistical matters. The PRA also requires OMB to annually report on the paperwork burden imposed on the public by the federal government and efforts to reduce this burden, which is reported in Managing Information Collection: Information Collection Budget of the United States Government. For example, the 2006 Information Collection Budget reported on agency initiatives to reduce paperwork, such as HHS’s assessment of its information collections with a large number of burden hours, which resulted in reducing the department’s overall burden hours by over 36 million in fiscal year 2005. OMB produces the annual Statistical Programs of the United States Government report to fulfill its responsibility under the PRA to prepare an annual report on statistical program funding. This document outlines the effects of congressional actions and the funding for statistics proposed in the President’s current fiscal year budget, and highlights proposed program changes for federal statistical activities. It also describes a number of long-range planning initiatives to improve federal statistical programs, including making better use of existing data collections while protecting the confidentiality of statistical information. At the time of our review, OMB had approved 584 new and ongoing statistical and research surveys as recorded in the database of OMB- approved information collections. OMB uses the database for tracking purposes, as it provides the only centralized information available on the characteristics of the surveys that OMB has approved. The database contains information on some, but not all, of the characteristics of the information collections. The information that agencies provide in the packages they submit to OMB for approval includes additional data, such as the estimated cost. Statistical and research surveys represent about 7 percent of the total universe of 8,463 OMB-approved information collections, the majority of which, as shown in figure 1, are for regulatory or compliance and application for benefits purposes. Although there are certain surveys funded through grants and contracts that are not approved by OMB under the PRA, OMB stated that there is no comprehensive list of these surveys. Forty percent of OMB-approved statistical and research surveys were administered to individuals and households, as shown in figure 2. Annual estimated burden hours are defined as the amount of time for the average respondent to fill out a survey times the number of respondents. Figure 3 shows the range of burden hours, for general purpose research and statistics information collections, with about 35 percent of the surveys each accounting for 1,000 or fewer total burden hours. According to an OMB official, the electronic system, Regulatory Information Service Center Office of Information and Regulatory Affairs Consolidated Information System, has automated the agency submission and OMB review process. This new system, which was implemented in July of 2006, is intended to allow OMB and agency officials to search information collection titles and abstracts for major survey topics and key words. 
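As a simple arithmetic illustration of the burden-hour definition above—using hypothetical values, not figures from any OMB-approved collection—a survey that takes the average respondent half an hour to complete and is administered to 2,000 respondents would account for 1,000 annual burden hours, the threshold noted in figure 3:

```python
# Hypothetical values chosen only to illustrate the burden-hour calculation.
average_response_time_hours = 0.5   # time for the average respondent to complete the survey
number_of_respondents = 2000        # number of people asked to respond each year

annual_burden_hours = average_response_time_hours * number_of_respondents
print(annual_burden_hours)          # 1000.0 total annual burden hours
```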
Table 2 provides information from agency officials and documents for the selected surveys that we reviewed in more depth. For these seven surveys, the sample sizes ranged from 5,000 individuals for the NHANES to 55,000 housing units for the AHS. The NHANES has a much smaller sample size and greater cost (as compared to the other surveys with similar burden hours) because it includes both an interview and a physical examination in a mobile exam center. The physical examination can include body measurements and tests and procedures, such as a blood sample and dental screening, to assess various aspects of respondents’ health. Other differences among the surveys we reviewed included their specific purposes (e.g., to obtain health information or demographics data); the time period considered (some of the surveys provide data as of a certain point in time while others are longitudinal and follow the same respondents over a period of time); and the frequency with which the surveys were conducted. In addition, many of these surveys have been in existence for decades. Of the seven surveys we reviewed, five are defined by the Statistical Programs of the United States Government Fiscal Year 2006 as major household surveys (ACS, AHS, CPS, NHIS, and SIPP), and in addition MEPS’s household sample is a sub-set of NHIS’s sample. The ACS, unlike the other surveys, is mandatory and will replace the decennial census long-form. In addition to the surveys that we reviewed, two other surveys, the Consumer Expenditure Surveys and the National Crime Victimization Survey, are also defined by the Statistical Programs of the United States Government of 2006 as major household surveys. Agencies and OMB have procedures intended to identify and prevent unnecessary duplication in information collections. Agencies are responsible for certifying that an information collection is not unnecessarily duplicative of existing information as part of complying with OMB’s approval process for information collections. OMB has developed guidance that agencies can use in complying with the approval process. Once an agency submits a proposed information collection to OMB, OMB is required to review the agency’s paperwork, which includes the agency’s formal certification that the proposed information collection is not unnecessarily duplicative. “For example, unnecessary duplication exists if the need for the proposed collection can be served by information already collected for another purpose - such as administrative records, other federal agencies and programs, or other public and private sources. If specific information is needed for identification, classification, or categorization of respondents; or analysis in conjunction with other data elements provided by the respondent, and is not otherwise available in the detail necessary to satisfy the purpose and need for which the collection is undertaken; and if the information is considered essential to the purpose and need of the collection, and/or to the collection methodology or analysis of results, then the information is generally deemed to be necessary, and therefore not duplicative within the meaning of the PRA and OMB regulation.” When an agency is ready to submit a proposed information collection to OMB, the agency’s CIO is responsible for certifying that the information collection satisfies the PRA standards, including a certification that the information collection is not unnecessarily duplicative of existing information sources. 
We have previously reported that agency CIOs across the government generally reviewed information collections and certified that they met the standards in the act. However, our analysis of 12 case studies at the Internal Revenue Service (IRS) and the Department of Veterans Affairs, HUD, and DOL, showed that the CIOs certified collections even though support was often missing or incomplete. For example, seven of the cases had no information and two included only partial information on whether the information collection avoided unnecessary duplication. Further, although the PRA requires that agencies publish public notices in the Federal Register and otherwise consult with the public, agencies governmentwide generally limited consultation to the publication of the notices, which generated little public comment. Without appropriate support and public consultation, agencies have reduced assurance that collections satisfy the standards in the act. We recommended that the Director of OMB alter OMB’s current guidance to clarify the kinds of support that it asks agency CIOs to provide for certifications and to direct agencies to consult with potential respondents beyond the publication of Federal Register notices. OMB has not implemented these recommendations. OMB has three different guidance publications that agencies can consult in the process of developing information collection submissions, according to OMB officials. The three guidance publications address unnecessary duplication to varying degrees. The draft, Implementing Guidance for OMB Review of Agency Information Collection, provides, among other things, instructions to agencies about how to identify unnecessary duplication of proposed information collections with existing available information sources. OMB’s Questions and Answers When Designing Surveys for Information Collections discusses when it is acceptable to duplicate questions used in other surveys. The publication also encourages agencies to consult with OMB when they are proposing new surveys, major revisions, or large-scale experiments or tests, before an information collection is submitted. For example, when BLS was developing its disability questions for the CPS, BLS officials stated that they consulted OMB on numerous occasions. OMB officials also said that when they are involved early in the process, it is easier to modify an agency’s plan for an information collection. OMB officials told us that an agency consultation with OMB before an information collection is developed can provide opportunities to identify and prevent unnecessary duplication. For example, according to an OMB official, while OMB was working with the Federal Emergency Management Agency (FEMA) to meet the need for information on the impact of Hurricane Katrina, OMB identified a survey partially funded by the National Institute of Mental Health (NIMH) that was in the final stages of design and would be conducted by Harvard University—the Hurricane Katrina Advisory Group Initiative. OMB learned that this survey, which was funded through a grant (and was not subject to review and approval under the PRA), planned to collect data on many of the topics that FEMA was interested in. OMB facilitated collaboration between FEMA and HHS and ultimately, FEMA was able to avoid launching a new survey by enhancing the Harvard study. 
OMB’s draft of the Proposed Standards and Guidelines for Statistical Surveys, which focuses on statistical surveys and their design and methodology, did not require that agencies assess potential duplication with other available sources of information as part of survey planning. We suggested that OMB require that when agencies are initiating new surveys or major revisions of existing surveys, they include in their written plans the steps they take to ensure that a survey is not unnecessarily duplicative of available information sources. OMB has incorporated this suggestion. Under the PRA, OMB is responsible for reviewing proposed information collections to determine whether a proposed information collection meets the PRA criteria, which include a requirement that it not unnecessarily duplicate available information. According to an OMB official responsible for reviewing information collections, OMB’s review process consists of several steps. She said that once an agency has submitted the proposed information collection package to OMB, the package is sent to the appropriate OMB official for review. When there is a need for clarification or questions exist, this OMB official told us that OMB communicates with the agency either through telephone conferences or via e-mail. After approval, OMB is required to assign a number to each approved information collection, which the agencies are then to include on their information collection (e.g., survey) forms. In addition to its responsibilities for reviewing proposed information collections, OMB also contributes to or leads a wide range of interagency efforts that address federal statistics. For example, OMB chairs the ICSP. The ICSP is a vehicle for coordinating statistical work, exchanging information about agency programs and activities, and providing advice and counsel to OMB on statistical matters. The council consists of the heads of the principal statistical agencies, plus the heads of the statistical units in the Environmental Protection Agency, IRS, National Science Foundation, and Social Security Administration (SSA). According to an OMB official, the ICSP can expand its membership for working groups to address specific topics. For example, the ICSP established an employment-related health benefits subcommittee and included non-ICSP agencies, such as HHS’s AHRQ (which co-chaired the subcommittee). The ICSP member agencies exchange experiences and solutions with respect to numerous topics of mutual interest and concern. For example, in the past year, the council discussed topics such as the revision of core standards for statistical surveys, opportunities for interagency collaboration on information technology development and investment, and sample redesign for the major household surveys with the advent of the ACS. On the basis of OMB’s definition of unnecessary duplication, the surveys we reviewed could be considered to contain necessary duplication. To examine selected surveys to assess the extent of unnecessary duplication in areas with similar subject matter, we looked at surveys that addressed three areas: (1) people without health insurance (CPS, NHIS, MEPS, and SIPP), (2) people with disabilities (NHIS, NHANES, MEPS, SIPP, and ACS), and (3) the housing questions on the AHS and ACS. We found that the selected surveys had duplicative content and asked similar questions in some cases. However, the agencies and OMB judged that this was not unnecessary duplication given the differences among the surveys.
In some instances, the duplication among these surveys yielded richer data, allowing fuller descriptions of specific topics and providing additional perspectives on a topic, such as by focusing on the different sources and effects of disabilities. The seven surveys we reviewed originated at different times and differ in many aspects, including the samples drawn, the time periods measured, the types of information collected, and level of detail requested. These factors can affect costs and burden hours associated with the surveys. In addition, the differences can create confusion in some cases because they produce differing estimates and use different definitions. Although the CPS, NHIS, MEPS, and SIPP all measure people who do not have health insurance, the surveys originated at different times and differ in several ways, including the combinations of information collected that relate to health insurance, questions used to determine health insurance status, and time frames. Health insurance status is not the primary purpose of any of these surveys, but rather one of the subject areas in each survey. In addition, because each survey has a different purpose, each survey produces a different combination of information related to people’s health insurance. The CPS originated in 1948 and provides data on the population’s employment status. Estimates from the CPS include employment, unemployment, earnings, hours of work, and other indicators. Supplements also provide information on a variety of subjects, including information about employer-provided benefits like health insurance. CPS also provides information on health insurance coverage rates for sociodemographic subgroups of the population. The time frame within which data is released varies; for example, CPS employment estimates are released 2-3 weeks after collection while supplement estimates are released in 2-9 months after collection. The NHIS originated in 1957 and collects information on reasons for lack of health insurance, type of coverage, and health care utilization. The NHIS also collects data on illnesses, injuries, activity limitations, chronic conditions, health behaviors, and other health topics, which can be linked to health insurance status. HHS stated that although health insurance data are covered on other surveys, NHIS’s data on health insurance is key to conducting analysis of the impact of health insurance coverage on access to care, which is generally not collected on other surveys. The MEPS originated in 1977 and provides data on health insurance dynamics, including changes in coverage and periods without coverage. The MEPS augments the NHIS by selecting a sample of NHIS respondents and collecting additional information on the respondents. The MEPS also links data on health services spending and health insurance status to other demographic characteristics of survey respondents. The MEPS data can also be used to analyze the relationship between insurance status and a variety of individual and household characteristics, including use of and expenditures for health care services. The SIPP originated in 1983 in order to provide data on income, labor force, and government program participation. The information collected in the SIPP, such as the utilization of health care services, child well-being, and disability, can be linked to health insurance status. The SIPP also measures the duration of periods without health insurance. 
Because the surveys use different methods to determine health insurance status, they can elicit different kinds of responses and consequently differing estimates within the same population. To determine if a person is uninsured, surveys use one of two methods: they ask respondents directly if they lack insurance coverage or they classify individuals as uninsured if they do not affirmatively indicate that they have coverage. The CPS and the NHIS directly ask respondents whether they lack insurance coverage. While the difference between these approaches may seem subtle, using a verification question prompts some people who did not indicate any insurance coverage to rethink their status and indicate coverage that they had previously forgotten to mention. The surveys also differ both in the time period respondents are asked to recall and in the time periods measured when respondents did not have health insurance. Hence, the surveys produce estimates that do not rely upon standardized time or recall periods and as a result are not directly comparable. The ASEC to the CPS is conducted in February, March, and April and asks questions about the prior calendar year. An interviewer asks the respondent to remember back for the previous calendar year which can be as long as 16 months in the April interview. The other three surveys, in contrast, asked about coverage at the time of the interview. Because a respondent’s ability to recall information generally degrades over time, most survey methodologists believe that the longer the recall period, the less accurate the answers will be to questions about the past, such as exactly when health insurance coverage started or stopped, or when it changed because of job changes. Another difference is the time period used to frame the question. The CPS asked whether the respondent was uninsured for an entire year, while NHIS, MEPS, and SIPP asked whether the individual was ever insured, or was uninsured at the time of the interview, for the entire last year, and at any time during the year. Table 3 illustrates the differing estimates obtained using data from the four selected surveys. While these differences can be explained, the wide differences in the estimates are of concern and have created some confusion. For example, the 2004 CPS estimate for people who were uninsured for a full year is over 50 percent higher than the NHIS estimate for that year. HHS has sponsored several interagency meetings on health insurance data, which involved various agencies within HHS and the Census Bureau. The meetings focused on improving estimates of health insurance coverage and included, among other things, examining how income data are used, exploring potential collaboration between HHS and the Census Bureau on whether the CPS undercounts Medicaid recipients, examining health insurance coverage rates, and discussing a potential project to provide administrative data for use in the CPS. As a result, HHS created a Web site with reports and data on relevant surveys and HHS’s office of the Assistant Secretary for Planning and Evaluation (ASPE) produced the report Understanding Estimates of the Uninsured: Putting the Differences in Context with input from the Census Bureau in an effort to explain the differing estimates. Similarly, although the NHIS, NHANES, MEPS, SIPP, and ACS all estimate the percentage of the population with disabilities, the surveys define disability differently and have different purposes and methodologies. 
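The difference between the two classification approaches described above can be sketched as follows; the response fields are hypothetical, and the sketch is not drawn from any of the surveys’ actual instruments or processing systems.

```python
# Hypothetical coverage responses; field names are illustrative only.
COVERAGE_TYPES = ["employer", "medicaid", "medicare", "private", "other"]

def uninsured_by_inference(responses: dict) -> bool:
    """Approach 1: count a person as uninsured if no coverage type is affirmatively reported."""
    return not any(responses.get(c, False) for c in COVERAGE_TYPES)

def uninsured_with_verification(responses: dict) -> bool:
    """Approach 2: ask directly; a verification prompt lets respondents who reported no
    coverage confirm their status or mention coverage they had forgotten."""
    if any(responses.get(c, False) for c in COVERAGE_TYPES):
        return False
    return responses.get("confirms_no_coverage", True)

person = {"employer": False, "medicaid": False, "confirms_no_coverage": False}
print(uninsured_by_inference(person))       # True: counted as uninsured
print(uninsured_with_verification(person))  # False: verification recovers forgotten coverage
```

As the sketch suggests, the two approaches can classify the same respondent differently, which contributes to the differing estimates shown in table 3.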
In addition to these five surveys, which measure aspects of disability, BLS is also currently developing questions to measure the employment levels of the disabled population. HHS also stated that disability is included on multiple surveys so that disability status can be analyzed in conjunction with other information that an agency needs. For example, disability information is used by health departments to describe the health of the population, by departments of transportation to assess access to transportation systems, and by departments of education to assess the educational attainment of people with disabilities. The lack of consistent definitions is not unique to surveys; there are over 20 different federal agencies that administer almost 200 different disability programs for purposes of entitlement to public support programs, medical care, and government services. Although each of the surveys asks about people’s impairments or functionality in order to gauge a respondent’s disability status, there are some differences in how disability is characterized. For example, the NHIS asks respondents if they are limited in their ability to perform age-dependent life and other activities. The NHIS also asks about the respondent needing assistance with performing activities of daily living and instrumental activities of daily living. The NHANES measures the prevalence of physical and functional disability for a wide range of activities in children and adults. Extensive interview information on self-reported physical abilities and limitations is collected to assess the capacity of the individual to do various activities without the use of aids, and the level of difficulty in performing the task. The MEPS provides information on days of work or school missed due to disability. The SIPP queries whether the respondent has limitations of sensory, physical, or mental functioning and limitations on activities due to health conditions or impairments. The ACS asks about vision or hearing impairment, difficulty with physical and cognitive tasks, and difficulty with self-care and independent living. Because surveys produce different types of information on disability, they can provide additional perspectives on the sources and effects of disabilities, but they can also cause confusion because of the differences in the way disability is measured. The NHIS contains a broad set of data on disability-related topics, including the limitation of functional activities, mental health questions used to measure psychological distress, limitations in sensory ability, and limitations in work ability. Moreover, the NHIS provides data, for those persons who indicated a limitation performing a functional activity, about the source or condition of their functional limitation. The NHANES links medical examination information to disability. The MEPS measures how much individuals spend on medical care for a person with disabilities and can illustrate changes in health status and health care expenses. The SIPP provides information on the use of assistive devices, such as wheelchairs and canes. Finally, the ACS provides information on many social and economic characteristics, such as school enrollment for people with disabilities as well as the poverty and employment status of people with different types of disabilities. However, the estimates of disability in the population that these surveys produce can vary widely. A Cornell University study compared disability estimates among the NHIS, SIPP, and ACS.
A number of categories of disability were very similar, such as the nondisabled population, while others, such as the disabled population or people with sensory disabilities, had widely varying estimates, as shown in table 4. For example, according to data presented in a Cornell University study that used survey questions to define and subsequently compare different disability measures across surveys, the SIPP 2002 estimate of people with sensory disabilities for ages 18-24 was more than six times the NHIS estimate for that year for ages 18-24. In commenting on this report, the DOC and HHS acknowledged that comparing the NHIS and SIPP with respect to sensory disabilities is problematic. HHS officials noted that the confusion caused by these different estimates derives mostly from the lack of a single definition of disability, which leads to data collections that use different questions and combinations of information to define disability status. Because the concept of disability varies, with no clear consensus on terminology or definition, and there are differing estimates, several federal and international groups are examining how the associated measures of disability could be improved. HHS’s Disability Workgroup, which includes officials from HHS and the Department of Education, examines how disability is measured and used across surveys. The task of another federal group, the Subcommittee on Disability Statistics of the Interagency Committee on Disability Research, is to define and standardize the disability definition. The Washington Group on Disability Statistics (WGDS), an international workgroup sponsored by the United Nations in which OMB and NCHS participate, is working to facilitate the comparison of data on disability internationally. The WGDS aims to guide the development of a short set or sets of disability measures that are suitable for use in censuses, sample-based national surveys, or other statistical formats, for the primary purpose of informing policy on equalization of opportunities. The WGDS is also working to develop one or more extended sets of survey items to measure disability, or guidelines for their design, to be used as components of population surveys or as supplements to specialty surveys. HHS added that the interest in standardizing the measurement of disability status is also driven by the desire to add a standard question set to a range of studies so that the status of persons with disabilities can be described across studies. In 2002, we reported that the AHS and ACS both covered the subject of housing. Of the 66 questions on the 2003 ACS, 25 were in the section on housing characteristics, and all but one of these questions were the same as or similar to the questions on the AHS. For example, both the AHS and the ACS ask how many bedrooms a housing unit has. However, the two surveys differ in purposes and scope. The purpose of the AHS is to collect detailed housing information on the size, composition, and state of housing in the United States, and to track changes in the housing stock over time, according to a HUD official. To that end, the AHS includes about 1,000 variables, according to a HUD official, such as the size of housing unit, housing costs, different building types, plumbing and electrical issues, housing and neighborhood quality, mortgage financing, and household characteristics. The AHS produces estimates at the national level, metropolitan level for certain areas, and homogenous zones of households with fewer than 100,000 households. 
The AHS is conducted every 2 years nationally and every 6 years in major metropolitan areas, except for six areas, which are surveyed every 4 years. In contrast, the level of housing data in the ACS is much less extensive. The ACS is designed to replace the decennial Census 2010 long-form and covers a wide range of subjects, such as income, commute time to work, and home values. The ACS provides national and county data and, in the future, will provide data down to the Census tract level, according to a Census Bureau official. The ACS is designed to provide communities with information on how they are changing, with housing being one of the main topic areas along with a broad range of household demographic and economic characteristics. The AHS and ACS also have different historical and trend data and data collection methods. The AHS returns to the same housing units year after year to gather data; therefore, it produces data on trends that illustrate the flow of households through the housing stock, according to a HUD official, while the ACS samples new households every month. Historical data are also available from the AHS from the 1970s onward, according to a HUD official. Analysts can use AHS data to monitor the interaction among housing needs, demand, and supply, as well as changes in housing conditions and costs. In addition, analysts can also use AHS data to support the development of housing policies and the design of housing programs appropriate for different groups. HUD uses the AHS data, for example, to analyze changes affecting housing conditions of particular subgroups, such as the elderly. The AHS also plays an important role in HUD’s monitoring of the lending activities of the government-sponsored enterprises, Fannie Mae and Freddie Mac, in meeting their numeric goals for mortgage purchases serving minorities, low-income households, and underserved areas. AHS’s characteristic of returning to the same housing units year after year provides the basis for HUD’s Components of Inventory Change (CINCH) and Rental Dynamics analyses. The CINCH reports examine changes in housing stock over time by comparing the status and characteristics of housing units in successive surveys. The Rental Dynamics program, which is a specialized form of CINCH, looks at rental housing stock changes, with an emphasis on changes in affordability. Another use of AHS data has been for calculating certain fair market rents (FMR), which HUD uses to determine the amount of rental assistance subsidies for major metropolitan areas between the decennial censuses. However, HUD plans to begin using ACS data for fiscal year 2006 FMRs. As we previously reported, this could improve the accuracy of FMRs because the ACS provides more recent data that closely matches the boundaries of HUD’s FMR areas than the AHS. In our 2002 report, which was published before the ACS was fully implemented, we also identified substantial overlap for questions on place of birth and citizenship, education, labor force characteristics, transportation to work, income, and, in particular, housing characteristics. We recommended that the Census Bureau review proposed ACS questions for possible elimination that were asked on the AHS to more completely address the possibility of reducing the reporting burden in existing surveys. 
The Census Bureau responded that they are always looking for opportunities to streamline, clarify, and reduce respondent burden, but that substantial testing would be required before changes can be made in surveys that provide key national social indicators. In addition to efforts underway to try to reconcile inconsistencies among surveys that address the same subject areas, a number of major changes have occurred or are planned to occur that will affect the overall portfolio of major household surveys. As previously discussed, the ACS was fully implemented in 2005 and provides considerable information that is also provided in many other major household surveys. The ACS is the cornerstone of the government’s effort to keep pace with the nation’s changing population and ever-increasing demands for timely and relevant data about population and housing characteristics. The new survey will provide current demographic, socioeconomic, and housing information about America’s communities every year, information that until now was only available once a decade. Starting in 2010, the ACS will replace the long-form census. As with the long-form, information from the ACS will be used to administer federal and state programs and distribute more than $200 billion a year. Detailed data from national household surveys can be combined with data from the ACS to create reliable estimates for small geographic areas using area estimation models. Partly in response to potential reductions in funding for fiscal year 2007, the Census Bureau is planning to reengineer the SIPP with the intent of ultimately providing better information at lower cost. SIPP has been used to estimate future costs of certain government programs. For example, HUD used SIPP’s longitudinal capacity to follow families over time to determine that households with high-rent burdens in one year move in and out of high-rent burden status over subsequent years. Therefore, although the overall size of the population with worst-case housing needs is fairly stable, the households comprising this population change with considerable frequency—an issue that HUD told us is potentially important in the design of housing assistance programs. Although the SIPP has had problems with sample attrition and releasing data in a timely manner, which the reengineering is intended to ameliorate, there has been disagreement about this proposal among some users of SIPP data. Census Bureau officials said they are meeting with internal and external stakeholders and are considering using administrative records. Census Bureau officials told us that they could develop a greater quality survey for less money, with a final survey to be implemented in 2009. They also said that they may consider using the ACS or CPS sampling frame. In addition to the seven surveys discussed previously, we also identified examples of how, over the years, agencies have undertaken efforts to enhance their surveys’ relevance and efficiency through steps such as using administrative data in conjunction with survey data, reexamining and combining or eliminating surveys, and redesigning existing surveys. The Census Bureau and BLS have used administrative data collected for the administration of various government programs in conjunction with survey data. The Census Bureau and BLS have used the administrative data to target specific populations to survey and to obtain information without burdening survey respondents. 
The Census Bureau uses administrative data in combination with survey data to produce its Economic Census business statistics, which, every 5 years, profile the U.S. economy from the national to the local level. The Economic Census relies on the centralized Business Register, which is compiled from administrative records from IRS, SSA, and BLS, along with lists of multi-establishment businesses that the Census Bureau maintains. The Business Register contains basic economic information for over 8 million employer businesses and over 21 million self-employed businesses. The Economic Census uses the Business Register as the sampling frame to identify sets of businesses with specific characteristics, such as size, location, and industry sector. BLS also uses a combination of administrative and survey data to produce its quarterly series of statistics on gross job gains and losses. BLS uses administrative data provided by state workforce agencies that compile and forward quarterly state unemployment insurance (UI) records to BLS. These state agencies also submit employment and wage data to BLS. The data states provide to BLS include establishments subject to state UI laws and federal agencies subject to the Unemployment Compensation for Federal Employees program, covering approximately 98 percent of U.S. jobs. These administrative data enable BLS to obtain information on many businesses without having to impose a burden on respondents. BLS augments the administrative data with two BLS-funded surveys conducted by the states. The Annual Refiling Survey updates businesses’ industry codes and contact information, and the Multiple Worksite Report survey provides information on multiple work sites for a single business, data that are not provided by the UI records, enabling BLS to report on business statistics by geographic location. Combining the data from these surveys with administrative data helps BLS increase accuracy, update information, and include additional details on establishment openings and closings. However, because of restrictions on information sharing, BLS is not able to access most of the information that the Census Bureau uses for its business statistics because much of this information is commingled with IRS data. The Confidential Information Protection and Statistical Efficiency Act of 2002 (CIPSEA, 44 U.S.C. § 3501 note) authorized identifiable business records to be shared among the Bureau of Economic Analysis (BEA), BLS, and the Census Bureau for statistical purposes. CIPSEA, however, did not change the provisions of the Internal Revenue Code that preclude these agencies from sharing tax return information for statistical purposes. OMB officials stated that there is continued interest in examining appropriate CIPSEA companion legislation on granting greater access for the Census Bureau, BLS, and BEA to IRS data. Several agencies have reexamined some of their surveys, which has led to their elimination or modification. The Census Bureau, for example, reviewed its portfolio of Current Industrial Reports (CIR) program surveys of manufacturing establishments, which resulted in the elimination and modification of some surveys. Census Bureau officials said they decided to undertake this reexamination in response to requests for additional data that could not be addressed within existing budgets without eliminating current surveys.
They were also concerned that the character of manufacturing, including many of the industries surveyed by the CIR program, had changed since the last reexamination of the CIR program, which had been conducted over 10 years earlier. Census Bureau officials developed criteria with key data users and used them to rank 54 CIR program surveys. The criteria included 11 elements, such as whether the survey results were important to federal agencies or other users, and the extent to which the subject matter represented a growing economic activity in the United States. The recommendations the Census Bureau developed from this review were then published in the Federal Register, and after considering public comments, the Census Bureau eliminated 11 surveys, including ones on knit fabric production and industrial gases. The Census Bureau also redesigned 7 surveys, scaling back the information required to some extent and updating specific product lists. As a result of this reexamination, the Census Bureau was able to add a new survey on "analytical and biomedical instrumentation," and it is considering whether another new CIR program survey is needed to keep pace with manufacturing industry developments. Census Bureau officials told us that they plan to reexamine the CIR surveys periodically in the future. HHS has also reexamined surveys to identify improvements, in part by integrating a Department of Agriculture (USDA) survey that covered similar content into HHS's NHANES. For about three decades, HHS and USDA conducted surveys that each contained questions on food intake and health status (NHANES and the Continuing Survey of Food Intakes by Individuals, respectively). HHS officials stated that HHS and USDA officials had considered for several years how the two surveys could be merged before taking action. According to HHS officials, several factors led to the merger of the two surveys, including USDA funding constraints, the direct involvement of senior-level leadership on both sides to work through the issues, and HHS officials' realization that the merger would enable them to add an extra day of information gathering to the NHANES. Integrating the two surveys into the NHANES made it more comprehensive by adding a follow-up health assessment. According to HHS officials, adding this component to the original in-person assessment allows agency officials to better link dietary and nutrition information with health status. Another mechanism HHS has established is a Data Council, which, in addition to other activities, assesses proposed information collections. The Data Council oversees the entire department's data collections to ensure that the department relies, where possible, on existing core statistical systems for new data collections rather than on the creation of new systems. The Data Council implements this strategy by communicating and sharing plans, conducting annual reviews of proposed data collections, and reviewing major survey modifications and any new survey proposals. According to HHS officials, in several instances, proposals for new surveys and statistical systems have been redirected and coordinated with current systems. For example, HHS officials stated that when the Centers for Disease Control and Prevention (CDC) proposed a new survey on youth tobacco use, the Data Council directed it to the Substance Abuse and Mental Health Services Administration's National Survey on Drug Use and Health.
The Data Council stated that by adding questions on brand names, CDC was able to avoid creating a new survey to measure youths' tobacco use. OMB recognizes that the federal government should build upon agencies' practice of reexamining individual surveys to conduct a comprehensive reexamination of the portfolio of major federal household surveys, in light of the advent of the ACS. OMB officials acknowledged that this effort would be difficult and complex and would take time. According to OMB, integrating or redesigning the portfolio of major household surveys could be enhanced if, in the future, there is some flexibility to modify the ACS design and methods. For example, an OMB official stated that periodically using supplements or flexible modules within the ACS might enable agencies to integrate or modify portions of other major household surveys. OMB officials indicated that such an effort would likely not happen until after the 2010 decennial census, a critical stage for the ACS, when ACS data can be compared with 2010 Census data. OMB officials said, and their long-range plans indicate, that they expect improved integration of the portfolio of related major household surveys with the advent of the ACS. For example, the Statistical Programs of the United States Government: Fiscal Year 2006 describes plans for redesigning the samples for demographic surveys, scheduled for initial implementation after 2010, when the ACS may become the primary data source. In light of continuing budgetary constraints, as well as major changes planned and underway within the U.S. statistical system, the portfolio of major federal household surveys could benefit from a holistic reexamination. Many of the surveys have been in place for several decades, and their content and design may not have kept pace with changing information needs. The duplication in content in some surveys, while considered necessary, may be a reflection of incremental attempts over time to address information gaps as needs changed. OMB and the statistical agencies have attempted to address some of the more troublesome aspects of this duplication by providing explanations of the differences in health insurance estimates and by working to develop more consistent definitions of disability. These efforts, however, while helpful, address symptoms of the duplication without tackling the larger issues of need and purpose. In many cases, the government is still trying to do business in ways that are based on conditions, priorities, and approaches that existed decades ago and are not well suited to addressing today's challenges. Thus, while the duplicative content of the surveys can be explained, there may be opportunities to modify long-standing household surveys, both to take advantage of changes in the statistical system and to meet new information needs in the face of ever-growing constraints on budgetary resources. Some agencies have begun to take steps to reevaluate their surveys in response to budget constraints and changing information needs. Agencies have reexamined their surveys and used administrative data in conjunction with survey data to enhance their data collection efforts. These actions, however, have focused on individual agency and user perspectives.
By building upon these approaches and taking a more comprehensive focus, a governmentwide reexamination could help reduce costs in an environment of constrained resources and help prioritize information needs in light of current and emerging demands. Given the upcoming changes in the statistical system, OMB should lead the development of a new vision of how the major federal household surveys can best fit together. OMB officials told us they are beginning to think about a broader effort to better integrate the portfolio of major household surveys once the ACS has been successfully implemented. Providing greater coherence among the surveys, particularly in definitions and time frames, could help reduce costs to the federal government and associated burden hours. The Interagency Council on Statistical Policy (ICSP) could be used to bring together relevant federal agencies, including those that are not currently part of the ICSP. The ICSP has the leadership authority and, in light of the comprehensive scope of a reexamination initiative, could draw on leaders from the agencies that collect or are major users of federal household survey data. While OMB officials have stated that the ACS may not have demonstrated its success until after 2010, the complexity and time needed to reexamine the portfolio of major federal household surveys mean that it is important to start planning for that reexamination. To deal with the longer-term considerations crucial in making federally funded surveys more effective and efficient, GAO recommends that the Director of OMB work with the Interagency Council on Statistical Policy to plan for a comprehensive reexamination to identify opportunities for redesigning or reprioritizing the portfolio of major federal household surveys. We requested comments on a draft of this report from the Director of OMB and the Secretaries of Commerce, HHS, HUD, and Labor or their designees. We obtained oral and technical comments on a draft of this report from the Chief Statistician of the United States and her staff at OMB; written comments from the Acting Deputy Under Secretary for Economic Affairs at Commerce, the Assistant Secretary for Legislation at HHS, and the Assistant Secretary for Policy Development and Research at HUD; and technical comments from the Acting Commissioner of BLS at Labor, which we incorporated in the report as appropriate. In commenting on a draft of the report, OMB officials stated that the draft report presented an interesting study that addressed an issue worth examining. OMB officials generally agreed with our recommendation, although they expressed concerns about the range of participants that might be involved in such a reexamination. We revised the recommendation to clarify that OMB should work with the Interagency Council on Statistical Policy rather than with all relevant stakeholders and decision makers. OMB officials also expressed concerns about moving from examining selected surveys in three subject areas to the conclusion that the entire portfolio of household surveys should be reexamined. In response, we clarified that we were recommending a comprehensive reexamination of the seven surveys that comprise the portfolio of major federal household surveys, most of which were included in our review. OMB officials also provided clarification on how we characterized their statements on reexamining the portfolio of major household surveys, which we incorporated into the report.
Each of the four departments provided technical clarifications that we incorporated into the report, as appropriate. In addition, HHS and HUD officials offered written comments on our findings and recommendation, which are reprinted in appendix II. HHS stated that a reexamination was not warranted without evidence of unnecessary duplication and also highlighted a number of examples of agency efforts to try to clarify varying estimates. However, we did not rely on evidence of duplication, but rather based our recommendation on other factors, including a need to provide greater coherence among the surveys and to take advantage of changes in the statistical system to reprioritize information needs and possibly help reduce costs to the federal government and associated burden hours. Further, in light of the major upcoming changes involving the ACS and SIPP, and in conjunction with constrained resources and changing information needs, we believe that the major household surveys should be considered from a broader perspective, not simply in terms of unnecessary duplication. HHS also provided a number of general comments. We incorporated additional information to reflect HHS's comments on the different uses of disability information, a standard set of disability questions, NHIS's coverage of access to care, and the fact that the MEPS sample is a subset of the NHIS sample. HHS's comments on differences in estimates and the lack of a single definition of disability were already addressed in the report. HHS also stated that NCHS works through various mechanisms to ensure that surveys are efficient. We support efforts to enhance efficiency and believe that our recommendation builds upon such efforts. HUD officials were very supportive of our recommendation, stating that such a reexamination is especially important as the ACS approaches full-scale data availability. In response to HUD's comments suggesting adding more information on SIPP and AHS, we expanded the report's discussion of the longitudinal dimension of SIPP and AHS. As agreed with your office, unless you publicly announce the contents of the report earlier, we plan no further distribution of it until 30 days from the date of the report. We will then send copies of this report to the appropriate congressional committees and to the Director of OMB and the Secretaries of Commerce, HHS, HUD, and Labor, as well as to other appropriate officials in these agencies. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6543 or steinhardtb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To answer our first objective of identifying the number and characteristics of Office of Management and Budget (OMB)-approved federally funded statistical and research surveys, we obtained the database of information collections that had been approved by OMB as of August 7, 2006. The information in the database is obtained from Form 83-I, which is part of an agency's submission for OMB approval of an information collection. As the approval is in effect for up to 3 years, this database reflects all those collections with OMB approval for their use as of that date, and is thus a snapshot in time.
Although OMB Form 83-I requires agencies to identify various types of information about an information collection, including whether the information collection will involve statistical methods, the form does not require agencies to identify which information collections involve surveys. Consequently, the database of OMB-approved information collections does not identify which information collections are surveys. Furthermore, the definition of information collections contained in the Paperwork Reduction Act (PRA) of 1980 is written in general terms and contains very few limits in scope or coverage. On the form, agencies can select from seven categories when designating the purpose of an information collection: (1) application for benefits, (2) program evaluation, (3) general purpose statistics, (4) audit, (5) program planning or management, (6) research, and (7) regulatory or compliance. When completing the form, agencies are asked to mark all categories that apply, denoting the primary purpose with a "P" and all others that apply with an "X." Since OMB does not further define these categories, the agency submitting the request determines which categories best describe the purpose(s) of the proposed collection. The choices made may reflect differing understandings of these purposes from agency to agency or among individuals in the same agency. The list of surveys contained in this report was derived from the database of OMB-approved information collections and contains all information collections that an agency designated as either "general purpose statistics" or "research" as the primary purpose; we used these two designations as a proxy for the universe of surveys. The directions to agencies completing the forms call for agencies to mark "general purpose statistics" when the data are collected chiefly for use by the public or for general government use without primary reference to the policy or program operations of the agency collecting the data. Agencies are directed to mark "research" when the purpose is to further the course of research, rather than for a specific program purpose. We did not determine how accurately or reliably agencies designated the purpose(s) of their information collections. It is also possible that the database contains other federally funded surveys that an agency did not designate under the primary purposes we used to identify surveys; these would not be included in our list of surveys. We took several steps to ensure that the database of OMB-approved information collections correctly recorded agency-submitted data and contained records of all Forms 83-I submitted to OMB. Our report, entitled Paperwork Reduction Act: New Approach May Be Needed to Reduce Burden on Public, GAO-05-424 (Washington, D.C.: May 20, 2005), examined the reliability of the database of OMB-approved information collections and concluded that the data were accurate and complete for the purposes of that report. Because this assessment was recent, we decided not to repeat it. We did, however, compare a sample of the surveys from the Inventory of Approved Information Collection on OMB's Web site to our copy of the database of OMB-approved collections. We found that all of the surveys in the Inventory of Approved Information Collection were contained in the database. Not all information collections require OMB approval under the PRA.
OMB’s draft Implementing Guidance for OMB Review of Agency Information Collection explains that in general, collections of information conducted by recipients of federal grants do not require OMB approval unless the collection meets one or both of the following two conditions: (1) the grant recipient is collecting information at the specific request of the sponsoring agency or (2) the terms and conditions of the grant require that the sponsoring agency specifically approve the information collection or collection procedures. As also stated in the OMB draft, information collections that are federally funded by contracts do not require OMB approval unless the information collection meets one or both of the following two conditions: (1) if the agency reviews and comments upon the text of the privately developed survey to the extent that it exercises control over and tacitly approves it or (2) if there is the appearance of sponsorship, for example, public endorsement by an agency, the use of an agency seal in the survey, or statements in the instructions of the survey indicating that the survey is being conducted to meet the needs of a federal agency. Although there are additional surveys funded through grants and contracts that are not approved by OMB under the PRA, OMB stated that there is no comprehensive list. In addition, the draft guidance states that the PRA does not apply to current employees of the federal government, military personnel, military reservists, and members of the National Guard with respect to all inquiries within the scope of their employment and for purposes of obtaining information about their duty status. For the second objective describing current agency and OMB roles in identifying and preventing unnecessary duplication, we took several different steps. We reviewed the PRA requirements for agencies and OMB. We also interviewed agency clearance officers at the Departments of Commerce, Health and Human Services, and Labor about their processes for submitting information collection packages to OMB. These agencies are the top three agencies in terms of funding for statistical activities in fiscal year 2006. We also interviewed OMB officials about their role in approving proposed information collections. For the third objective, through reviewing our reports and literature and by interviewing agency officials, we identified surveys with duplicative content. We identified duplication by looking for areas of potential duplication when several surveys contained questions on the same subject. This duplication was strictly based on similar content in the surveys on the same subject, specifically people without health insurance and those with disabilities. We also looked at the duplication in the subject area of housing between the American Community Survey and American Housing Survey, which had been identified by our previous work. We also looked at environmental surveys, but determined that there was not duplicative content with our major surveys. Once we had identified the three subject areas, we used literature and interviews to identify the current federally funded surveys that were cited as the major surveys in each theme. We did not focus on any particular type of survey, but rather chose the surveys that were cited as the major surveys in each theme. To learn more about the duplicative content between surveys related to these three themes, we reviewed relevant literature and agency documents. 
We also interviewed officials from OMB and the Departments of Commerce, Labor, Health and Human Services, and Housing and Urban Development. In addition, we interviewed experts from organizations that focus on federal statistics, such as the Council of Professional Associations on Federal Statistics and the Committee on National Statistics at the National Academies. Although we have included the Census Bureau's Survey of Income and Program Participation as part of our assessment of potential duplication, the fiscal year 2007 President's budget proposed to cut Census Bureau funding by $9.2 million, to which the Census Bureau responded by stating that it would reengineer the SIPP. Therefore, the fate of the SIPP is uncertain, and the reengineering has not been completed. For the fourth objective, we also interviewed OMB officials, agency officials, and representatives of organizations that focus on federal statistics. Through the combination of agency and OMB interviews, expert interviews, and research, we identified selected agency efforts to improve the efficiency and relevance of surveys. In addition to the contact named above, key contributors to this report were Susan Ragland, Assistant Director; Maya Chakko; Kisha Clark; Ellen Grady; Elizabeth M. Hosler; Andrea Levine; Jean McSween; Elizabeth Powell; and Greg Wilmoth.

Federal statistical information is used to make appropriate decisions about budgets, employment, and investments. GAO was asked to (1) describe selected characteristics of federally funded statistical or research surveys, (2) describe agencies' and the Office of Management and Budget's (OMB) roles in identifying and preventing unnecessary duplication, (3) examine selected surveys to assess whether unnecessary duplication exists in areas with similar subject matter, and (4) describe selected agencies' efforts to improve the efficiency and relevance of surveys. GAO reviewed agency documents and interviewed officials. Using this information and prior GAO work, GAO identified surveys with potential unnecessary duplication. At the time of GAO's review, OMB had approved 584 ongoing federal statistical or research surveys, of which 40 percent were administered to individuals and households. Under the Paperwork Reduction Act, agencies are to certify to OMB that each information collection does not unnecessarily duplicate existing information, and OMB is responsible for reviewing the content of agencies' submissions. OMB provides guidance that agencies can use to comply with the approval process and avoid unnecessary duplication, which OMB defines as information similar to or corresponding to information that could serve the agency's purpose and is already accessible to the agency. Based on this definition, the seven surveys GAO reviewed could be considered to contain necessary duplication. GAO identified three subject areas (people without health insurance, people with disabilities, and housing) covered in multiple major surveys that could potentially involve unnecessary duplication. Although they have similarities, most of these surveys originated over several decades, and they differ in their purposes, methodologies, definitions, and measurement techniques. These differences can produce widely varying estimates on similar subjects. For example, the estimate for people who were uninsured for a full year from one survey is over 50 percent higher than another survey's estimate for the same year.
While agencies have undertaken efforts to standardize definitions and explain some of the differences among estimates, these issues continue to present challenges. In some cases, agencies have reexamined their existing surveys to reprioritize, redesign, combine, and eliminate some of them. Agencies have also used administrative data in conjunction with their surveys to enhance the quality of information and limit respondent burden. These actions have been limited in scope, however. In addition, two major changes to the portfolio of major federal household surveys are underway. The American Community Survey is intended to replace the long-form decennial census starting in 2010. This survey is considered to be the cornerstone of the government's efforts to provide data on population and housing characteristics and will be used to distribute billions of dollars in federal funding. Officials are also redesigning the Survey of Income and Program Participation, which is used in estimating future costs of certain government benefit programs. In light of these upcoming changes, OMB recognizes that the federal government can build upon agencies' practices of reexamining individual surveys. To ensure that surveys initiated under conditions, priorities, and approaches that existed decades ago are able to cost-effectively meet current and emerging information needs, there is a need to undertake a comprehensive reexamination of the long-standing portfolio of major federal household surveys. The Interagency Council on Statistical Policy (ICSP), which is chaired by OMB and made up of the heads of the major statistical agencies, is responsible for coordinating statistical work and has the leadership authority to undertake this effort.
ETA administers Job Corps’ 125 centers through its national Office of Job Corps under the leadership of a National Director and a field network of six regional offices located in Atlanta, Boston, Chicago, Dallas, Philadelphia, and San Francisco (see fig. 1). Job Corps is operated primarily through contracts, which according to ETA officials, is unique among ETA’s employment and training programs; other such programs are generally operated through grants to states. Ninety-seven of Job Corps’ 125 centers are operated under contracts with large and small businesses, nonprofit organizations, and Native American tribes. The remaining 28 centers (called Civilian Conservation Centers) are operated by the U.S. Department of Agriculture’s (USDA) Forest Service through an interagency agreement with DOL. Both center contractors and the USDA Forest Service employ Job Corps center staff who provide program services to students. In addition, ETA contracts with 15 organizations to provide other supports for the program, including student outreach and career assistance. To be eligible for Job Corps, youth must be between the ages of 16 and 24 at the time of enrollment; meet low-income criteria; and have an additional barrier to education and employment, such as being homeless, a school dropout, or in foster care. Once enrolled, youth are assigned to a specific Job Corps center, usually one that is located nearest their home and offers a job training program of interest. The vast majority of students live at Job Corps centers in a residential setting, while the remaining students commute on a daily basis from their homes to their respective centers. This residential structure is unique among federal youth programs and enables Job Corps to provide a comprehensive array of services, including housing, meals, clothing, financial assistance, medical and dental care, and recreational activities, as well as academic instruction and job training. Because Job Corps is self-paced, the length of time students participate in the program varies. On average, students participate in the program for 9.6 months; however, the maximum enrollment period is generally 2 years. The Office of Job Corps is responsible for overseeing program operations and monitoring Job Corps costs. Two other offices work in conjunction with the Office of Job Corps and ETA’s six regional offices to monitor and manage Job Corps program costs: ETA’s Office of Financial Administration, which was created in August 2012 to strengthen internal controls and separate Job Corps’ budget and accounting roles and responsibilities from the Office of Job Corps; and ETA’s Office of Contracts Management, which was created in 2010 to centralize ETA’s acquisition and procurement functions, including soliciting, evaluating, and awarding Job Corps contracts. ETA also consults with and reports financial information on Job Corps to DOL’s Office of the Chief Financial Officer, which is responsible for department-wide financial management operations and reporting. In addition, ETA closely works with DOL’s Office of the Assistant Secretary for Administration and Management on budget and contracting issues. ETA manages Job Corps funds across three different time periods: the fiscal year (October 1 – September 30), program year (July 1 – June 30), and contract year, which varies by contract. Fiscal year. 
In DOL’s annual appropriations acts, Congress provides funding for three Job Corps accounts—Administration; Construction, Rehabilitation, and Acquisition (CRA); and Operations. Administration funds are made available on a fiscal year basis, while Operations and CRA funds are made available on a program year basis. Table 1 shows the appropriations for each of Job Corps’ accounts made in fiscal years 2011 through 2013. Program year. Job Corps operates on a program year basis, which begins on July 1 of the fiscal year for which the appropriation is made, and ends on June 30 of the following year. According to ETA officials, operating on a program year gives Job Corps the flexibility to respond to budget uncertainty. For example, if Congress passes a continuing resolution, Job Corps would be minimally affected, if at all, because its program year funding would cover its ongoing operations until June 30. However, ETA officials noted that if a continuing resolution lasts beyond June 30, it would likely affect Job Corps’ operations. Contract year. ETA awards contracts—and contractors manage their budgets—on a contract year basis, which may begin and end at different times based on the terms of the contract. In program years 2011 and 2012, ETA officials projected that Job Corps’ Operations account would not have sufficient funds to cover program costs. In May 2013, DOL’s inspector general reported that several factors contributed to Job Corps’ financial challenges. These factors included a combination of (1) untimely communication about projected costs that exceeded appropriations; (2) initial planning for costs that did not account for increased expenditures for three new centers; (3) inaccurate accounting for projected obligations; and (4) a lack of consistent monitoring of costs throughout the program year. In response to these findings, DOL’s inspector general recommended that ETA improve Job Corps’ internal controls in four areas: (1) policies, procedures, and communication of information; (2) budget execution; (3) data supporting spending projections and monitoring; and (4) monitoring of projected and actual costs. ETA officials told us that they have addressed all of the recommendations; however, as of January 2015, DOL’s inspector general had not closed most of them because it had not received sufficient documentation of ETA’s actions. ETA officials said that they plan to provide the additional documentation to the DOL inspector general, and are aiming to have all the recommendations closed in 2015. ETA addressed Job Corps’ financial challenges in program years 2011 and 2012 through a combination of funding transfers and spending cuts. In program year 2011, ETA primarily used transferred funds to resolve Job Corps’ projected funding gap, whereas in program year 2012, ETA relied more heavily on spending cuts. Over both program years, ETA used $38.4 million in funding transfers and implemented $75.3 million in spending cuts (see table 2). Most of these spending cuts—$60.3 million— were made through modifications to individual Job Corps center contracts and reductions to the USDA Forest Service budget for operating its Job Corps centers; the remaining cuts were implemented by the Office of Job Corps across all centers and at the national level. Over both program years, ETA’s spending cuts included: Three temporary enrollment suspensions. ETA suspended enrollment of new students in June 2012, from late November to December 2012, and from late January to late April 2013. 
Temporary cuts to training. ETA prohibited students from enrolling in advanced training or the college program from January to April 2013. Temporary reductions to maximum center enrollment levels. ETA reduced maximum enrollment levels by 22 percent, on average, across all centers. This eliminated unused slots but did not affect students who were already enrolled. Permanent cuts to student benefits and services. ETA reduced student stipends and transition pay that students receive upon graduation. ETA also increased the student-teacher ratio for academic classes and reduced health services and recreational activities. Permanent cuts to administrative costs. ETA reduced national contracts for academic support, career technical support, and the Job Corps Data Center. ETA also reduced national advertising as well as training and travel for center staff. After using transferred funds and implementing spending cuts, ETA reported that Job Corps ended program years 2011 and 2012 with $8 million and $40 million in obligated but unexpended operations funds, respectively. For example, ETA indicated that in program year 2012, it did not use these funds to make payments under the contracts because contractors’ costs were lower than expected. Specifically, ETA officials said that slower than anticipated enrollment after the final enrollment suspension was lifted resulted in lower than expected costs. ETA officials also noted that if Job Corps had been able to increase enrollment more quickly, the amount of obligated but unexpended funds would have been significantly less. Job Corps ended program year 2013 with $11 million in obligated but unexpended funds, which ETA officials described as a typical amount of “carry-over funds.” In a February 2014 response to a congressional inquiry and in its fiscal year 2015 Congressional Budget Justification, ETA stated that the $40 million in obligated but unexpended program year 2012 operations funds were to be “offset” from the contractors’ remaining program year 2013 allocations. This would allow program year 2013 funds to be used for a variety of purposes. In addition, according to ETA officials, the agency used funds that had been obligated for Job Corps contracts but remained unexpended at the end of program years 2011 and 2013 for Job Corps operations in program years 2012 and 2014, respectively, as specified in individual contracts. With respect to all three years—program years 2011, 2012, and 2013—the basis and extent to which ETA used funds made available for one program year for a subsequent program year, and the mechanisms by which it did so, are unclear but beyond the scope of this review. ETA used different overarching goals to guide its decision-making process for selecting measures to address Job Corps’ financial challenges in program years 2011 and 2012. ETA officials said that in program year 2011, their overall goal was to act quickly to end the program year within budget because they had only 2 months to resolve the projected funding gap before the program year ended. Officials said that in program year 2012, their overall goals were to: (1) align Job Corps’ operating expenses with its appropriation, and (2) implement sufficient spending cuts to avoid making additional cuts under sequestration in the following program year. 
In deciding the extent to which it would use funding transfers to address Job Corps’ financial challenges in program years 2011 and 2012, ETA considered three factors: (1) the time frame it had in which to act, (2) the long-term effect on Job Corps’ financial position, and (3) the effect on Job Corps accounts or other programs from which funds would be transferred. For example, in program year 2011, ETA officials said they used most of the amount available for transfer because there was little time remaining in the program year to implement spending cuts and negotiate contract modifications. However, in program year 2012, ETA officials said they used transferred funds to a lesser extent because they believed that making permanent spending cuts would better position the program to absorb reductions under sequestration in the following program year. In addition, to avoid adversely affecting the safety of Job Corps center facilities, ETA officials said they decided not to transfer funds intended for facility maintenance and improvements to program operations again in program year 2012 after doing so the previous year. In selecting spending cuts in program years 2011 and 2012, ETA considered four factors: (1) the potential effect on students already enrolled in Job Corps, (2) potential dollar savings, (3) the implementation time frame, and (4) equity across contractors. For example, to minimize adverse effects on students already enrolled in Job Corps, ETA officials said they reduced biweekly stipends and transition pay—which students receive upon graduation—only for new students who entered the program on or after November 1, 2012. In addition, ETA officials told us they considered reducing student enrollment at seven Job Corps centers in program year 2012, but decided against this approach because it would not have generated sufficient savings before the end of the program year, and because it would have disproportionately affected certain contractors. Instead, ETA officials said they decided to implement a third temporary enrollment suspension because they believed it would more quickly and equitably achieve sufficient savings. ETA officials said they also considered all 38 recommendations received from internal workgroups and other stakeholders, and implemented 21 of them. In program year 2011, ETA received recommendations from the Job Corps Cost-Effectiveness Workgroup before it projected the funding gap. In program year 2012, ETA solicited recommendations for spending cuts by convening two national workgroups in the areas of health care and staffing, and asked contractors and the USDA Forest Service to nominate participants. In addition to the recommendations submitted by these workgroups, ETA also received several unsolicited recommendations from the National Job Corps Association and two center contractors, which offered alternatives to the third enrollment suspension in program year 2012. ETA officials told us that they implemented 21 of the 38 recommendations submitted by these stakeholder groups (appendix II provides a list of spending cuts implemented by ETA). ETA officials said they did not implement the remaining 17 recommendations for various reasons including potential adverse effects on students already enrolled; lack of evidence of savings; and, lack of timely savings (appendix III provides a list of recommendations not implemented by ETA). 
For example, officials said they decided not to implement a recommendation to conduct a staff compensation survey because it would have increased costs. While ETA considered recommendations from stakeholders in selecting spending cuts, it implemented a third enrollment suspension from late January to late April 2013 despite concerns raised by stakeholders. For example, the National Job Corps Association urged ETA to reconsider the planned suspension, calling it a drastic step that would compromise the mission of Job Corps. Similarly, a center contractor raised concerns that the suspension would “significantly hurt a large number of students.” In addition, more than 70 members of Congress expressed concerns about the suspension, stating that it would not only be detrimental to students, but would result in layoffs of Job Corps center staff. However, ETA officials said they decided to implement this suspension because they believed it would quickly and equitably achieve sufficient savings. ETA officials also noted that, as program year 2012 progressed, they had fewer options for spending cuts that would achieve sufficient savings. Job Corps center contractors’ corporate and center staff, and outreach and admissions contractors, told us that the timing of ETA’s internal notices regarding spending cuts in program years 2011 and 2012 in some cases created challenges for staff. For example, in two cases, ETA issued a notice on a Friday that required all contractors to submit a response by the following Monday or Tuesday. Staff at three of eight centers we visited said they or corporate staff worked over the weekend to prepare responses, such as revised spending plans. Our review of ETA’s internal notices regarding spending cuts found that 11 of 19—8 of which were issued in program year 2012—provided all contractors, including center staff, with short notice of program changes or a short time frame in which to respond, or both. Specifically, seven notices provided contractors 3 or fewer business days notice of program changes; two required contractors to provide a response within 3 business days; and two notices did both. ETA officials said the timing of their communications to contractors was appropriate, given their oversight role and the time constraints they faced, and noted that they discussed one program change with contractors before the related notice was issued. They also emphasized that the internal notices we reviewed were not contract modifications, which are the legally binding changes to the terms of a contract. However, officials acknowledged that contractors received these internal notices before they received contract-related communications such as contract modifications. In our review of the 11 internal notices, we found that 2 informed contractors of upcoming contract modifications, but both of the notices stated that program changes were effective immediately. In addition, Job Corps center contractors’ corporate and center staff, and outreach and admissions contractors, told us that ETA’s internal notices sometimes lacked information they needed to effectively implement changes and communicate them to students and community partners. For example, staff from 12 of the 15 contractors and centers we interviewed told us that the internal notices did not include the total amount of the projected funding gaps or how long the spending cuts would last. 
Five of these contractors said this lack of information made it difficult to answer questions from students and organizations they had formed partnerships with in their communities. However, ETA officials said that they chose not to share the total amount of the projected funding gaps with contractors because it could have affected ETA's ability to maximize savings during contract negotiations. Further, three of the four outreach and admissions contractors we interviewed told us that a partial stop-work order issued in January 2013—which directed each contractor to immediately stop outreach and admissions activities and reduce its staff to one—lacked sufficient instructions. Specifically, two contractors said that the order did not provide guidance on whether staff could continue to work to accomplish required tasks, including notifying pending enrollees of the enrollment suspension and securing offices and student files. Two contractors said that the order did not provide sufficient guidance on staff reductions, such as whether staff who were laid off could receive severance pay or how long the reductions would last. Three contractors noted that regional officials were unable to provide clear answers to their questions. Due to this lack of guidance, two contractors decided not to lay off staff until they had completed the required tasks, despite uncertainty about whether they would be reimbursed for those costs. While these contractors told us that they needed more information on how to implement these changes, the partial stop-work order included the minimum content required by federal regulations. Specifically, the partial stop-work order included: (1) a description of the work to be suspended; (2) instructions concerning the contractor's issuance of further orders for materials or services; (3) guidance to the contractor on action to be taken on any subcontracts; and (4) other suggestions to the contractor for minimizing costs. In addition, ETA officials said that the level of information they provided to contractors was appropriate, given ETA's oversight role, and noted that contractors are responsible for internal personnel decisions such as paying benefits and terminating or laying off staff. Officials also noted that they negotiated the terms and conditions of the partial stop-work order separately with each contractor, so it would not have been appropriate for ETA to issue more detailed instructions to all contractors. While ETA is subject to regulatory requirements regarding contract-related communications, it also has internal guidance on the minimum content of the internal notices it uses to inform contractors about program changes. Specifically, ETA has templates requiring these notices to include information such as the purpose of the notice, an explanation of the program change, the effective or expiration date for the change, actions that contractors are required to take, and time frames for completing these actions. However, these templates do not specify the amount of notice that contractors should receive before program changes are expected to be implemented. According to federal internal control standards, federal agencies should ensure that pertinent information is distributed to the right people in sufficient detail and at the appropriate time to enable them to carry out their duties and responsibilities efficiently and effectively.
Given the challenges contractors identified, ETA’s internal guidance may not ensure that contractors receive sufficiently detailed information at the appropriate time to effectively communicate and implement program changes. Similarly, while DOL met legal requirements for notifying the House and Senate Committees on Appropriations of funding transfers within certain time frames, some members of Congress expressed dissatisfaction with the timing and completeness of DOL’s communications. For example, our review of letters exchanged between DOL and Congress in program years 2011 and 2012 found that some members of the Senate Committee on Appropriations asked why DOL had not notified them as soon as the first projected funding gap was identified in April 2012. DOL explained that because information was evolving, it delayed notification until it had complete and accurate information. In addition, in a June 2012 report, the Senate Committee on Appropriations stated that it needed to understand the circumstances that led to the projected funding gap in order to conduct proper oversight, and requested that DOL submit a detailed report, including the impact of the projected funding gap on Job Corps’ program year 2012 budget. DOL subsequently provided the report in July 2012. Further, in a January 2013 letter to DOL, more than 70 members of Congress expressed concerns about DOL’s plans to implement a third enrollment suspension, and noted that DOL had not responded to a previous request for the projected amount of savings associated with this suspension. When the congressional members did not receive a response in the time frame they requested, they sent a letter to the Administration, expressing frustration with DOL’s lack of attention and responsiveness. DOL subsequently provided a response in February 2013. DOL officials said that they try to be as timely as possible in responding to congressional requests, and noted that the formal communications we reviewed were only part of their communications with Congress. According to DOL officials, during the last 2 months of program year 2011 and in program year 2012, they also held at least 13 informal briefings and teleconferences with congressional staff about the projected funding gaps and actions to address them. The Workforce Innovation and Opportunity Act, which was enacted in 2014, requires DOL to provide more frequent and detailed reports to several congressional committees and subcommittees on Job Corps’ financial position. Specifically, beginning in January 2015, the Act requires DOL to report every 6 months on: (1) the status of the implementation of the DOL inspector general’s recommendations to improve internal controls over Job Corps’ funds and expenditures, (2) a description of any budgetary shortfalls and the reasons for them, and (3) a description and explanation of any contract expenditures that exceed the amount of the contract. After 3 years, the Act requires DOL to submit these reports on an annual basis for 2 additional years, unless Job Corps experiences a budgetary shortfall, which triggers additional reporting requirements. The three enrollment suspensions ETA implemented in program years 2011 and 2012 restricted access to Job Corps for all but a few types of applicants, which reduced the number of students who applied to and entered the program (new enrollees) in those program years. 
Specifically, in program year 2012, Job Corps had nearly a third fewer applicants and more than a quarter fewer new enrollees than it did in program year 2010. During the same time period, Job Corps’ average total enrollment dropped 12 percent (see fig. 2). The enrollment suspension implemented from January to April 2013 also restricted access for other groups, including approved applicants who had not yet started in Job Corps and enrolled students who were on medical or administrative leave. For example, some youth who had been approved for the program were unable to enroll until after the suspension was lifted. Students at two centers who were in this group told us that they had to wait 4 to 6 months to enroll in Job Corps, and in some cases took temporary jobs while they waited. One outreach and admissions contractor we interviewed told us that they had about 130 applicants ready to enroll before the enrollment suspension, but only 30 enrolled once the suspension ended. In addition, the 2013 enrollment suspension restricted some students who were already enrolled but were on medical or administrative leave from returning to the program until the suspension was lifted. While ETA allowed applicants who were homeless, runaways, or in foster care to enroll in Job Corps during the 2013 enrollment suspension, fewer were enrolled relative to previous years. According to ETA data, 40 percent fewer youth who were homeless, runaways, or in foster care entered the program during January to April 2013 compared to the same time period of the prior year. ETA officials said that fewer of these applicants applied during the enrollment suspension but stated that the acceptance rate for these applicants did not change. However, two outreach and admissions contractors told us that the restrictions imposed on them through the partial stop-work order made it difficult to enroll these applicants. For example, they said that the travel restrictions made it difficult to assist and maintain contact with homeless, runaway, or foster care applicants throughout the application and enrollment process. Additionally, these outreach and admissions contractors said that ETA established a new rule that required verifications of homelessness to be submitted to and approved by the national Office of Job Corps. The 2013 enrollment suspension also had subsequent effects on student recruitment and enrollment. While outreach and admissions contractors were allowed to begin operating again shortly before the end of the enrollment suspension, ETA did not allow them to immediately return to full staffing levels. Three outreach and admissions contractors told us that most of their former staff did not return so they had to recruit, hire, and train new staff. Besides staffing challenges, four outreach and admissions contractors told us that they had to reestablish partnerships with community organizations and build a new applicant pool since they were not allowed to collect applications during the suspension. Outreach and admissions contractors also told us that because of the smaller applicant pool and the push from ETA to fill centers quickly with limited staffing levels, they have had difficulties meeting enrollment goals. Although ETA lowered its planned enrollment goals for Job Corps after the enrollment suspension was lifted, it did not come within 10 percent of its enrollment goal until December 2013—8 months after the 2013 enrollment suspension was lifted (see fig. 3). 
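To illustrate the recovery benchmark just described, the short calculation below shows how one might check whether monthly enrollment has come within 10 percent of the planned goal. It is a minimal sketch, not ETA's reporting method: the monthly figures are hypothetical, and it assumes that coming "within 10 percent" of the enrollment goal means actual enrollment falls no more than 10 percent below the planned level.

# Hypothetical monthly figures for illustration only; not ETA data.
monthly_enrollment = [
    ("May 2013", 34_000, 27_200),   # (month, planned goal, actual enrollment)
    ("Aug 2013", 34_500, 29_900),
    ("Dec 2013", 35_000, 32_400),
]

for month, goal, actual in monthly_enrollment:
    shortfall = (goal - actual) / goal   # fraction below the planned goal
    print(f"{month}: {shortfall:.1%} below goal; "
          f"within 10 percent: {shortfall <= 0.10}")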
Although ETA tried to minimize adverse effects on students who were already enrolled in Job Corps, some of the spending cuts reduced students’ benefits and temporarily limited their training options. Because much of Job Corps’ budget is dedicated to student-related costs, ETA officials said that they had to make cuts that would affect new and current students to address the program’s financial challenges. While some of the cuts affected only newly enrolled students, others affected both new and current students. ETA temporarily restricted some training options that would have given students the opportunity to stay in Job Corps longer and earn additional credentials. All students participate in training for at least one trade, but some take an additional trade, participate in advanced training, or take college courses. Students at two centers told us that they anticipated taking a second trade because they were unable to take their preferred trade when they first enrolled. In an effort to reduce overall enrollment during the 2013 suspension, however, ETA prohibited students from participating in advanced training or entering the college program from January through April 2013. In addition, ETA officials told us that they limited the number of students who could pursue a second trade. According to ETA data, there has generally been a decline in the number of students who have completed additional training over the last 3 program years (see fig. 4). The number of students who completed the college program declined 40 percent from program year 2012 to program year 2013. In addition, ETA reduced financial assistance available to students, making it more difficult for them to support themselves while in Job Corps and after they complete the program. Biweekly stipends for newly enrolled students were reduced from a maximum of $50 to $35, and the maximum clothing allowances for all students were cut by more than a third. Staff at four centers and four groups of students we interviewed said that these reductions made it more difficult for students to purchase necessities such as toiletries and clothing. One staff member noted that students come into Job Corps with very little and, while the amount they receive is small, the financial resources that Job Corps provides makes a big difference in the students’ lives. Additionally, ETA reduced the amount that newly enrolled students receive in transition pay upon graduation. The purpose of this funding is to provide students with an incentive to complete the program, and also provide them with the resources they need to live independently after they graduate from Job Corps. Center staff, regional officials, and students told us that reduced transition pay would not be enough for students to live independently after Job Corps. For example, students at one center told us that the reduced transition pay would not be sufficient for a deposit on an apartment. ETA also limited students’ access to health and wellness services by reducing the number of hours of most health professionals at each center. Some students we spoke with at four centers said they had difficulty getting appointments to see a doctor or dentist. Staff at one center told us that if a doctor was not available when emergency situations arose, students were taken to the hospital and the related expenses were paid for by the center contractor. Students in need of drug and alcohol counseling faced similar issues with access to services. 
At one center we visited, a drug and alcohol counselor noted that due to such cuts, the center is now less capable of providing for the needs of students who have substance abuse issues. Further, ETA implemented a hiring freeze, increased the student-teacher ratio, and cut funding for student recreation, which affected students’ experiences while in Job Corps. Cuts to center staff created a sense of instability within the program, according to students we spoke with. Students at four centers said they faced overcrowded classrooms, and some noted other negative impacts that staff layoffs had on the program such as the loss of experienced staff that built strong relationships with students and helped set their goals. Students at one center also told us that of all the cuts, those to recreational activities had the most significant impact on them. Recreational activities provide students with opportunities for social development when they are not in the classroom, according to officials in one region. ETA has undertaken several initiatives to improve Job Corps’ financial management. In response to the DOL inspector general’s recommendations, ETA has implemented several initiatives to improve the tracking and reporting of Job Corps’ financial information. According to ETA officials, they have developed guidelines to identify potential financial risks related to contractors’ spending levels. Specifically, officials now use monthly financial reports to identify and investigate instances where contractors’ expenses are 5 percent higher than their monthly budgets or 1 percent higher than their program year budgets. Also, ETA officials said that they have formalized their monthly reporting on the status of Job Corps’ contract obligations and funds to Job Corps, ETA, and DOL management, and have developed reports to reconcile their financial data recorded in various systems. ETA officials have also developed standard operating procedures to reflect current financial processes and systems, and defined staff roles and responsibilities. Beyond these steps to address the DOL inspector general’s recommendations, ETA has initiatives underway to better align Job Corps’ costs with its appropriations. Specifically, ETA is currently making limited increases to the number of students who can be served by certain Job Corps centers, while seeking to ensure that planned enrollment levels align with Job Corps’ appropriations. As part of this initiative, officials told us they are using a phased approach to increase student enrollment. ETA has assessed enrollment levels at the 65 highest-performing Job Corps centers to determine whether they can serve additional students without significantly increasing costs. ETA officials also said that they have begun assessing enrollment levels at the next 32 highest-performing centers to determine whether increases can be made, but did not provide any specific time frames for when this would be completed. In another initiative to improve financial management of the Job Corps program, ETA is monitoring the total value of Job Corps’ contracts in an effort to ensure they align with the program’s appropriations. Additionally, in February 2014, ETA created the Job Corps Financial Management Workgroup to facilitate communication about financial management challenges between national officials and contractors. Specifically, the workgroup is examining challenges and potential solutions related to tracking Job Corps funding across the program year and contract year, among other issues. 
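The expense-monitoring guideline described above, which flags contractors whose reported expenses run more than 5 percent over their monthly budgets or more than 1 percent over their program year budgets, lends itself to a simple automated check. The sketch below is an illustration only, not ETA's actual reporting system; the contractor names and dollar amounts are hypothetical, and only the two thresholds come from the guideline officials described.

# Hypothetical contractor figures for illustration only; not ETA data.
# Flags expenses more than 5 percent over the monthly budget or more than
# 1 percent over the program year budget, mirroring the guideline above.
contractors = [
    # (name, monthly budget, monthly expenses, program year budget, program year expenses to date)
    ("Center A", 1_000_000, 1_080_000, 12_000_000, 11_900_000),
    ("Center B",   750_000,   760_000,  9_000_000,  9_150_000),
    ("Center C",   500_000,   495_000,  6_000_000,  5_800_000),
]

for name, m_budget, m_spent, py_budget, py_spent in contractors:
    over_month = m_spent > 1.05 * m_budget
    over_year = py_spent > 1.01 * py_budget
    if over_month or over_year:
        print(f"{name}: flag for review (monthly overage: {over_month}, "
              f"program year overage: {over_year})")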
Furthermore, ETA officials told us that they are assessing the financial implications associated with current and future program changes. For example, officials said that as they have made limited increases to centers’ enrollment levels, they have modified contracts—and contractors have modified their budgets—accordingly. Moving forward, ETA officials said that they will use monthly reports to identify and investigate instances where contractors’ expenses are higher than their revised budgets, as described above. In addition, ETA officials acknowledged that closing low-performing centers could potentially increase costs in the short term. The officials added that they have already begun to identify potential costs, such as those associated with breaking leases or transferring current students to other Job Corps centers. Officials noted that they plan to take an incremental approach to spending any savings generated by closing centers in an effort to help ensure that they only spend savings as they are realized. While these initiatives address important financial management challenges for the Job Corps program, the extent to which they will help ETA improve its financial management of the program in the future is not yet known. While the financial challenges ETA faced in program years 2011 and 2012 were difficult for the Job Corps program, ETA’s response to them revealed issues that may have relevance for any similar situations in the future. As the program moves forward, it will be important for ETA to ensure that contractors, including center staff, receive the information they need at the appropriate time to effectively communicate and implement program changes. Given the challenges contractors identified, ETA’s current guidance for its internal notices may not be sufficient to ensure that they receive information that is timely and of sufficient detail. Without such information, contractors may continue to face challenges effectively communicating and implementing program changes, such as ongoing changes to center enrollment levels and future closures of low-performing centers. It will also be important for ETA to continue its efforts to improve Job Corps’ financial management. The internal control deficiencies identified by the DOL inspector general were significant and wide-ranging. In addition, the spending cuts implemented in program years 2011 and 2012 had adverse effects on students and others. While ETA is working to close the inspector general’s recommendations, it is too early to determine the extent to which the steps ETA has taken will help improve its financial management of the program in the future. To enhance communication with contractors about Job Corps program changes, we recommend that the Secretary of Labor direct the Assistant Secretary for Employment and Training to review the sufficiency of ETA’s guidance for internal notices—including Program Instruction Notices, Policy and Requirements Handbook Change Notices, and Information Notices—to ensure that contractors are provided with adequate notification of program changes before they are expected to be implemented, and an adequate level of information to assist them in carrying out their responsibilities. We provided a draft of this report to DOL for review and comment. We received written comments from DOL, which are reproduced in appendix IV. In addition, DOL provided technical comments that we have incorporated in the report as appropriate.
DOL concurred with our recommendation for ETA to review the sufficiency of its guidance for internal notices provided to contractors. Specifically, DOL acknowledged the importance of ensuring that Job Corps contractors are provided adequate notification of program changes before their expected implementation. To address this, DOL stated that it would review the sufficiency of Job Corps guidance for internal notices, and execute and distribute any contractual actions, such as modifications, in a timely manner. We are sending copies of this report to appropriate congressional committees, the Department of Labor, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of this report were to examine (1) how ETA selected the measures it implemented to address Job Corps’ financial challenges in program years 2011 and 2012, (2) the timeliness and completeness of ETA’s communications to contractors, including center staff, and Congress, (3) the effects ETA’s spending cuts had on applicants and students, and (4) the steps ETA has taken since to improve Job Corps’ financial management. To address our objectives, we used a variety of methodologies. Specifically, we reviewed agency documentation and interviewed ETA and Job Corps officials to identify the measures ETA implemented to address Job Corps’ financial challenges in program years 2011 and 2012, and how they were selected. We also reviewed summary-level financial data from DOL’s New Core Financial Management System to identify the dollar amount associated with ETA’s spending cuts. Through our interviews with knowledgeable agency officials and our review of system documentation, we determined these data to be sufficiently reliable for our purposes. To provide information on ETA’s communications, we reviewed the agency’s internal notices about program changes sent to Job Corps contractors’ corporate office and center staff, and outreach and admissions contractors, as well as DOL’s communications with Congress. We also assessed ETA’s internal communications guidance using GAO’s standards for internal control in the federal government. In addition, we interviewed staff from all six Job Corps regional offices, and conducted site visits to eight Job Corps centers, which were selected to reflect a mix of center operators, geographic diversity, and variations in center size. Table 3 provides more information on our site visit selections. During our visits, we interviewed select Job Corps contractors’ corporate office and center staff, and outreach and admissions contractors to understand their involvement in ETA’s process for selecting the spending cuts implemented. In total, we interviewed 10 corporate office staff, 86 center staff, and 19 outreach and admissions staff. We also interviewed 46 students at six of eight Job Corps centers we visited to identify how they were affected by the spending cuts. We generally interviewed those students who were enrolled in Job Corps in program year 2012 because that is the time period in which ETA implemented its longest enrollment freeze and most of the spending cuts. 
Furthermore, we reviewed Job Corps’ applicant, enrollment, and student outcome data generated by the Job Corps Data Center from program years 2010 through 2013 to identify how ETA’s spending cuts affected applicants and students. We began our analysis with program year 2010 because it preceded the program years in which Job Corps faced its financial challenges—program years 2011 and 2012. We assessed the reliability of the data by reviewing existing documentation and interviewing knowledgeable agency officials. Based on these efforts, we found the data to be sufficiently reliable for our purposes. To identify the steps ETA has taken to improve Job Corps’ financial management, we reviewed relevant agency documentation and interviewed national and regional ETA and Job Corps officials, as well as contractors and center staff. We also interviewed DOL’s inspector general to identify ETA’s progress in implementing its May 2013 recommendations. In addition, we reviewed standards for internal control, as well as relevant federal laws and regulations. Of the 24 Job Corps spending cuts ETA implemented in program years 2011 and 2012, 18 were suggested by ETA management and 6 were suggested by internal workgroups and other stakeholders (see table 4). These 6 spending cuts originated from 14 individual recommendations that ETA received from internal workgroups and other stakeholders.

Appendix III: Stakeholder Recommendations Not Implemented by ETA

Job Corps Cost-Effectiveness Workgroup:
Participate in Defense Logistics Agency nationwide direct supply natural gas program. Eliminate the requirement for the provision of on-site dental services at Job Corps centers. The national office should encourage and approve waivers for certain staffing requirements if an operator’s proposed approach is likely to be more cost-effective without jeopardizing performance or services. Disallow regional offices from imposing additional staffing requirements on Job Corps centers beyond national guidelines. Conduct a new Job Corps staff compensation survey. Develop model staffing plans that would identify appropriate staff/student ratios for all Job Corps centers. Modify the goals under which outreach and admissions contracts are rewarded for performance to reflect actual center needs. Eliminate requirements for the provision of post-placement career transition services in order to focus resources on initial placements. Track college program slots and off-center training slots for accountability purposes. Switch Job Corps operations contracts to a 2-year base with five option years without precluding incumbents from bidding on contracts per year and requiring strict standards for the award of option years.

Job Corps Staffing Cost-Efficiency Workgroup:
Evaluate overall staffing levels at centers with higher than average staff to student ratios (e.g., over 35 percent). Compare current staffing levels and positions to those of 10 years ago to determine if the gradual increase in overall staffing is justified by an increase in program requirements. Conduct an annual slot utilization study at each center to identify the extent to which student slot utilization supports staffing. Evaluate the need for the off-center training and the college program coordinator position on a case by case basis. Consider the extent of program slot utilization compared to the number of staff assigned. Eliminate positions, or assign responsibilities as collateral duties where caseloads do not support dedicated staffing. Review the need for and function of deputy director positions at centers with a contracted enrollment of less than 500 students. Consider changing the deputy director position to a program director, or eliminating deputy positions that are designated as developmental positions to prepare staff for center director positions. Evaluate the work-based learning program to determine if current staffing levels could be reduced in light of a continual shortage of work-based learning sites and new stringent program (safety) requirements. Redesign the work-based learning program so that paid work-based learning students can provide some routine functions that have typically been provided by staff (e.g., food service helper, maintenance helper, custodian, groundskeeper, utility worker). Reduce staffing levels for those functions by attrition.

Center Contractor:
Consider the immediate transfer of the allowable $15 million from the Construction, Rehabilitation, and Acquisition (CRA) account to the Operations account to avoid more deleterious effects of the proposed center reductions.

Andrew Sherrill, (202) 512-7215 or sherrilla@gao.gov. In addition to the contact listed above, individuals making key contributions to this report were Mary Crenshaw (Assistant Director), Ashanta Williams (Analyst-in-Charge), Jessica Botsford, Caitlin Croake, Helen Desaulniers, Danielle Giese, Carol Henn, John Hocker, Paul Kinney, Kathy Leslie, Julia Matta, Jean McSween, Sheila McCoy, Megan Mumford, Mimi Nguyen, Jerome Sandau, Kathleen van Gelder, and Bill Woods. | Job Corps—funded at about $1.6 billion in program year 2013—is the nation's largest residential, educational, and career training program for economically disadvantaged youth. In program years 2011 and 2012, ETA projected that Job Corps' costs would exceed its appropriations, and took action to resolve these gaps. In May 2013, DOL's inspector general reported internal control weaknesses and recommended improvements; however, questions remained about the funding transfers and spending cuts ETA implemented. GAO was asked to review these measures. This report examines (1) how ETA selected the measures it implemented to address Job Corps' financial challenges, (2) the timeliness and completeness of ETA's communications to contractors, including center staff, and Congress, (3) how spending cuts affected applicants and students, and (4) steps ETA has taken since to improve Job Corps' financial management. GAO visited 8 of the 125 centers—selected based on their geographic diversity and other factors—and interviewed staff and students; reviewed ETA's internal notices to contractors and DOL's communications with Congress, and assessed ETA's guidance for internal notices using federal internal control standards; and analyzed ETA's enrollment data from program years 2010 through 2013. In selecting funding transfers and spending cuts to address Job Corps' projected funding gaps in program years 2011 and 2012, the Department of Labor's (DOL) Employment and Training Administration (ETA) considered various factors, including the potential effects on students and recommendations from stakeholders. After considering these factors, ETA used $38 million in funding transfers and implemented $75 million in spending cuts, which included several temporary suspensions of new enrollments, temporary cuts to training, and permanent cuts to student benefits and services, and administrative costs.
Job Corps' contractors, including center staff, said ETA's internal notices to implement spending cuts in program years 2011 and 2012 were sometimes not timely and complete. For example, in some cases, ETA issued a notice on a Friday that required a response by Monday or Tuesday. Staff at three of the eight centers GAO visited said they or other staff worked over the weekend to prepare responses, such as revised spending plans. GAO's review of ETA's internal notices found that 11 of 19 gave contractors 3 business days or fewer to implement a program change, respond, or both. Contractors also said ETA's notices sometimes lacked information they needed to effectively implement changes and communicate them to students and community partners, such as how long cuts would last. ETA officials said their communications were appropriate, given their oversight role and time constraints. Although ETA has guidance on the content of internal notices to contractors, it does not specify time frames for providing notice of program changes. Given challenges contractors identified, ETA's guidance may not ensure that, in accordance with federal internal control standards, contractors receive sufficiently detailed information at the appropriate time to effectively communicate and implement changes. With regard to Congress, while DOL met requirements for notification of funding transfers, some members sent letters to DOL expressing dissatisfaction with the timing and completeness of DOL's communications. The Workforce Innovation and Opportunity Act, subsequently enacted in 2014, requires DOL to provide more frequent and detailed reports to Congress on Job Corps' financial position. ETA's spending cuts reduced the number of applicants and new enrollees, limited some training opportunities for students, and had other adverse effects. According to ETA data, Job Corps applicants decreased by about a third, from 79,567 in program year 2010 to 53,725 in program year 2012, and new student enrollments decreased by about a quarter, from 56,171 to 40,792 over the same time period. The final enrollment suspension, along with other factors, such as a reduction in outreach and admissions staff, also had subsequent effects on student recruitment. For example, after the enrollment suspension ended, it took Job Corps 8 months to reach over 90 percent of its planned enrollment goal. In response to recommendations by DOL's inspector general for internal control improvements, ETA has implemented several initiatives to improve the tracking and reporting of Job Corps' financial information. ETA also has initiatives underway to help ensure that Job Corps' costs and appropriations are aligned and to assess the financial implications associated with program changes. While these are all important steps, it is too early to determine the extent to which they will help ETA improve its financial management of the program in the future. GAO recommends that ETA review the sufficiency of its guidance for internal notices about program changes. ETA concurred with this recommendation. |
For fiscal year 2011, HUD’s three largest offices administered programs that accounted for about 93 percent of HUD’s total budgetary resources of approximately $134.3 billion. Specifically, HUD’s Office of Public and Indian Housing (PIH) had total budgetary resources of approximately $71.2 billion (about 53 percent), the Office of Housing (OH) had about $32.9 billion (approximately 24 percent), and the Office of Community Planning and Development (CPD) had about $21.4 billion (approximately 16 percent). (See fig. 1.) The remaining 7 percent of HUD’s total budgetary resources included the Government National Mortgage Association (Ginnie Mae); the Offices of Policy Development and Research, Fair Housing and Equal Opportunity, and Lead-Based Paint and Poisoning Prevention; and HUD’s management and administration including financial operations across all of HUD’s programs. PIH-administered programs are intended to ensure safe, decent, and affordable housing for low-income families; create opportunities for self-sufficiency and economic independence; reduce improper payments; and support mixed-income developments to replace distressed public housing. These programs include grants and subsidies to public housing authorities (PHA) nationwide to provide affordable housing opportunities for about 3.3 million low-income families. Section 8 of the United States Housing Act of 1937, as amended, includes programs for tenant-based vouchers and project-based rental assistance. During fiscal years 2007 through 2011, the number of PHAs increased from 3,100 to 4,150. The OH-administered programs include FHA, which insures mortgages and loans made by FHA-approved lenders for single and multifamily housing units intended to serve borrowers who are not being adequately served by the conventional market, including first-time homebuyers, minorities, low-income families, and residents of underserved communities. During the recent mortgage crisis, larger segments of the market began using FHA-insured loans, resulting in the dollar amount of these mortgage loans more than doubling from about $439 billion in fiscal year 2007 to over $1 trillion in fiscal year 2011. (See fig. 2.) CPD provides funding mainly through the Community Development Block Grants (CDBG) Program, which is the most widely available source of federal assistance to state and local governments for neighborhood revitalization, housing rehabilitation activities, and economic development. Because of the funding mechanism that the CDBG Program already has in place to provide federal funds to states and localities, the program is widely viewed as a convenient tool for disbursing large amounts of federal funds to address emergencies. Over the past two decades, CDBG has repeatedly been adapted as a vehicle to respond to federal disasters, such as floods, hurricanes, and terrorist attacks—including being used to facilitate disaster relief funds in the wake of Hurricanes Katrina and Rita in 2006 and 2007. In addition, HUD programs are used to deliver funds for activities associated with the American Recovery and Reinvestment Act of 2009 (Recovery Act), which is one of the federal government’s key efforts to stimulate the economy in response to the recent economic crisis.
The goals of the Recovery Act include helping preserve and create jobs, promoting economic recovery from the recent economic recession, providing investments to increase economic efficiency by spurring technological advances, and investing in infrastructure to provide long-term economic benefits. Under the Recovery Act, HUD received $13.6 billion in appropriations to be used to fund several housing program areas, and the OIG received an additional $15 million to its annual appropriation for the oversight and audit of HUD programs, grants, and activities funded by the act. The Cabinet-level OIGs, including the HUD OIG, were established by the IG Act, which, among other things, requires each OIG to report specific accomplishments in semiannual reports provided to the Congress. This includes the number of audit reports issued and the questioned costs, unsupported costs, and funds to be put to better use identified by the OIGs’ audits. As defined by the IG Act, questioned costs include either alleged violations of laws, regulations, contracts, grants, or agreements governing the expenditure of funds; costs not supported by adequate supporting documentation; or the expenditure of funds for an intended purpose that was unnecessary or unreasonable. In addition, unsupported costs are defined as costs that do not have adequate supporting documentation, and funds to be put to better use are inefficiencies identified by the OIG in the use of agency funds. The OIGs also include investigative accomplishments in their semiannual reports. These can include monetary accomplishments such as fines and restitutions resulting from settlements or court-ordered actions stemming from illegal activities investigated by the OIGs, and nonmonetary accomplishments such as cases opened, convictions, and administrative actions. For the 5-year period from fiscal year 2007 through 2011, the HUD OIG had total budgetary resources that were consistently fifth highest out of the 16 Cabinet-level OIGs. (See table 1.) Over the same 5-year period, the total budgetary resources for all 16 OIGs increased from about $1.5 billion to almost $2.2 billion, or about 45 percent. In comparison, the HUD OIG’s budgets increased approximately 19 percent, from about $121 million to about $144 million, or less than half of the percentage increase for the total Cabinet-level OIG budgets. When comparing the full-time-equivalent (FTE) staff of the Cabinet-level OIGs during the same period, the HUD OIG was fifth in fiscal year 2011. In prior years, the HUD OIG ranked fourth, immediately ahead of the Department of Homeland Security OIG during fiscal years 2007 through 2009, and immediately behind the same OIG during fiscal years 2010 and 2011. (See table 2.) The HUD OIG increased its level of FTEs by about 13 percent during the 5-year period, a similar but somewhat smaller increase than the approximately 17 percent average increase in FTEs for all the Cabinet-level OIGs. During each year of the 5-year period, from fiscal years 2007 through 2011, the HUD OIG’s reported monetary accomplishments, compared with its total budgetary resources, resulted in estimated annual returns on each total budgetary resource dollar received. These returns ranged from a low of $10.73 in fiscal year 2010 to a high of $18.70 in fiscal year 2009. In addition, HUD’s OIG reported that total monetary accomplishments from audits, inspections, and investigations over the 5-year period were approximately $9.2 billion.
When compared to the HUD OIG’s total budgetary resources for the entire 5-year period of $675 million, the estimated average return for each total budgetary resource dollar received was about $13.62. (See table 3.) When compared to the average 5-year return on total budgetary resource dollars for all 16 Cabinet-level OIGs, HUD OIG’s return was again similar, but somewhat higher. Specifically, the 16 Cabinet-level OIGs reported monetary accomplishments in their semiannual reports that totaled about $106.2 billion over the same period. When compared to their combined total budgetary resources of approximately $9.547 billion, these OIGs had an overall estimated average return on investment of about $11.12 for each total budgetary resource dollar received. (See table 4.) The OIGs’ combined estimated average return on total budgetary resource dollars during this 5-year period ranged from a low of $7.54 in fiscal year 2007 to a high of $14.33 in fiscal year 2011. The HUD OIG reported providing the majority of audits and inspections in HUD’s three largest program offices during fiscal years 2007 through 2011. Specifically, the OIG reported a total of 905 audit and inspection reports, which included reviews of the efficiency and effectiveness of HUD’s management and program operations, and audits of HUD’s financial statements during the 5-year period. Of these reports, 810—about 90 percent—addressed HUD programs administered by PIH, OH, and CPD. The OIG reported coverage of PIH-administered programs with a total of 339 audit and inspection reports, which was the greatest number of reports in any of HUD’s program offices. (See fig. 3.) The emphasis on the PIH programs is a result of the OIG’s concerns with the overpayment of housing assistance in the Section 8 programs. The OIG addressed CPD-administered programs through 270 reports with an emphasis on the oversight of funding that goes to nonprofit organizations that have historically not participated in federal programs and may lack the capacity to comply with all grant requirements. Also, according to OIG officials, the OIG’s 201 audit and inspection reports of OH-administered programs are a result of the OIG’s recognition that FHA’s share of mortgage originations has been at an all-time high over the past few years. The HUD OIG also reported opening a total of 6,149 investigative cases, with most of them providing investigative coverage of HUD’s three largest program offices during the 5-year period. Specifically, of these cases, 5,841 (about 95 percent) were in programs administered by PIH, OH, and CPD. The OIG opened 2,944 investigative cases to address alleged fraud in PIH-administered programs, which included rental assistance programs and the administration of public housing authorities. (See fig. 4.) In addition, through expanded mortgage fraud initiatives to address the unprecedented increase in the number of new and refinanced FHA loans, the OIG reported opening 2,018 investigative cases in OH-administered programs. The OIG also reported opening 879 investigative cases into alleged public corruption within the management of housing projects as well as the administration of grant programs funded to state and local governments through CPD-administered programs.
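The return-on-investment comparison above is straightforward arithmetic: reported monetary accomplishments divided by total budgetary resources for the same period. A minimal Python sketch, using only the 5-year totals cited in this report, shows the calculation; it is illustrative and is not drawn from the OIGs' own reporting systems.

```python
# Illustrative sketch only. It reproduces the report's return-per-dollar
# arithmetic: reported monetary accomplishments divided by total budgetary
# resources over the same period. Figures are the 5-year totals cited above.

def return_per_budget_dollar(monetary_accomplishments, budgetary_resources):
    """Estimated return for each total budgetary resource dollar received."""
    return monetary_accomplishments / budgetary_resources

hud_oig = return_per_budget_dollar(9.196e9, 675e6)              # about $13.62
all_cabinet_oigs = return_per_budget_dollar(106.2e9, 9.547e9)   # about $11.12
print(f"HUD OIG: ${hud_oig:.2f}; all 16 Cabinet-level OIGs: ${all_cabinet_oigs:.2f}")
```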
The HUD OIG’s total monetary accomplishments of approximately $9.196 billion reported during fiscal years 2007 through 2011 include about $6.94 billion in potential savings from audit and inspection reports, about $1.39 billion in monetary recoveries from investigations, and about $866 million from additional related investigative efforts. These include the monetary amounts of funds put to better use; questioned costs identified by audits and inspections; and investigative recoveries from fines, settlements, and restitutions. Of the almost $6.94 billion in reported potential monetary savings from audits and inspections, approximately $2.46 billion (about 36 percent) was in HUD’s three largest program offices. Of the remaining amount, about $4.45 billion (approximately 64 percent) was mostly from a financial control deficiency and not directly related to HUD’s large program offices. (See fig. 5.) The majority of this figure—approximately $4.27 billion (about 96 percent)—is related to ongoing significant deficiencies reported by the OIG in HUD’s financial process for reviewing outstanding obligations and recapturing amounts no longer needed to fund them. This process was reported by the HUD OIG as a significant deficiency in fiscal year 2011 and in prior years because it allowed invalid obligations to remain in HUD’s accounting records. The HUD OIG identified about $1.25 billion in monetary accomplishments over the 5-year period from audits and inspections of PIH-administered programs, particularly the Section 8 programs. These monetary accomplishments were from audits and inspections of tenant eligibility issues, the accuracy of rental assistance payments, the quality of housing, and the cost of administering the programs. Over the 5-year period, the OIG reported monetary accomplishments from audits and inspections of CPD-administered programs of approximately $835.7 million. These audits and inspections focused on the control systems in place, especially for subrecipients of HUD grant funds, to determine whether these controls provide the review and oversight necessary to ensure funds are spent on eligible activities and put to good use. Also, over the same period, the HUD OIG reported monetary accomplishments of about $381.3 million related to OH-administered programs. The HUD OIG’s audits and inspections target FHA lenders based on a number of high-risk indicators. In fiscal year 2010, the OIG conducted Operation Watchdog, which involved reviewing the underwriting of 284 mortgages. Of these mortgages, the OIG concluded that almost 50 percent never should have been insured and resulted in an estimated loss in excess of $11 million. The OIG recommended that HUD take administrative actions against each lender and that HUD develop and implement a risk-based selection of loans to verify that the loans met FHA requirements. The HUD OIG reported a total of about $1.392 billion in investigative recoveries during fiscal years 2007 through 2011. Approximately $1.2 billion (about 86 percent) of these recoveries were from investigations in OH’s housing programs administered by FHA. (See fig. 6.) The OIG reported mortgage fraud investigations into FHA’s programs as a continuing priority and reported working closely with the Federal Bureau of Investigation (FBI) to coordinate mortgage fraud initiatives.
OIG investigations focused on various frauds perpetrated by mortgage companies and brokers, title companies, loan officers, real estate agents, closing attorneys, appraisers, builders, and nonprofit entities. For example, a HUD OIG investigation found that HUD, Ginnie Mae, and other financial entities had realized losses in excess of $1.9 billion due to bank, wire, and securities fraud committed by the chairman of a former FHA-approved lender. The chairman was sentenced to 30 years in prison and ordered to forfeit $38.5 million. For investigative recoveries related to other HUD offices, the OIG reported about $94.5 million in PIH-administered programs, and investigations of CPD-administered programs resulted in about $63.4 million in investigative recoveries during the same 5-year period. Regarding nonmonetary accomplishments in HUD’s three largest program offices, the HUD OIG reported that its PIH-related investigative priorities include Section 8 rental assistance fraud committed by tenants and landlords, Section 8 administrators, and PHAs. An important part of the investigative efforts in this area included outreach by the OIG staff to meet with executive directors of housing authorities, provide training seminars for the identification of fraud, and develop liaisons for referrals. As a result, during fiscal years 2007 through 2011, the OIG reported participating in almost 3,000 convictions, pleas, and mistrials and 3,655 administrative and civil actions to address wrongdoing in PIH-administered programs. (See figs. 7 and 8.) The OIG also reported full-time participation on the FBI National Mortgage Team with mortgage fraud task forces at over 40 locations throughout the country. These activities led to increased investigations, as well as civil actions, to address fraud in HUD’s single-family programs. As a result, the OIG reported 1,440 convictions, pleas, and mistrials and 1,680 administrative and civil actions in OH-administered programs. With respect to recent increases in HUD oversight responsibilities, the HUD OIG reported investigative activities that focused on CDBG grants that included federal funding for hurricane and disaster assistance. As a result, the OIG reported a total of 308 convictions, pleas, and mistrials and 200 administrative and civil actions during the 5-year period. For example, a 2010 OIG investigation resulted in a Gulf Coast resident being charged in U.S. District Court with making false statements in the theft of government funds after receiving $300,000 in CDBG disaster assistance funds for damaged property that was not the recipient’s primary residence during Hurricane Katrina and therefore did not qualify for disaster assistance. In fiscal year 2009, the Congress provided the HUD OIG with additional funding of $15 million to provide oversight of Recovery Act funds through HUD’s programs. This resulted in an increased focus by the OIG on HUD’s Recovery Act responsibilities. The coverage and accomplishments reported by the OIG include a total of 177 audit and inspection reports that address the Recovery Act, as well as about $133.7 million in related monetary accomplishments during fiscal years 2009 through 2011. (See table 5.) The HUD OIG considers Recovery Act activities to be high risk with the potential for housing-related fraud because significant allocations of these funds are processed in an unusually short time frame.
The OIG’s Recovery Act audits and inspections included determining whether funds are awarded and distributed in a prompt, fair, and reasonable manner. In addition, the OIG’s audits are to help determine whether recipients and users of funds are transparent to the public, whether funds are used for the authorized purposes, and whether program goals are achieved. In written comments on a draft of this report, the HUD IG generally concurred with the report contents. We also received technical suggestions, which we incorporated as appropriate. We are sending copies of this report to the HUD IG and interested congressional committees. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions or would like to discuss this report, please contact me at (202) 512-2623 or davisb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II. In addition to the contact named above, Jackson Hufnagle, Assistant Director; Clarence Whitt; Francis Dymond; Jacquelyn Hamilton; Katherine Lenane; Arkelga Braxton; Jessica Butchko; Pierre Kamga; and Janaya Davis Lewis made key contributions to this report. | The joint explanatory statement for the Omnibus Appropriations Act, 2009, called for GAO to report on the resources of the HUD OIG in light of HUD's recently expanded roles and responsibilities. In response, GAO (1) compared the budgets, staffing levels, and monetary accomplishments of the HUD OIG to those of comparable OIGs during recent years, and (2) described the results of the HUD OIG's oversight of HUD's programs. GAO compared the budget and staff resources of the HUD OIG with those of other Cabinet-level department OIGs for the 5-year period from fiscal year 2007 through 2011. GAO also summarized the monetary accomplishments of the HUD OIG and other OIGs as reported in their semiannual reports to the Congress, and compared the results with their total budgetary resources to obtain a return on each budget dollar received. In addition, GAO summarized and described the HUD OIG's reported oversight coverage and monetary and nonmonetary accomplishments from audit and inspection reports and investigative cases that addressed HUD's largest program offices from fiscal year 2007 through 2011. During the 5-year period from fiscal year 2007 through 2011, the Department of Housing and Urban Development's (HUD) Office of Inspector General (OIG) had budget and staffing resources that were consistent with other OIGs, and a monetary return for each budget dollar that exceeded the average return for Cabinet-level OIGs. During the 5-year period, the HUD OIG had total budgetary resources ranging from $121 million to $144 million, consistently ranking it fifth among all Cabinet-level OIGs. However, while the total budgetary resources for all Cabinet-level OIGs increased by about 45 percent over the 5-year period, the HUD OIG's total budgetary resources increased by 19 percent. In terms of staffing, the HUD OIG's full-time-equivalent (FTE) staff consistently ranked in the top four or five of the Cabinet-level OIGs. Also, the HUD OIG's FTEs increased by about 13 percent during the 5-year period, as compared to about a 17 percent average increase for all Cabinet-level OIGs.
During the same 5-year period, the HUD OIG reported an estimated average dollar return of about $13.62 for each HUD OIG total budgetary dollar received, while the 16 OIGs in the Cabinet-level departments reported an estimated average dollar return of about $11.12 for each OIG total budget dollar received over the same period. The HUD OIG reported the majority of its audit, inspection, and investigative coverage in the three largest HUD program offices during fiscal years 2007 through 2011. Specifically, of the OIG's reported 905 total audit and inspection reports completed over the 5-year period, 90 percent addressed programs in HUD's Offices of Public and Indian Housing, Housing, and Community Planning and Development, which comprised about 93 percent of HUD's fiscal year 2011 total budgetary resources. Also, of the 6,149 investigative cases opened during this same period, almost 95 percent involved programs in these same offices. In addition, the OIG's reports and investigative cases focused on HUD's responsibilities related to recent increases in hurricane and disaster relief funds and HUD's implementation of the American Recovery and Reinvestment Act of 2009 (Recovery Act), administered through these HUD program offices. Also, of the almost $6.94 billion in reported potential monetary savings from the OIG's audits and inspections, approximately $2.46 billion (about 36 percent) was in the three largest HUD program offices. Of the remaining amount, approximately $4.45 billion (about 64 percent) was mostly from a financial control deficiency not directly related to the three large program offices, and an additional $28.4 million resulted from audits and inspections of hurricane relief and disaster assistance not reported as part of a specific HUD program. Of the OIG's reported $1.39 billion in investigative recoveries during the 5-year period, approximately $1.2 billion (about 86 percent) was related to mortgage fraud investigations in programs administered by HUD's Office of Housing. The OIG also reported an additional $866 million in potential savings from other investigative efforts throughout HUD's programs during the 5-year period. In addition, the OIG reported nonmonetary accomplishments primarily from investigations in HUD's three largest program offices, which resulted in 4,759 convictions, pleas, and mistrials, and 5,761 administrative and civil actions during the 5-year period. GAO is not making any recommendations in this report. The HUD Inspector General concurred with the contents of the draft report.
GSA is the central management agency responsible for policy and oversight of administrative services (except personnel) and is a central provider of real property services, such as buildings acquisition and management, for federal agencies. GSA and its Public Buildings Service (PBS) have been undergoing a lengthy transformation in size and organization. Beginning in the 1980s, significant downsizing occurred in GSA’s workforce. Total employment in GSA declined from about 35,800 full-time equivalent (FTE) positions in 1980 to about 16,900 FTEs by 1995. While this general downsizing was occurring, PBS also began to systematically review its real property management activities, using the guidelines in the Office of Management and Budget’s (OMB) Circular A-76, to determine whether such activities should be provided in-house by government personnel or contracted out. The National Performance Review (NPR) recommended a competitive environment as the catalyst to provide the greatest impetus for GSA to streamline its real property activities. In the fall of 1993, the GSA Commissioners of PBS and the Federal Property Resources Service initiated a joint Real Property Reinvention Task Force. The task force analyzed the GSA real property organization, systems, and processes and recommended a new organizational structure for real property management to implement the NPR recommendations. This restructuring was implemented in the second quarter of fiscal year 1995. GSA is now engaged in further reinvention efforts under Phase II of NPR and additional downsizing. The objectives of this assignment were to review (1) the cost-effectiveness and performance of both in-house and contracted real property management services in GSA to determine whether its contracting decisions were sound and (2) evaluation approaches used by private sector real property management organizations to determine whether any practices could improve the oversight and evaluation of the effectiveness of GSA commercial services. For the purposes of this report, real property management services were defined as those services that GSA provides to federal agencies relating to housing for staff and facilities for the storage of equipment and supplies. Because most GSA real property services are the responsibility of PBS, our analyses focused on PBS activities. To address the first objective and support analysis for the second objective, we relied primarily on our examination of a random stratified sample of 54 activities in PBS that were originally reviewed as part of its A-76 program; 21 of the activities were originally retained in-house, and 33 were not. No one clear, common measure was available to evaluate the cost-effectiveness and performance of the sample activities, so we used three types of evidence from the sample case files to assess whether the agency’s original decisions to retain or contract activities were sound. The evidence was (1) subsequent cost comparisons, analyses, and modifications that GSA did to indicate whether it was obtaining services at a reasonable cost; (2) agency evaluations and related documents to indicate whether GSA was obtaining services at an acceptable level of performance; and (3) changes in the status of the activities tracked over time to indicate whether GSA reversed its original decisions or selected other alternatives because of cost, performance, or other factors. 
Because our sampling strategy precluded the selection of contracted activities from some regions of the country, the results of our assessment, although representative of a significant portion of PBS’ A-76 inventory, are not generalizable to all activities in PBS’ A-76 program. To address the second objective, we examined methods and practices used by private sector property management organizations to measure service performance and determine whether activities should be retained in-house or contracted out. We reviewed industry studies and met with officials from private sector real estate organizations. In addition, we reviewed GSA plans and proposals developed in response to NPR, the Government Performance and Results Act (P.L. 103-62 (1993)), and resulting reorganization efforts. A more detailed description of the methodology we used and specific data limitations are provided in appendix I. We did our work from September 1994 to July 1995 in accordance with generally accepted government auditing standards. We also incorporated information gathered in the preparation of our previous report on contracting out real property management services at GSA, where appropriate. We asked the Administrator of GSA for comments on a draft of this report. The comments are summarized and analyzed on page 16 and presented in appendix V. Our review of the sample activities indicated that GSA’s original decisions to retain sample activities in-house or contract them out were sound. GSA’s subsequent evaluations and analyses of government cost estimates and contractor bids for those activities confirmed that the sector originally selected, in-house or contract, provided services at a reasonable cost to the government. Case file documents also furnished evidence that performance was generally satisfactory. Finally, the sector GSA originally selected for the activities, whether in-house or contract, generally did not change over time. GSA’s cost analyses and evaluations done after the original decision to retain or contract out a sample activity provided a useful measure of whether the costs of the activities remained reasonable. The clearest measure of cost-effectiveness was found when GSA resolicited or reviewed an existing contract for renewal. (See app. IV for further information on the approaches used by GSA to determine reasonable costs.) While we could not recreate complete histories for the sample activities that were contracted out, we were able to examine cost data from 34 separate contract solicitations or renewals. For those sample activities originally retained in-house, we reviewed GSA evaluations that compared the government’s cost estimates at the time GSA decided to retain the activity with the actual costs experienced by in-house performance. At the time of our review, GSA had completed evaluations for 14 of the in-house sample activities. Information in the case files showed that contractor bids remained lower than the independent government cost estimates for in-house performance in all but three sample activities that were contracted out. However, the three activities included situations in which price was not the determining factor in the agency’s decision. One of the sample maintenance activities was brought back in-house when GSA personnel won a resolicitation that combined three separate federal activities into a single activity. The low contractor bid in that case was 6.4 percent above the government’s bid. 
Overall, the low contractor bids ranged from 12 percent above to 51 percent below the government cost estimate and averaged 18 percent below the government cost estimates. These actions occurred as many as 12 years after and on average 5 and one-half years after GSA’s original decisions. Figure 1 presents cost data from the subsequent contract actions. An internal evaluation process for in-house activities, known as the post-MEO review, was developed by GSA to determine whether those activities retained in-house continued to perform within the government cost estimates and performance requirements established in the A-76 competition. The post-MEO reviews did not consider what current contractor bids would be for the same activity. According to those reviews, only one sample activity reported actual costs that were more than 10 percent above the estimated costs for in-house performance, a threshold GSA has established for taking corrective action. Overall, the post-MEO reviews showed actual costs to range from 13.5 percent below to more than 25 percent above the estimated costs. The reported actual costs for six sample activities were below the original estimates for in-house performance. The sample activity with a difference of more than 25 percent between actual and estimated costs failed the post-MEO review and was converted to contract. Figure 2 presents cost data from post-MEO reviews of sample activities retained in-house. We found that detailed documentation on performance was somewhat limited in the case files for the sample activities, especially for the in-house activities, which tended to be evaluated as part of larger organizational components. However, the available information from GSA inspections, performance reviews, and evaluations, including post-MEO reviews, indicated that GSA was obtaining a satisfactory level of performance for most of the sample activities. We found no evidence of performance problems in the case files for a majority (29) of the sample activities. For 14 of the 54 sample activities, we found evidence of only relatively minor problems. These problems usually involved tasks or paperwork that were incomplete or not up to specified performance standards. For those activities that were contracted out, these problems were not serious enough to preclude GSA from exercising contract options or extensions. There was evidence of more serious problems for 11 of the sample activities. These activities included three contractor defaults, five terminations for unsatisfactory performance (one terminated contract involved two sample activities), and two occasions in which no contractor was willing to take on the sample activity. Five of these cases were converted from contract to in-house or vice versa; new contractors were found for four others, and the remaining activity was abolished when GSA disposed of the building. Performance problems were found more often in maintenance activities (both contract and in-house) or Commercial Facility Management (CFM) contracts, including all examples of terminations or defaults. Agency officials in the GSA regional and field offices also said that they had experienced more frequent and serious performance problems with contracted maintenance activities than with other types of activities. In general, the case files for the sample activities we reviewed provided evidence that GSA made efforts to oversee those activities and take appropriate corrective action when necessary. 
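The bid comparisons and post-MEO cost reviews described above reduce to percentage differences from the government's in-house cost estimate. The short Python sketch below, using made-up figures rather than data from the case files, illustrates how such differences could be computed and summarized across a set of solicitations; it is not drawn from GSA's A-76 or post-MEO review procedures.

```python
# Illustrative sketch only, with made-up bid data. It expresses each low
# contractor bid as a percentage above or below the government's in-house
# cost estimate and summarizes the range and average across solicitations.

def percent_difference(bid, government_estimate):
    """Positive values mean the bid exceeded the government estimate."""
    return (bid - government_estimate) / government_estimate * 100

# Hypothetical (government_estimate, low_contractor_bid) pairs.
solicitations = [(250_000, 280_000), (400_000, 360_000), (150_000, 90_000)]
diffs = [percent_difference(bid, est) for est, bid in solicitations]

print(f"range: {min(diffs):.1f}% to {max(diffs):+.1f}%, "
      f"average: {sum(diffs) / len(diffs):.1f}%")
```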
Oversight actions ranged from official correspondence or records of meetings in which GSA identified problem areas and requested corrections to GSA’s taking deductions from monthly payments. If problems continued or were more serious, GSA terminated or resolicited the activity, even reversing its original decision. Because complete cost and performance data were not available for the entire history of each sample activity, we also compared the original status of each sample activity to its overall status at the time of our review. This comparison provided a proxy measure of whether cost, performance, or other factors resulted in GSA changing its original decision. Only 8 of the 54 sample activities (about 15 percent) had changed from contract to in-house, or vice versa, by the time of our review. Thirty-nine of the sample activities (about 72 percent) continued to be delivered by the sector originally selected by the agency. Of the remaining seven activities, four were abolished for such economic reasons as disposal of the federal building originally covered by the activity; two were pending restudy because of changes in scope; and one had been delegated, in large part, to other federal agencies. All of the 10 activities that changed or were scheduled for restudy involved maintenance services. Table 1 presents summary data on the status of the sample activities at the time of our review. (Tables III.1 and III.2 in app. III provide more detailed information on each of the individual sample activities.) While GSA’s original decisions appeared to be sound, we could not conclusively demonstrate that they generated the estimated level of savings or improved the cost-effectiveness of GSA’s services. Our attempt to systematically assess GSA’s commercial services was frustrated by two main factors. First, there was no common basis for measuring costs and outcomes. Second, we found that post-decision comparisons would be difficult because most activities did not remain static over time. In-house and contract activities were not subject to the same evaluation. Without a common basis for measurement, we were not able to confirm whether cost-effectiveness had improved since the original decisions, nor were we able to compare the actual costs of these activities with what could have been the cost if another decision had been made. Some indirect evidence was available in looking case-by-case at activities after a decision was made, but we could not verify the agencywide impact. Even if consistent data were available for evaluation purposes, we found that post-decision comparisons would still be difficult because most activities did not remain static over time. Our review showed that analysis at the activity level was very difficult given changes in scope. The evidence was incomplete and also more indirect for in-house activities than it was for contracted ones. However, in 30 of 42 cases for which information was available, we found some changes in scope—i.e., work that was done as part of an activity that was not part of the activity when GSA originally reviewed it. Changes in scope ranged from minor adjustments, such as incorporating trash removal into the custodial contract for an activity in Elizabeth City, North Carolina, to drastically restructuring the original activity. For example, one sample case was originally a small activity involving one full-time equivalent position to provide maintenance services for the federal building in Kingston, Tennessee. 
This sample activity was subsequently folded into a broader CFM contract. The CFM contract covered facilities management, utilities, operations and mechanical maintenance, elevator maintenance, maintenance repair, architectural and structural, janitorial, and protection services for federal buildings in Kingston and five other cities (Knoxville, Athens, Wartburg, Chattanooga, and Jacksboro). We also found 13 modifications to the CFM contract that changed the scope of services for that particular contract. The Kingston activity was not an isolated example; at least 13 other sample activities became part of broader multisite contracts or in-house activities. To support its reinvention efforts, GSA collected information on private sector practices. This information indicated that real estate organizations commonly used performance measurement to evaluate their activities and to decide whether to contract out. We obtained similar information in our review of industry studies and through feedback from directors of several Fortune 500 organizations with large investments in real estate during roundtable discussions on how corporate America manages its real estate strategies and tasks. Specifically, the organizations identified the value of such practices as developing and using performance measures to evaluate the effectiveness of programs and service delivery, and benchmarking an organization’s own performance against that of others. The evidence we reviewed suggests that the use of performance measures was the most widespread of these practices. During our roundtable discussion, it was also the practice most often cited by the private sector participants as a key element of successful management. We obtained evidence to a lesser degree on the use of benchmarking, reengineering, and such techniques as activity-based costing. As we have found in our related management work, a common element in each of these practices is that they tend to focus on the outcomes of their programs in addition to the performance of their core operations and activities. Across the industry, private sector organizations employ a wide variety of specific performance measures. Among the most common general categories are cost, profit, and customer feedback. GSA’s Real Property Reinvention Task Force found that unlike GSA’s reliance on process checks and detailed inspection, private sector organizations relied on a few key performance measures. In its report, the task force noted that industry benchmark data, such as the Building Owners and Managers Association (BOMA) Experience Exchange Report, are commonly used as a reference. The feedback from our roundtable participants was consistent with the task force’s findings. The participants also pointed out the importance of performance measures for customer satisfaction, costs of operations, and profitability. Private sector officials stressed the need for an organization to measure performance in order to effectively manage real estate operations. Among the advantages the officials cited for using such data were the ability to (1) analyze changes in performance over time and (2) identify opportunities for improvement. Benchmarking and performance measures also assisted organizations in deciding which services to retain in-house or contract out. For example, several private sector officials said they benchmarked their performance against that of their peers.
According to these officials, if the data indicated that the officials’ organizations were not demonstrably best in class, adding value to the parent organization, or providing services at least as well as others could, they would turn to outside sources. Performance measures and analysis also helped organizations focus on what services and mix of skills they needed to keep inside the unit (i.e., their “core” business) and what remaining needs should be filled through contracts, alliances, or other relationships with outside providers. While most of the private sector organizations we met with or reviewed information on used benchmarking and performance measures to some extent, we found a range of opinions on what type of data they used in their evaluations. Some organizations were concerned with finding comparable data. These organizations were likely to rely on internal comparisons of their own data for benchmarking purposes rather than making comparisons to data from outside sources. Other organizations were concerned with measuring their performance against operations considered best in class. Those organizations would seek out data from peers in the industry and even organizations that might be very different from their own in terms of size or even the field of business, because the other organizations did some things very well. The organizations taking the broader approach appeared to be less concerned with straightforward cost comparisons than with identifying best practices and setting higher standards for their own performance. In the middle ground, organizations looked to industry sources, such as BOMA, or to special studies for local or regional markets. GSA’s reorganization of PBS along private sector business lines presents GSA with an opportunity to apply private sector performance measures and benchmarking practices. GSA has already begun to implement selected performance measures, for example, through customer satisfaction surveys. The reorganization was designed, in part, to help PBS measure its performance against commercial practices and identify opportunities for improvement. Such opportunities may not only include improving areas in which the performance of PBS’ business lines falls short of industry benchmarks but also replicating and reinforcing areas in which PBS’ performance and practices can be demonstrated to exceed industry benchmarks. Recent GSA proposals, generated as part of GSA’s efforts to reorganize PBS, would begin to implement a number of the common private sector practices. For example, the Real Property Reinvention Task Force recommended the following general types of performance measures: (1) customer satisfaction, as determined through surveys, personal contacts, and such indicators as complaint trends and customer retention statistics; (2) competitiveness, as determined by cost-recovery pricing versus commercial pricing; (3) cost-effectiveness, as measured through benchmarking against other providers; and (4) timeliness, as measured through the percentage of reimbursable work authorizations completed on schedule. Subsequent business design documents proposed more specific versions of the general categories of performance measures set forth in the task force report. The use of multiple performance measures reflects the general trend found in the research on industry practices. On a more practical level, our review of the sample activities also showed that no single measure could account for all aspects of a service. 
For example, while data on customer satisfaction and operating costs are among the most common measures established by firms, such data are imperfect measures of some service aspects, such as preventive maintenance of the physical plant and equipment. GSA officials we interviewed expressed similar concerns, particularly about finding suitable measures for preventive maintenance. Our work on federal, state, foreign, and private sector reform efforts has shown that the experiences of leading organizations suggest that the number of measures should be limited to a vital few that provide the most needed information for accountability, policymaking, and program management. The use of a few significant performance measures provides a clearer basis for an organization to assess accomplishments, facilitate decisionmaking, and focus on accountability. Too many measures, including those that have little value for stakeholders, can confuse and overwhelm users or make a performance measurement system unmanageable. For GSA to implement benchmarking within PBS business lines, it would not be necessary to find perfectly compatible data. In fact, there are examples, from corporate real estate and other industries, of organizations that have benchmarked their performance by looking outside their business line. Private sector real estate officials pointed out that GSA could also focus first on developing measures using data already available internally. At a minimum, such internal benchmarks could show GSA’s units how they have progressed since the last measurement period. Both private sector and GSA officials pointed out that cost benchmarks should take into account some special circumstances, for example, whether a facility has 24-hour operations (e.g., Customs’ border stations or data processing centers) or has additional security requirements (e.g., courts). According to private sector real estate officials, some common elements of private industry costs, such as taxes and insurance, would also not be applicable to federal space. However, they said that GSA would still be able to focus on the elements that are applicable in the detailed industry data that are reported. The existing post-MEO review structure could be a very valuable tool when applied to analysis of GSA operations against broader cost benchmarks or similar performance goals. The structure allows for variation in individual cost components as long as the aggregate results remain within accepted limits. This approach recognizes that the performance within individual cost elements may go up and down over time and, in fact, that some variation should be expected. The most common feedback we received from regional and field office officials in GSA was that such flexibility was needed in evaluating and analyzing the performance of GSA operations. On the basis of cost comparison, performance evaluation, and historical tracking data we reviewed for our 54 sample activities, we found GSA’s decisions to retain individual activities in-house or contract them out to be sound. The results of our review of the sample activities and evaluation approaches used by private sector organizations showed that there are management practices used by private sector real property organizations that could improve oversight and evaluation of GSA’s services. Through its response to NPR’s recommendations, GSA is restructuring its real property management organization and practices to become more comparable with private sector practices. 
GSA's proposed wider use of the performance measurement and benchmarking practices employed by other organizations could improve overall evaluation of the cost-effectiveness and performance of services and provide the basis for decisions to contract out. Among the benefits of such practices is that they could (1) provide a common basis for evaluating cost and performance, regardless of which sector was providing a service, and (2) enable GSA to measure the outcome of services, regardless of changes in the scope of an individual activity. Because GSA is actively investigating the applicability of such private sector practices as performance measurement and benchmarking in its restructuring review, we are making no recommendations in this report. We provided copies of a draft of this report for review by officials in the GSA offices and regions where we did our audit work. On August 10, 1995, we met with GSA's A-76 Program Coordinator and the Deputy Director, Portfolio Team, from the Office of Property Management. They fully agreed with the facts presented and provided additional information on GSA's reinvention efforts that we incorporated into the background. On August 29, 1995, the Administrator of GSA provided written comments on this report (see app. V) in which GSA generally concurred with the report's conclusions, including our opinion that additional management practices used by private sector real property organizations could improve oversight and evaluation of GSA's services. The Administrator also said that he was pleased that we found GSA's decisions in this area to be sound for all sample activities reviewed. However, our conclusion regarding the soundness of GSA's decisions represents a summary observation based on the sample activities in their entirety rather than a specific endorsement of each individual decision. The Administrator also suggested some specific changes to the report text. These suggested changes dealt with additional information on GSA's reinvention efforts that we had already incorporated into the text after receiving the same information at the August 10 meeting. We are sending copies of this report to the Administrator of GSA, the Director of OMB, and appropriate congressional committees. Copies will also be made available to other interested parties upon request. If you have any questions concerning this report or would like further information, please contact me at (202) 512-8676. The major contributors to this report are listed in appendix VI. The objectives of this assignment were to review (1) the cost-effectiveness and performance of both in-house and contracted real property management services in GSA to determine whether its contracting decisions were sound and (2) evaluation approaches used by private sector real property management organizations to determine whether any practices could improve the oversight and evaluation of the effectiveness of GSA's real property management services. For purposes of this report, real property management services were defined as those services that GSA provides to federal agencies relating to housing for staff and facilities for the storage of equipment and supplies. Because most GSA real property services are the responsibility of GSA's Public Buildings Service (PBS), our analyses focused on PBS' activities.
To address the first objective and support analysis for the second objective, we relied primarily on our examination of a random, stratified sample of 54 commercial services activities in GSA's PBS. The sample was based on the universe of real property management activities in PBS that had been reviewed by GSA from fiscal years 1982 through 1992 as part of its A-76 program. Although the A-76 program accounts for only a portion of all government contracting activity, more complete data were available for PBS' A-76 actions than for contracting actions in general, especially on cost estimates. PBS' A-76 inventory also provided a set universe from which we selected a sample of both retained and contracted service activities. We stratified the population of PBS A-76 actions by whether an activity had been retained in-house or contracted out and by region of the country. Because copies of GSA evaluations for in-house activities are kept at GSA headquarters, we were able to select a sample of retained activities from each of GSA's regional offices. Because detailed records of GSA's contracted activities are retained in the regional offices, and to make more efficient use of resources, we sampled contracted activities from 3 of the 11 GSA regions that existed at the time of our review. We traveled to GSA regional offices in Atlanta, Chicago, and Fort Worth to review files for sample contract cases. These three regions accounted for about 50 percent of all contracted activities in PBS' A-76 inventory from 1982 to 1992, and our sample activities were representative of those regions. As a result of its original review, GSA decided to retain 21 of the selected sample activities in-house and not to retain the remaining 33 activities. There was no one clear measure that demonstrated whether GSA made sound decisions, so we relied on three indicators: (1) cost comparison data, (2) evaluations of performance, and (3) a tracking of the status of sample activities over time. Cost data were used to confirm that the alternatives selected by GSA provided services at a reasonable cost to the government. To compile relevant data, we reviewed PBS' acquisition plans, solicitations, performance work statements and modifications, cost studies, price analysis reports, price negotiation memos, post-MEO evaluation packages and independent reviews, financial statements for direct operations, and supporting worksheets. We used evaluations, inspection reports, and related documentation, including correspondence, to assess whether the performance of the sample activities was generally satisfactory. We tracked the status of the sample activities over time because we could not recreate a complete history for each activity. The sample reflected GSA's reviews from as far back as 1982, so many related case documents had been retired from the active files and, in some cases, destroyed. We therefore needed a broad proxy measure of whether progress had been satisfactory after GSA's original decision. We found that documentation on actual performance was somewhat limited and uneven in the case files for the sample activities. In general, more evidence was available for the activities that were contracted out than for those retained in-house. In part, this may reflect the fact that contracted activities had individual contract files, while GSA's evaluation and inspection reports for in-house operations tended to focus on performance at levels of service broader than individual activities, such as entire GSA field offices.
For both in-house and contracted activities, the evidence on the performance of individual activities focused on the documentation of specific problems. The case files, therefore, tended to show the exceptions to satisfactory performance rather than provide a guide to the general level of performance for individual activities. We supplemented our review of the case files with interviews of GSA officials, including regional personnel in the contracts offices and PBS managers responsible for the facilities covered by the sample activities. We asked for their observations on both in-house and contracted services and did not limit the interviews to only the sample activities. In addition, we obtained their perceptions of what practices had or had not worked well in their experience. They also provided insights on what information was most useful for day-to-day management and oversight of in-house and contracted activities as well as possible changes that could improve management of real property services. To address the second objective, we examined methods and practices used by private sector property management organizations to measure service performance and determine whether activities should be retained in-house or contracted out. We reviewed industry studies and met with officials from private sector real estate organizations. The officials participated in roundtable discussions on how corporate America manages its real estate strategies and tasks. The discussions were jointly hosted by GAO and GSA, and participants included representatives from private sector organizations; other outside experts; and federal representatives from Congress, OMB, GSA, and other agencies. In addition, we reviewed GSA plans and proposals developed in response to the National Performance Review, the Government Performance and Results Act (P.L. 103-62 (1993)), and resulting reorganization efforts. We did our work from September 1994 to July 1995 in accordance with generally accepted government auditing standards. Where appropriate, we also incorporated information on sample activities that was gathered to prepare our previous report on the extent to which GSA contracted out or retained in-house the real property management activities in PBS. OMB Circular A-76 establishes the federal policy on commercial services (referred to in the circular as commercial activities). The circular and its cost comparison handbook specify procedures for determining when it is more economical to contract out activities currently done by federal employees. The A-76 guidance does not always require a formal cost study for an agency to convert a commercial activity to contract. OMB and federal agencies are to maintain records on the reviews done using the A-76 guidance. GSA's PBS established guidance and technical procedures for evaluating activities that remained in-house after GSA performed an A-76 review. The purpose of these evaluations was to certify that the activity was meeting the cost and performance requirements established during the A-76 review. OMB characterizes Circular A-76 as a management reinvention process designed to use competition to encourage change and improve the quality and cost of commercial support services. The circular defines a commercial activity as one that is operated by a federal executive agency and provides a product or service that could be obtained from a commercial source.
Certain government activities are not subject to contracting out under Circular A-76 because they are so closely related to the public interest that they must be done by federal employees. These activities are referred to as inherently governmental. In addition, Congress has exempted some activities from the A-76 review process. To implement the circular, an agency first evaluates its activities to determine whether they are governmental or commercial and completes an inventory of all the commercial activities. Along with a description of the nature and location of each activity, the inventory includes the number of full-time equivalent (FTE) positions assigned to the activity at the start of an A-76 review. For example, one activity in PBS’ A-76 inventory that was selected for our sample was mechanical maintenance services for the U.S. Post Office/Court House and U.S. Customs House in Galveston, Texas, which involved three FTE positions when the activity was studied in fiscal year 1990. At GSA, one FTE is not necessarily comparable to one employee; PBS’ A-76 inventory includes authorized positions, temporary employees, and borrowed labor in its FTE figures. Some inventory activities may be converted to contract without undergoing a formal cost study. The two primary circumstances under which a direct conversion to contract may occur are (1) if the activity should be contracted to a noncompetitive, preferential procurement program source in accordance with applicable regulations and (2) if the activity employs 10 or fewer FTEs. While a formal cost study is not required, an agency may use available cost data to help determine the reasonableness of proposed contract prices and ensure that contracting out will result in a cost that is less than the government’s cost of operation. Agencies are supposed to review the remaining activities in their commercial inventories through a three-step process: (1) the development of the agency’s performance work statements, (2) the completion of a management study of in-house operations, and (3) the submission of formal bids for cost comparison. The purpose of the performance work statement is to allow government employees and the private sector to competitively bid on the same scope of work. It requires agencies to define their workload requirements in terms of measurable performance standards. The purpose of the management study is to determine the most efficient way to provide the requirements using a federal workforce. The resulting government estimate of the lowest number and type of employees required for in-house performance is generally referred to as the most efficient organization (MEO). According to OMB, the management study to identify the MEO protects current employees from historical inefficiencies in the cost comparison, creates incentives to restructure services and reduce costs, and serves to protect the procurement process by protecting the in-house bid. The MEO is used to develop the government’s cost estimate for the activity being studied. This MEO cost is then compared to private sector bids. The circular’s cost comparison handbook describes the specific cost elements of a cost comparison and includes areas such as fringe benefits, material support, facilities, insurance, contract administration, and overhead. 
A contract is to be awarded for an activity if three conditions are met: (1) the contractor is judged by the government to be able to meet all of the government's standards for quality, timeliness, and quantity; (2) the total cost of contract performance is less than the government's total estimate; and (3) the projected cost advantage to the government is at least 10 percent of the government's personnel costs. The 10-percent margin is included in the cost comparison to take into account unpredictable costs that may occur as a result of the conversion to contract. If these three conditions are met, the activity is to be contracted out. If not, the activity is to remain in-house, but the government must implement the MEO standards developed during the management study to streamline operations and reduce costs. Much of the information we used in our analysis of sample activities retained in-house was generated through the review processes described in this section. However, this description reflects the general guidance and practices used by PBS at the time of our review. Because GSA is currently involved in major reinvention and reorganization efforts, the specific processes and terminology may not apply to future PBS evaluations of in-house activities. The objective of PBS' post-MEO reviews of retained activities was to certify that PBS implemented the cost and performance requirements established in the A-76 review. The PBS general guidance on post-MEO review and certification noted that the implicit contractual commitment made when commercial activities were retained in-house after an A-76 competition required a method of in-house contract administration. The post-MEO review process was therefore developed to determine whether those PBS activities retained in-house continued to perform within the government cost estimates and performance requirements established in the PBS MEO. To evaluate cost, the post-MEO review compared the government's adjusted actual costs to the adjusted estimated costs proposed at the time of the A-76 competition. According to the guidance, to ensure an equitable review, post-MEO worksheets included adjustments in the costs of fringe benefits, depreciation, and insurance. The review also might have included adjustments for inflation, depending on the period under review and how inflation was handled in the original A-76 cost comparison. Other worksheets covered actual costs for the period under review in areas such as labor, supplies and materials, contracts, and utilities. If actual costs, as summarized in the worksheets, appeared to be excessively high for any of these areas (i.e., more than 10 percent above the estimated cost), the reviewer was supposed to complete additional worksheets to explain or show adjustments to the costs, as appropriate. In explaining or adjusting excessive costs, the reviewers were to examine GSA cost reports and other documents to determine whether the costs coded for the MEO activity actually reflected items or work within the MEO's scope of work. According to the PBS guidance, if the review indicated that actual costs for a full year of MEO operation were within 10 percent of the cost estimate, the activity could be certified as meeting the cost requirements. If the variance from the estimate was between 10 and 25 percent, an action plan was to be prepared to bring the activity within the 10-percent tolerance and implemented within 180 days.
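To make the decision logic concrete, the following fragment sketches, in Python, the three A-76 award conditions and the post-MEO cost tolerance bands described here, including the over-25-percent recompetition trigger discussed next. It is our own illustrative rendering rather than GSA's or OMB's actual worksheet computation: the function names and example figures are hypothetical, and the handbook's detailed cost-element adjustments (fringe benefits, depreciation, insurance, and inflation) are omitted.

# Illustrative sketch only -- not GSA's or OMB's actual computation. Names,
# structure, and figures are ours; the circular's detailed cost-element
# adjustments are omitted.

def a76_award_decision(contractor_meets_standards, contract_cost,
                       government_estimate, government_personnel_costs):
    """Apply the three A-76 award conditions described in the text.

    Returns "contract out" only if the contractor meets the government's
    quality, timeliness, and quantity standards; the contract cost is below
    the government's total estimate; and the projected cost advantage is at
    least 10 percent of the government's personnel costs.
    """
    cost_advantage = government_estimate - contract_cost
    if (contractor_meets_standards
            and contract_cost < government_estimate
            and cost_advantage >= 0.10 * government_personnel_costs):
        return "contract out"
    return "retain in-house and implement the MEO"


def post_meo_cost_disposition(actual_cost, meo_estimate):
    """Classify a full year of MEO operation against the adjusted estimate.

    Tolerance bands follow the PBS guidance described in the text: within
    10 percent is certifiable, 10 to 25 percent requires an action plan
    (to be implemented within 180 days), and over 25 percent triggers
    A-76 recompetition.
    """
    variance = abs(actual_cost - meo_estimate) / meo_estimate
    if variance <= 0.10:
        return "certify: within cost requirements"
    if variance <= 0.25:
        return "prepare action plan; implement within 180 days"
    return "schedule activity for A-76 recompetition"


if __name__ == "__main__":
    # Hypothetical figures for illustration only.
    print(a76_award_decision(True, contract_cost=450_000,
                             government_estimate=520_000,
                             government_personnel_costs=400_000))
    print(post_meo_cost_disposition(actual_cost=214_000, meo_estimate=200_000))

In practice, as described above, these tests were applied to adjusted rather than raw costs, using the worksheets prescribed in the PBS guidance.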
Activities for which the variance exceeded 25 percent were to be scheduled for A-76 recompetition. To determine whether an activity operated within the government's predicted performance objectives, the post-MEO review was to include the most recent GSA Field Office Evaluation. Evaluations were done by teams of inspectors who rated the quality of service delivery and administration for operations. There were 12 categories of operations, such as custodial management, contracting, and security, that might be included in the evaluation if applicable for the location being reviewed. The quality score from the Field Office Evaluation determined the performance level of the activity being reviewed. A score of 75 or more (out of 100) signified an acceptable level of performance. Before final certification and acceptance of the post-MEO review results, GSA's Central Budget Office completed an independent review. The independent review officer was not responsible for performing a separate audit but had to concur on the calculations and analysis on which the post-MEO certification was based. One of the most important tasks of the independent review officer was to ensure again that the scope of the post-MEO review matched the scope of the MEO performance work statement, including any approved modifications. If the review showed that the in-house activity failed to (1) meet either cost or performance thresholds, (2) adequately explain the reasons for excessive cost variances, or (3) have an approved modification to the MEO, the activity was to be scheduled for A-76 recompetition. However, an activity that failed the post-MEO review was required to recompete using its current organization and operational cost without reconfiguring it for a revised MEO. The following tables present information on each of the sample activities reviewed for this report. Table III.1 includes those sample activities that were retained in-house after GSA's initial A-76 review, while table III.2 includes those sample activities that GSA decided not to retain in-house.
Table III.1: Sample Activities Originally Retained In-House (21)
- Converted to contract (September 1989). Subsequently became a Commercial Facilities Management (CFM) contract.
- Retained in-house. Post-MEO review of performance showed the activity to be 9.4% above the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 10.4% below the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 12.6% below the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 0.02% above the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 5.5% above the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 2.2% below the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 13.5% below the adjusted MEO estimate.
- Scope changed. Scheduled for restudy.
- Retained in-house. Post-MEO review of performance showed the activity to be 0.5% above the adjusted MEO estimate.
- Retained in-house. The activity is exempt from post-MEO review because it is remote and uneconomical to study.
- Scope changed. Scheduled for restudy.
- Converted to a CFM contract.
- Retained in-house. Post-MEO review of performance showed the activity to be 3.2% above the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 1.6% above the adjusted MEO estimate.
- Retained in-house. The activity is exempt from post-MEO review.
- Retained in-house. Post-MEO review of performance showed the activity to be 1.7% below the adjusted MEO estimate.
- Failed post-MEO review (over 25% above the adjusted MEO estimate). The activity was contracted out (May 1993).
- Retained in-house but was combined in a full service group for W. Washington. The post-MEO review for the combined activity found it to be operating at 7.1% below the adjusted MEO estimate.
- Retained in-house. Post-MEO review of performance showed the activity to be 7.1% above the adjusted MEO estimate.
- Washington, D.C. Retained in-house but reconfigured. Responsibility for managing almost all of the facilities covered by the original activity was delegated to other federal agencies.
Table III.2: Sample Activities Not Retained In-House After Original Review (33)
- The activity remained contracted out, but the active contract added janitorial services at the Federal Building-Agency Motor Pool to the original activity covering the Courthouse-Customhouse.
- The activity remained contracted out.
- The activity remained contracted out.
- The activity remained contracted out but became part of a CFM contract covering the Federal Building and Courthouse, Federal Building/Courthouse Annex, and parking garage.
- The activity remained contracted out but became part of a CFM contract. The CFM contract covered facilities management at the Knoxville, Athens, Wartburg, Chattanooga, Kingston, and Jacksboro, Tennessee federal buildings. The services included were operations and mechanical maintenance, elevator maintenance, maintenance repair, architectural and structural, janitorial, utilities, and protection.
- The activity remained contracted out. The active contract also included services for Corbin, KY.
- The activity remained contracted out. The active contract resulted from resolicitation after previous contract options were not exercised due to unsatisfactory performance.
- The activity remained contracted out but had been combined with other activities handled out of the local field office.
- The contractor defaulted, and the activity was brought back in-house (Jan. 1992).
- GSA reviewed the activity and determined it would be beneficial to return the activity in-house (Feb. 1989).
- The activity was abolished for economic reasons.
- The activity had been combined with another activity in Greenville. When the contractor for that activity defaulted, everything was brought back in-house (Jan. 1992).
- The activity was returned in-house because GSA could not get a contractor to take on this activity (no satisfactory commercial source).
- The activity remained contracted out. It became part of the CFM contract that includes Jacksboro, TN (see 04PCS069).
- The activity remained contracted out. The activity became part of a CFM contract for five locations in Michigan. The active contract was an emergency procurement (previous contract not renewed due to poor performance).
- The activity remained contracted out.
- The activity remained contracted out. It became part of the CFM contract that includes Port Huron, MI (see 05PCS024).
- The contract was terminated by GSA in 1986. The activity was combined with two other locations in Columbus and Zanesville and retained in-house after resolicitation in 1987. The most recent post-MEO review of performance (1992) showed the activity to be 7.24% above the adjusted MEO estimate.
- The activity remained contracted out.
- The activity remained contracted out.
- The activity remained contracted out.
- The GSA activity was abolished. Federal offices previously covered under this activity were covered in a full service lease.
- The activity remained contracted out but became part of a full maintenance contract.
- There was no service in the building since April 1992. The building was excessed, and the GSA activity was abolished. Federal offices previously covered under this activity were covered in a full service lease.
- The activity remained contracted out. The activity is to become part of a CFM contract in November 1995.
- The activity remained contracted out.
- The activity remained contracted out.
- The activity remained contracted out.
- The activity remained contracted out but became part of a CFM contract.
- The activity remained contracted out but was expanded to cover full maintenance services.
- The activity remained contracted out.
- The contract reviewed was terminated for default. The building was disposed by GSA (June 1992).
Once a decision was made to compete an activity, our review indicated that the government could determine a fair price for services without resorting to lengthy cost study processes, even if direct competition was limited. Formal A-76 cost comparison studies were used for only two of the sample contract activities we reviewed; the other contract activities were under the circular's threshold requiring a cost study and were all direct conversions to contract. While inadequate cost or price analysis documentation has been identified as a broad area of concern by prior GAO, OMB, and agency reviews of contracting practices, our review of the sample case files did not reveal such problems. We found that it was common practice for GSA managers and contracts personnel to compile detailed cost data to evaluate prospective bids for activities going out to contract. GSA personnel used a variety of approaches to establish a reasonable price range for the desired services. For example, documents in the contract case files showed that bid prices could be analyzed through a comparison of (1) competitive bids received for the current solicitation; (2) the prior contract price; (3) prices paid for similar services in different GSA locations (a market analysis); (4) an independent government cost estimate; and (5) industry data, such as reported average costs of specific services. Documentation in the contract case files indicated that GSA usually used a combination of these approaches as part of its cost analysis and negotiations and rarely relied on only one data source. We found that this type of analysis was done even for a legally mandated contract source. In some of the more complex cases, particularly multistage solicitations, we found that GSA contracts personnel had completed detailed analyses of individual cost components, hourly rates, and prices for special services that contractors proposed in their bids to GSA. An independent government estimate for these items was often incorporated into the analyses. Such analyses permitted both the government and the bidders to identify areas in which costs appeared to be out of line (either too high or too low) and determine whether adjustments were necessary.
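As a rough illustration of the kind of multiple-reference price analysis described above, the sketch below compares a proposed bid against several benchmark prices and flags any reference from which the bid deviates by more than a chosen tolerance. This is our own simplified example, not GSA's method: the benchmark names, tolerance, and dollar figures are hypothetical, and GSA's contracts personnel weighed these data sources case by case and through negotiation rather than by formula.

# Illustrative only: a simple multi-reference reasonableness screen, not GSA's method.

def price_reasonableness(bid, benchmarks, tolerance=0.15):
    """Compare a bid against several reference prices.

    benchmarks maps a reference name (e.g., prior contract price,
    independent government estimate, market analysis, industry average)
    to a dollar figure. Returns the references the bid deviates from by
    more than the tolerance, in either direction.
    """
    flags = {}
    for name, reference in benchmarks.items():
        deviation = (bid - reference) / reference
        if abs(deviation) > tolerance:
            flags[name] = round(deviation, 3)
    return flags


if __name__ == "__main__":
    # Hypothetical figures for a custodial services solicitation.
    references = {
        "prior contract price": 118_000,
        "independent government estimate": 125_000,
        "similar GSA locations (market analysis)": 130_000,
        "industry average cost data": 122_000,
    }
    out_of_line = price_reasonableness(bid=98_000, benchmarks=references)
    if out_of_line:
        print("Bid appears out of line with:", out_of_line)
    else:
        print("Bid falls within the tolerance of all references.")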
GSA resolved problems with two solicitations in this manner—one in which the government had underestimated the costs associated with new services and another in which the private sector sources and the government had widely different assumptions about requirements in the scope of work on which they were bidding. GSA officials in the regional offices told us that the number of bidders competing for some of the real property activities had been limited. Our review of the case file materials for active and previous contracts generally confirmed their observation. We were able to obtain information on the number of responsive, responsible (i.e., technically and financially acceptable) bidders for 42 individual solicitations. Table IV.1 shows the overall distribution of bidders. In four of these cases, competition was limited to a mandatory source, such as workshops employing the severely handicapped. The limited amount of competition for some sample activities underscored the value of GSA's practice of relying on more than a low bid to determine whether the government was obtaining cost-effective services.
Charles I. Patton, Jr., Associate Director, Federal Management and Workforce Issues
Frances P. Clark, Assistant Director, Federal Management and Workforce Issues
Timothy A. Bober, Evaluator-in-Charge
K. Scott Derrick, Evaluator
Kiki Theodoropoulos, Communications Analyst
Bonnie J. Steller, Senior Statistician
Pursuant to a congressional request, GAO reviewed: (1) the cost-effectiveness and performance of the General Services Administration's (GSA) real property management services to determine whether GSA contracting decisions were sound; and (2) private sector real property management approaches to determine whether GSA management services could be improved.
GAO found that: (1) cost, performance, and historical data showed that GSA decisions to retain property management services in-house or contract them out were sound; (2) in general, GSA retained and contracted-out activities at lower actual costs than estimated; (3) GSA continued to use the sector originally selected for about 72 percent of the sampled activities, since only 11 of the 54 activities had serious performance problems; (4) GSA experienced more frequent and serious performance problems with contracted maintenance activities than with other types of activities; (5) GSA oversight of the activities was generally sound and GSA took appropriate corrective actions when necessary; (6) although GSA contracting decisions appeared to be sound, GAO could not conclusively demonstrate that the selected alternatives generated the expected savings estimated; (7) private sector real estate management organizations commonly use such practices as performance measurement and benchmarking to manage and evaluate their operations and to decide whether to contract out certain services; (8) GSA could improve its oversight and evaluation of its services by adopting key private sector performance measures; and (9) GSA is still developing the specific performance measurements it will use after its reorganization. |
The National Cemeteries Act of 1973 (P.L. 93-43) authorized NCS to bury eligible veterans and their family members in national cemeteries. Before 1973, all national cemeteries were operated under the authority of the Department of the Army. However, P.L. 93-43 shifted authority to VA for all national cemeteries except Arlington National Cemetery and the U.S. Soldiers’ and Airmen’s Home National Cemetery. NCS operates and maintains 115 national cemeteries located in 39 states and Puerto Rico. NCS offers veterans and their eligible family members the options of casket interment and interment of cremated remains in the ground (at most cemeteries) or in columbaria niches (at nine cemeteries). NCS determines the number and type of interment options available at each of its national cemeteries. The standard size of casket grave sites, the most common burial choice, is 5 feet by 10 feet, and the grave sites are prepared to accommodate two caskets stacked one on top of the other. A standard in-ground cremains site is 3 feet by 3 feet and can generally accommodate one or two urns. The standard columbarium niche used in national cemeteries is 10 inches wide, 15 inches high, and 20 inches deep. Niches are generally arrayed side by side, four units high, and can hold two or more urns, depending on urn size. In addition to burying eligible veterans and their families, NCS manages the State Cemetery Grants Program, which provides aid to states in establishing, expanding, or improving state veterans’ cemeteries. State veterans’ cemeteries supplement the burial service provided by NCS. The cemeteries are operated and permanently maintained by the states. A State Cemetery grant may not exceed 50 percent of the total value of the land and the cost of improvements. The remaining amount must be contributed by the state. The State Cemetery Grants Program funded the establishment of 28 veterans’ cemeteries, including 3 cemeteries currently under development, located in 21 states, Saipan, and Guam. The program has also provided grants to state veterans’ cemeteries for expansion and improvement efforts. As the veteran population ages, NCS projects the demand for burial benefits to increase. NCS has a strategic plan for addressing the demand for veterans’ burials up to fiscal year 2003, but the plan does not address longer term burial needs—that is, the demand for benefits during the expected peak years of veteran deaths, when pressure on the system will be greatest. Beyond the year 2003, NCS officials said they will continue using the basic strategies contained in the current 5-year plan. According to its 5-year strategic plan (1998-2003), one of NCS’ primary goals is to ensure that burial in an open national or state veterans’ cemetery is an available option for all eligible veterans and their family members. The plan sets forth three specific strategies for achieving this goal. First, NCS plans to build, when feasible, new national cemeteries. NCS is in various stages of establishing four new national cemeteries and projects that all will be operational by the year 2000. A second strategy for addressing the demand for veteran burials is through expansion of existing cemeteries. NCS plans to complete construction in order to make additional grave sites or columbaria available for burials at 24 national cemeteries. NCS also plans to acquire land needed for cemeteries to continue to provide service at 10 cemeteries. 
Third, NCS plans to encourage states to provide additional grave sites for veterans through participation in the State Cemetery Grants Program. According to the plan, NCS intends to increase the number of veterans served by a state veterans' cemetery by 35,000 per year beginning in fiscal year 1998. Also, NCS is in the early stages of developing information designed to assist states in the establishment of a state veterans' cemetery. Moreover, NCS' projections of the number of veterans who will have access to a veterans' cemetery stop at the year 2003. Although NCS has a 5-year strategic plan for addressing the demand for veterans' burials during fiscal years 1998 through 2003, plans to address the demand beyond 2003 are unclear. For example, NCS' strategic plan does not articulate how NCS will mitigate the effects of the increasing demand for burial services. According to NCS' Chief of Planning, although its strategic plan does not address long-term burial needs, NCS is always looking for opportunities to acquire land to extend the service period of national cemeteries. Also, to help address long-range issues, NCS compiles key information, such as mortality rates, the number of projected interments and cemetery closures, locations most in need of veterans' cemeteries, and cemetery-specific burial layout plans. In addition, NCS officials pointed out that the Government Performance and Results Act of 1993 (the Results Act) requires a strategic plan to cover a 5-year period. However, the Results Act requires that an agency prepare a strategic plan that covers at least a 5-year period and allows an agency to articulate how it plans to address future goals. For example, the National Aeronautics and Space Administration's plan articulates a "strategic roadmap" that outlines agencywide goals. This roadmap lists separate goals for near-, mid-, and long-term time periods over the next 25 years and beyond. The Environmental Protection Agency's plan also articulates goals that are not bound by the 5-year time period. For example, it includes an objective to reduce toxic air emissions by 75 percent in 2010 from 1993 levels. Although NCS projects annual interments to increase about 42 percent from 73,000 in 1995 to 104,000 in 2010, peaking at 107,000 in 2008, its strategic plan does not indicate how the agency will begin to position itself to handle this increase in demand for burial benefits. We believe that, given the magnitude of the projected increase in demand for burial benefits, NCS' strategic plan should discuss how its current strategies will be adjusted to address the demand during the peak years of veterans' deaths. Beyond 2003, NCS plans to continue encouraging states to provide additional grave sites for veterans through the State Cemetery Grants Program. According to NCS' Chief of Planning, NCS will encourage states to locate cemeteries in areas where it does not plan to operate and maintain national cemeteries. Since the State Cemetery Grants Program's inception in 1978, fewer than half of the states have established veterans' cemeteries, primarily because, according to NCS officials, states must provide up to half of the funds needed to establish, expand, or improve a cemetery as well as pay for all equipment and annual operating costs. Furthermore, the Director of the State Cemetery Grants Program told us that few states, especially those with large veteran populations, have shown interest in legislation that VA proposed in its 1998 and 1999 budget submission in order to increase state participation.
This proposed legislation would increase the federal share of construction costs from 50 to 100 percent and permit federal funding for up to 100 percent of initial equipment costs. In fact, according to the Director, state veterans' affairs officials said they would rather have funding for operating costs than for construction. NCS officials told us they will continue to evaluate locations for additional national cemeteries in the future, based on demographic needs. However, according to NCS officials, VA currently has no plans to request construction funds for more than the four new cemeteries, which will be completed by the year 2000. Officials said that even with the new cemeteries, interment in a national or state veterans' cemetery will not be "readily accessible" to all eligible veterans and their family members. According to NCS officials, the majority of areas not served will be major metropolitan areas with high concentrations of veterans, such as Atlanta, Georgia; Detroit, Michigan; and Miami, Florida. Our analysis of three interment options showed that the average columbarium interment cost would be about $280, compared with about $345 for in-ground cremains burial and about $655 for casket burial. Our analysis also showed that the service delivery period would be extended the most using columbarium interment. For example, using columbarium interment in a total of 1 acre of land could extend the service delivery period by about 50 years, while in-ground cremains interment would extend the service period about 3 years and casket burials about half a year. While historical data imply that the majority of veterans and eligible dependents prefer a casket burial, NCS national data show that the demand for cremation at national cemeteries is increasing. For example, veterans choosing cremation increased about 50 percent between 1990 and 1996, and NCS officials expect demand for cremation to continue to increase in the future. The incidence of cremation also continues to increase in the general population. The Cremation Association of North America projects that cremation will account for about 40 percent of all burials by 2010. As discussed earlier, few states have shown interest in the legislative proposal designed to increase state participation by increasing the share of federal funding. Therefore, NCS needs to rely more on extending the service periods of its existing cemeteries. Columbaria can more efficiently utilize available cemetery land at a lower average interment cost than the other interment options and can also extend the service period of existing national cemeteries. Using columbaria also adds to veterans' choice of services and recognizes current burial trends. While we recognize that cremation may not be the preferred interment option for many veterans, identifying veterans' burial preferences, as NCS plans to do, would enable it to better manage limited cemetery resources and more efficiently meet veterans' burial needs. Mr. Chairman, this concludes my prepared statement. I will be glad to answer any questions you or Members of the Subcommittee may have.
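As a rough illustration of the space arithmetic behind the interment options compared above, the following sketch converts the standard site dimensions cited earlier in this statement into approximate capacity figures. It is our own back-of-the-envelope example, not NCS' or GAO's analysis: the usable-area fraction, the assumption of two interments per site, and the columbarium wall figures are illustrative assumptions, and actual yields depend heavily on cemetery layout and the detailed cost data underlying the analysis.

# Rough, illustrative arithmetic only -- not NCS' or GAO's analysis. The usable-area
# fraction and wall-capacity parameters are assumptions made for this example.

SQ_FT_PER_ACRE = 43_560

def interments_per_acre(site_sq_ft, interments_per_site, usable_fraction=0.6):
    """Approximate interments per acre of ground burial sites.

    usable_fraction is an assumed share of the acre left after roads,
    walkways, and landscaping; actual cemetery layouts vary.
    """
    sites = (SQ_FT_PER_ACRE * usable_fraction) // site_sq_ft
    return int(sites * interments_per_site)

def urns_per_wall_foot(niche_width_in=10, niches_high=4, urns_per_niche=2,
                       double_sided=True):
    """Approximate urn capacity per linear foot of columbarium wall."""
    columns_per_foot = 12 / niche_width_in
    sides = 2 if double_sided else 1
    return columns_per_foot * niches_high * urns_per_niche * sides

if __name__ == "__main__":
    # Standard dimensions cited in the testimony: 5 ft x 10 ft casket sites
    # (two caskets stacked) and 3 ft x 3 ft cremains sites (one or two urns).
    print("Casket interments per acre:", interments_per_acre(50, 2))
    print("In-ground cremains interments per acre:", interments_per_acre(9, 2))
    print("Urns per linear foot of columbarium wall:", urns_per_wall_foot())

Even under these simplified assumptions, the density advantage of cremains sites over casket sites, and of stacked niches over either, points in the same direction as the service-period extensions reported above.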
GAO discussed the National Cemetery System's (NCS) plans to accommodate the increasing demand for burial benefits and what it can do to extend the service period of existing cemeteries. GAO noted that: (1) NCS has adopted a 5-year strategic plan for fiscal years 1998 through 2003 with the goal of ensuring that burial in a national or state veterans' cemetery is an available option for all veterans and their eligible family members; (2) strategies outlined in NCS' plan include: (a) building new national cemeteries; (b) expanding existing cemeteries; and (c) encouraging states to provide additional burial sites through participation in the State Cemetery Grants Program; (3) however, it is unclear how NCS will address the veterans' burial demand during the peak years, when pressure on it will be greatest, since NCS' strategic plan does not indicate how it will begin to position itself to handle the increasing demand for burial benefits; (4) NCS officials stated that beyond 2003, NCS will continue using the basic strategies contained in its current 5-year plan; (5) for example, NCS plans to encourage states to establish veterans' cemeteries in areas where it does not plan to operate national cemeteries; (6) however, since the grant program's inception in 1978, fewer than half of the states have established veterans' cemeteries; (7) states have also shown limited interest in a legislative proposal designed to increase state participation by increasing the share of federal funding; (8) given the magnitude of the projected increase in demand for burial benefits, GAO continues to believe that it is important for NCS to articulate to Congress and other stakeholders how it plans to address the increasing demand; (9) as annual interments increase, cemeteries reach their burial capacity, thus increasing the importance of making the most efficient use of available cemetery space; (10) to identify feasible approaches to extending the service period of existing cemeteries, GAO analyzed the impact of adding burial sites to an acre of land in an existing cemetery; (11) GAO's analysis of three interment options showed that columbaria offered the most efficient option because they would involve the lowest average interment cost and would significantly extend a cemetery's service period; and (12) moreover, while the majority of veterans and eligible family members prefer a casket burial, cremation is an acceptable interment option for many, and the demand for cremation, which varies by region, continues to increase.
CMS has indicated that the primary principle of program integrity is to pay claims correctly. MIP is designed to address fraud, waste, and abuse, including improper payments, by (1) preventing fraud through effective enrollment of providers and through education of providers and beneficiaries; (2) detecting potential improper billing early through, for example, medical review and data analysis of claims; (3) coordinating closely with partners, including contractors and law enforcement agencies; and (4) implementing fair and firm enforcement policies. HIPAA established mandatory funding for MIP that ensured a stable funding source for Medicare program integrity activities from the Federal Hospital Insurance trust fund not subject to annual appropriations. The amount specified in HIPAA rose for the first few years and then was capped at $720 million per year in fiscal year 2003 and future years. CMS received increased and additional mandatory funding for MIP from the Federal Hospital Insurance trust fund in fiscal year 2006 under the Deficit Reduction Act of 2005 (DRA) and, in addition, received discretionary funding beginning in fiscal year 2009. On March 23, 2010, the Patient Protection and Affordable Care Act (PPACA) was signed into law. It included provisions that will provide MIP with a portion of an additional $350 million, to be shared with the Department of Justice (DOJ) and HHS, for fiscal year 2011 through fiscal year 2020 for health care fraud and abuse control efforts. It also increases funding for MIP each year by the percentage increase in the consumer price index for all urban consumers. MIP currently has eight activities, and each of these has multiple subactivities. As we reported in 2006, CMS undertook five original MIP activities required by HIPAA: Benefit Integrity. Aims to deter and detect Medicare fraud by conducting proactive data analysis of claims to identify patterns of fraud and taking other steps to determine whether fraud could be occurring. Potential fraud cases are documented and referred to law enforcement agencies. Provider Audit. Includes desk reviews, audits, and final settlement of institutional provider cost reports, such as those submitted by hospitals and skilled nursing facilities, which are used to establish payment rates. Medicare Secondary Payer (MSP). Identifies when beneficiaries have primary sources of payment—such as employer-sponsored health insurance, automobile liability insurance, or workers’ compensation insurance—that should pay claims that were mistakenly billed to Medicare. MSP also involves recovering improper payments associated with such claims. Medical Review. Includes both automated and manual prepayment and postpayment reviews of individual Medicare claims to determine whether the services are provided by legitimate providers to eligible beneficiaries and are covered, medically reasonable, and necessary. Provider Outreach and Education. Provides training for providers, such as hospitals and physicians that serve Medicare beneficiaries, on appropriate billing practices to comply with Medicare rules and regulations. Since 2006, CMS has begun three additional MIP activities: Medicare-Medicaid Data Match Project (Medi-Medi). Was added to MIP by DRA. DRA provided this activity with its own dedicated funding source through a separate appropriation. 
Medi-Medi is a joint effort between CMS and states that participate voluntarily to identify providers with aberrant Medicare and Medicaid billing patterns through analyses of claims for individuals with both Medicare and Medicaid coverage. Part C and D Oversight. Consists of subactivities to address improper payments in Medicare Parts C and D. CMS began this activity in fiscal year 2006. Other Medicare Fee-For-Service. Consists of a variety of subactivities related to Medicare fee-for-service not captured by the other activities, such as support for pilot programs and enhancements to CMS data systems that CMS officials told us will allow for better analysis. CMS began these subactivities in fiscal year 2009. CPI is the CMS component responsible for oversight of all of CMS’s program integrity efforts, including MIP, and is led by a deputy administrator. Formed in April 2010, CPI was created to enable CMS to pursue a more strategic and coordinated program integrity approach and to also allow the agency to build on and strengthen existing program integrity efforts. CPI has targeted several program areas to help identify, evaluate, and focus resources and projects. These areas are prevention, detection, recovery, and transparency and accountability. MIP is led by the Director of the Medicare Program Integrity Group within CPI. However, the MIP activity managers and their staff members are not all located within CPI. There are MIP activity managers located in CPI, the Center for Medicare, and OFM. For example, while the Benefit Integrity activity is managed by CPI, the Medical Review activity is managed by CMS’s Provider Compliance Group within OFM. See appendix I for an organizational chart that identifies the CMS components responsible for the oversight of MIP activities. CMS uses a variety of contractors to perform MIP activities, including a Comprehensive Error Rate Testing (CERT) contractor, an MSP contractor, Medicare administrative contractors (MAC), Medicare drug integrity contractors, the National Supplier Clearinghouse, program safeguard contractors, and zone program integrity contractors (ZPIC). For example, the MACs conduct provider audits, prepayment and postpayment review of Medicare claims, and some provider outreach and education. See appendix II for more information on these contractors and appendix III for the activities they perform. MIP’s activity managers and Director participate in a process to recommend funding allocations for MIP’s activities through the Budget Small Group. The MIP activity managers submit budget request documents to the MIP Budget Small Group to help guide the funding allocation process. The MIP Budget Small Group weighs each request and submits a draft MIP budget request to the CMS Chief Financial Officer and Chief Operating Officer. Following review by these officials, the MIP budget request goes to the CMS Administrator, who reviews the request and makes any desired changes. The MIP budget request is integrated into the agency’s entire proposed budget, which is sent to the Secretary of Health and Human Services. A proposed budget for the entire department goes to OMB for consideration on the President’s behalf. Adjustments may be made by OMB or the President before a final version is submitted to Congress, thus beginning the congressional appropriation process. 
One way that agencies examine the effectiveness of their programs is by measuring performance as mandated by the Government Performance and Results Act of 1993 (GPRA), as amended by the GPRA Modernization Act of 2010. GPRA is designed to improve the effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. Specifically, GPRA requires federal agencies to prepare multiyear strategic plans, annual performance plans, and annual performance reports that provide information on progress achieved. To meet GPRA requirements for fiscal year 2011, CMS established the following: Five high-level strategic objectives, which included the objective of “accurate and predictable payments.” Agencywide GPRA goals, which included three specific to MIP. An agency’s goals should flow from its strategic objectives and be limited to the vital goals that reflect the highest priorities of an agency. Performance measures, which included three specific to MIP, such as reducing the percentage of improper payments made under the Medicare fee-for-service and Part C programs. Performance measures are generally more numerous than the GPRA goals and are used to measure progress toward the goals and objectives. PPACA established additional reporting requirements for MIP. PPACA requires MIP contractors to provide performance statistics to the Secretary of Health and Human Services or the HHS Inspector General upon request. These performance statistics may include the number and amount of overpayments recovered, the number of fraud referrals, and the ROI of activities the contractor undertakes. The act also requires the Secretary to evaluate MIP contractors at least once every 3 years and submit an annual report to Congress on the use of funds for MIP and the effectiveness of the use of those funds. CMS used increased funding it received in fiscal years 2006 through 2010 to expand MIP. From fiscal year 2006 through 2010, CMS received mandatory HIPAA funding along with new DRA funding, and additional discretionary funding in some years, to supplement its existing program integrity activities and support two new activities—Part C and D Oversight and Medi-Medi. Additionally, the agency was able to realize savings in some MIP activities, in part, because of the consolidation of claims administration contracts. CMS redistributed some of these savings to Part C and D Oversight and Benefit Integrity activities. CMS received additional MIP funding during fiscal years 2006 through 2010 that was used to support new activities or existing activities not previously supported by MIP. These included the Part C and D Oversight, Medi-Medi, and Other Medicare Fee-For-Service activities. From fiscal year 2006 through fiscal year 2010, CMS received the maximum amount of mandatory funding stipulated under HIPAA, $720 million per year, as well as additional discretionary and DRA mandatory funding. (See fig. 1.) Prior to fiscal year 2006, CMS used mandatory HIPAA funding for the five original program activities—Benefit Integrity, Provider Audit, Provider Outreach and Education, Medical Review, and MSP. From fiscal year 2006 through fiscal year 2010, CMS continued to use mandatory HIPAA funding predominantly to support the five existing program integrity activities. In addition, beginning in fiscal year 2006, CMS received mandatory DRA funding, which it used to support two new MIP activities—Medi-Medi and Part C and D Oversight.
DRA provided funding for Medi-Medi activities of $12 million in fiscal year 2006, which increased to nearly $60 million by fiscal year 2010. CMS officials told us that the Medi-Medi funding was used to support the ZPICs that work directly with the states on the Medi-Medi project. State participation is voluntary, and states do not directly receive MIP funding. Additional MIP funding also went toward Part C and D Oversight. DRA provided CMS with a onetime amount of $100 million in fiscal year 2006, part of which CMS used to perform new Part C and D Oversight. In fiscal years 2007 and 2008, CMS requested but did not receive discretionary funding to perform Part C and D Oversight. As a result, CMS officials told us that mandatory HIPAA funding for MIP was moved from other MIP activities in fiscal years 2007 and 2008 to the Part C and D Oversight activity. In fiscal years 2009 and 2010, CMS received $147 million and $220 million, respectively, in discretionary funding and used more than half of that funding for Part C and D Oversight. In fiscal year 2009, CMS used about $85 million of the discretionary funding (or 58 percent of the $147 million) to perform Part C and D Oversight that addressed CMS’s priority to deter fraud in the Medicare Part C and D programs. For example, CMS contractors conducted reviews of health care plans entering the Part C and D programs, and program and financial audits of the health care plans participating in the Part C and D programs. In fiscal year 2010, CMS used about $142 million (about 65 percent of the $220 million) to continue performing Part C and D Oversight. CMS moved the remaining discretionary funding, about $62 million (42 percent of $147 million) in fiscal year 2009 and $45 million (20 percent of $220 million) in fiscal year 2010, to the Other Medicare Fee-For-Service activity. CMS used the funding in the Other Medicare Fee-For-Service activity, in part, to fund the system that collects and stores enrollment information for all Medicare providers and suppliers in a national database. CMS officials stated that contractor consolidations resulted in some workload efficiencies and cost savings, which enabled the agency to redistribute some mandatory MIP funding to the Part C and D Oversight and Benefit Integrity activities. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required CMS to transfer all Part A and B claims administration work previously conducted by 51 claims administration contractors to MACs. As a result, from fiscal year 2006 through May 2011, CMS awarded contracts to 15 MACs, which generally covered larger jurisdictions, and replaced most of the contracts with previous claims administration contractors. CMS designed the MAC jurisdictions to achieve operational efficiencies through consolidation. Further, in response to one of our previous recommendations, CMS also consolidated its postpayment recovery efforts into one MSP recovery contractor in October 2006, thereby increasing the efficiency of the MSP activity. CMS officials stated that the operational efficiencies and cost savings resulting from the contractor consolidations enabled the agency to decrease mandatory MIP funding to four of the five existing MIP activities and redistribute those funds to the Part C and D Oversight and Benefit Integrity activities. 
Specifically, from fiscal years 2006 through 2009, CMS redistributed MIP funding from the Provider Outreach and Education, Medical Review, MSP, and Provider Audit activities because these activities were less costly with fewer contractors performing the work. For example, CMS officials told us that the Provider Outreach and Education activity had less overhead and other administrative costs with fewer contractors, which resulted in reduced program expenditures. Provider Outreach and Education spending decreased from almost $65 million in fiscal year 2006 to about $42 million in fiscal year 2010—about 35 percent. Provider Outreach and Education had the largest percentage decrease in MIP funding. CMS officials stated that another factor in the decrease in Provider Outreach and Education spending was a realignment of some of its activities in fiscal year 2007. CMS officials also told us that under the consolidation into MAC jurisdictions, CMS required only one medical director for each MAC jurisdiction, instead of having one medical director for each state, which lowered the cost to perform the Medical Review activity. CMS officials stated that reduced costs allowed CMS to use some of the newly available MIP mandatory funding to address other priorities, such as funding Part C and D Oversight in fiscal years 2007 and 2008, when the agency received no discretionary funding. In addition, the MSP consolidation allowed CMS to reduce MIP funding for the MSP activity beginning in fiscal year 2007, and the agency used the savings to fund other MIP activities. Based on the funding information provided by CMS, we estimated that the agency saved about $86 million from fiscal year 2006 through fiscal year 2010 by consolidating contracted functions within the MSP activity. Benefit Integrity spending increased as a result of the redistribution among program activities. For fiscal years 2006 through 2010, Benefit Integrity had the largest percentage increase in mandatory HIPAA funding among the original five MIP activities, in part, because of cost efficiencies from contractor consolidation being redistributed to this activity. (See fig. 2.) For this period, CMS increased the amount of mandatory MIP HIPAA funds spent on the Benefit Integrity activity, from about $125 million in fiscal year 2006 to about $166 million in fiscal year 2010— about 33 percent. Benefit Integrity funds, among other subactivities, the work of a CMS contractor responsible for reviewing enrollment applications from durable medical equipment suppliers and conducting site visits to confirm these suppliers’ compliance with Medicare regulations. This followed a period when we and HHS’s Office of Inspector General (OIG) had highlighted problems with fraud in the Medicare program, including problems with suppliers of durable medical equipment. It also occurred during a period, starting in 2009, when HHS and DOJ were increasing coordination on investigating and prosecuting health care fraud through the Health Care Fraud Prevention and Enforcement Action Team, an initiative that marshals resources across the government to prevent health care fraud, waste, and abuse; crack down on those who commit fraud; and enhance existing partnerships between HHS and DOJ. 
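The funding shifts described above can be checked with simple arithmetic. The short sketch below is illustrative only and is not CMS code; it recomputes the percentage changes and funding shares cited in this section from the dollar figures reported in the text, and the rounded results match the percentages stated above.

# Illustrative sketch: recompute the percentages cited in this section from the
# reported dollar figures (all amounts in millions, as given in the text).

def pct_change(old, new):
    """Fractional change from old to new, e.g., -0.35 for a 35 percent decrease."""
    return (new - old) / old

def share(part, total):
    """Fraction of a total, e.g., 0.58 for 58 percent."""
    return part / total

if __name__ == "__main__":
    # Provider Outreach and Education spending, fiscal years 2006 and 2010
    print(f"Provider Outreach and Education change: {pct_change(65, 42):.0%}")
    # Benefit Integrity spending, fiscal years 2006 and 2010
    print(f"Benefit Integrity change: {pct_change(125, 166):+.0%}")
    # Shares of discretionary funding used for Part C and D Oversight
    print(f"Fiscal year 2009 Part C and D share: {share(85, 147):.0%}")
    print(f"Fiscal year 2010 Part C and D share: {share(142, 220):.0%}")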
Although CMS measures MIP effectiveness by using the improper payment rates for the Medicare fee-for-service, Part C Medicare Advantage, and Part D prescription drug programs, CMS officials with direct responsibility for MIP generally do not connect the MIP activities and the CMS goals of reducing improper payments. CMS added two new GPRA performance goals for MIP for fiscal year 2012 and is also developing other performance metrics based on PPACA requirements. One way that CMS measures MIP effectiveness is ROI, but the data the agency currently uses to calculate this measure are flawed. Three of CMS’s GPRA goals for MIP in fiscal year 2012 are to reduce the improper payment rates in each part of the Medicare program, which could contribute to the governmentwide effort to reduce improper payments. The GPRA goals, which were also goals in previous fiscal years, are to reduce the percentage of improper payments made in the Medicare fee-for-service, Part C Medicare Advantage, and Part D prescription drug programs. Each goal has a corresponding performance measure. These goals and related measures are particularly important because, as part of the Accountable Government Initiative, the President set goals for federal agencies to reduce overall improper payments by $50 billion and recapture at least $2 billion in improper contract payments and overpayments to health providers by the end of 2012. Because of its size, Medicare represented 38 percent of the governmentwide fiscal year 2010 improper payments. Therefore, CMS’s actions to reduce payment errors in Medicare will affect the success or failure of the governmentwide effort. To respond to the President’s goals, as stated in its performance plan, CMS adopted a target to reduce its improper fee-for-service error rate from 10.5 percent in fiscal year 2010 to 6.2 percent in fiscal year 2012 and the Part C error rate from 14.1 percent in fiscal year 2010 to 13.2 percent in fiscal year 2012. Although CMS has established these GPRA goals as an important way to measure the effectiveness of MIP, our interviews with CMS officials with direct responsibility for MIP activities indicate that these officials generally do not connect MIP activities with the CMS goals of reducing improper payments. Only one of the five MIP activity managers stated that CMS uses the improper payment rates to assess MIP’s overall effectiveness. Some of the remaining four MIP activity managers told us that they were not aware of any overall CMS measures of MIP effectiveness. In addition, some MIP activity managers told us that the improper payment rates did not clearly assess the work done in their activities. MIP activity managers told us that they used a number of other performance measures to assess the effectiveness of the activities for which they had responsibility, including assessments of individual contractors, survey results measuring customer satisfaction, feedback from provider associations, savings from claims processing, funds recovered, and ROI. The statements by agency officials indicate that CMS has not clearly communicated to its staff the relationship between the daily work of conducting MIP activities and the agency’s higher-level performance measures for improper payment reduction. Our prior work has established that responsibility for meeting performance measures should be linked directly to the offices that have responsibility for making programs work. 
A clearly communicated connection between performance measures and program offices helps to reinforce program managers’ accountability and ensure that managers keep in mind the outcomes their organization is striving to achieve. Within MIP, however, activity managers generally did not connect the activity-specific performance measures they use to assess their activity’s effectiveness and the agencywide GPRA performance goals for reducing improper payments. Our prior work found that leading organizations try to link the goals and performance measures for each organizational level to successive levels and ultimately to the organization’s strategic goals. These leading organizations recognized that without clearly communicated, hierarchically linked performance measures, managers and staff throughout the organization will lack straightforward road maps showing how their daily activities can contribute to attaining organizationwide strategic goals. In its FY 2012 Online Performance Appendix, CMS added two new MIP performance goals to the goals related to the improper payment rate for Medicare fee-for-service, Part C, and Part D, but it is also not clear how they link with performance measures currently used by MIP activity managers. (See table 1.) The first new performance goal is related to increasing the number of law enforcement personnel with training and access to near real-time CMS data. The second new performance goal aims to strengthen CMS’s provider enrollment actions to prevent fraudulent providers and suppliers from enrolling in Medicare and ensure that existing providers continue to meet enrollment requirements. The performance measure associated with this goal will be an increase in the percentage of Medicare enrollment site visits to “high-risk” providers and suppliers that result in administrative actions. It is not clear how the revised GPRA goals relate to the performance measures used by MIP activity managers to assess the effectiveness of their activities because CMS has not established such a linkage. Such linkage is helpful to effectively communicate how performance is measured within the agency. In addition to expanding the GPRA performance goals, CMS officials told us that they hired a contractor to develop agencywide performance metrics in response to PPACA requirements. The performance metrics being developed by the contractor include performance metrics for MIP. CMS officials did not provide a date when the new performance metrics will be completed. According to the Director of the Medicare Program Integrity Group, the PPACA performance metrics are broader than the GPRA goals but generally are consistent with the GPRA goals. In addition to the efforts at CMS to increase program integrity within the Medicare program, HHS officials told us that they are developing a departmentwide strategy to address program integrity in all HHS programs. An official noted that measuring the effectiveness of any program integrity effort is a challenge. She said it is difficult to quantify instances where fraud or abuse was avoided because of program integrity efforts. For instance, Provider Outreach and Education on proper billing practices is a MIP activity, but it would be difficult to quantify how much more improper billing would occur without this education. PPACA requires CMS to report annually on the use of funds for MIP and the effectiveness of the use of those funds. One way that CMS measures program effectiveness is through calculation of ROI. 
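Because ROI figures prominently in the discussion that follows, a brief illustration may help. The sketch below uses hypothetical numbers, not CMS data, to show the ratio of program savings to program expenditures and how the result shifts once later invoices and audit corrections revise the expenditure denominator, which is the data problem described next. The savings figure is invented; the expenditure figures are chosen only so that the revision equals the $9.7 million (6.4 percent) difference cited below.

# Hypothetical illustration of the ROI measure: savings divided by expenditures.
# The savings amount is invented; the expenditure figures are picked so that the
# revision matches the $9.7 million (6.4 percent) difference discussed in the text.

def roi(savings, expenditures):
    """Dollars saved per dollar spent on a program integrity activity."""
    return savings / expenditures

if __name__ == "__main__":
    savings = 1_500_000_000            # hypothetical program savings for an activity
    preliminary_spend = 151_600_000    # expenditures as reported in January
    revised_spend = preliminary_spend + 9_700_000  # after later invoices and audit

    print(f"ROI from preliminary expenditures: {roi(savings, preliminary_spend):.1f} to 1")
    print(f"ROI from revised expenditures:     {roi(savings, revised_spend):.1f} to 1")
    print(f"Denominator revision: {9_700_000 / preliminary_spend:.1%}")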
CMS already calculates ROI for each MIP activity, with the exception of Provider Outreach and Education. An overall ROI for MIP is reported to Congress annually in the agency’s budget justification. ROI is calculated as an activity’s program savings divided by its program expenditures. The Director of the Medicare Program Integrity Group told us that the current methodology used by MIP for calculating ROI is likely the method the agency will use to meet the PPACA reporting requirements. The data CMS currently uses to calculate the ROI have two flaws. First, CMS calculates the ROI for each activity in January of each year for the prior fiscal year, but contractors can change expenditure data via the submission of additional invoices or corrections through the time they are audited, which can occur up to 2 years after the end of the fiscal year. The ROI figures calculated based on this information are not subsequently updated. When we compared the expenditure data used to calculate activity-level ROIs and final expenditure data provided by OFM, we found differences of up to $9.7 million. Given that these dollar amounts are used as the denominator for the ROI, the ROI amounts would likely change if they were updated with final expenditure data. The $9.7 million difference, for example, represented a 6.4 percent increase in the program expenditures. Second, ROIs for activities conducted by MACs are potentially inaccurate because MACs have discretion to direct MIP funding among the activities they perform, and CMS does not have reliable information to determine the exact amount spent by each MAC on individual MIP activities. CMS officials told us that they were aware of the issue and were making changes to the data collection system so that CMS could calculate actual spending data. As of May 2011, these officials were unable to estimate when the change to the data collection system would be implemented. Decisions about recommendations for how MIP funding should be allocated among the various activities and subactivities are based on a variety of factors. Based on the budget request documents we reviewed, the CMS MIP Budget Small Group may consider any or all of the following factors: prior year’s approved funding levels and requested levels for the current and following fiscal year; rationale for the increase or decrease in requested funding; description and justification of the subactivity; agency performance goals, including the strategic objective and GPRA goal, that the subactivity is intended to meet; and consequence of not funding the subactivity. We reviewed 36 budget request documents for subactivities funded in fiscal year 2010 and found that 11 cited the reduction of the fee-for-service improper payment rate as the GPRA goal the subactivity was intended to meet. It is difficult to determine the factors CMS considers when allocating MIP funds beyond those listed in the budget request documents. There is no record of why submitted subactivities were funded or not funded. Also, CMS has no policies or procedures in place that outline how decisions about funding allocations should be made for MIP. CMS officials told us that a subactivity’s effectiveness may be discussed orally at the meetings, though there is no documentation substantiating this. A budget official in CMS told us that when allocating MIP funds, the MIP Budget Small Group tries to focus on where the problems are in each area and then determines how to efficiently spend the money.
For instance, he said that in the past the process for allocating MIP funding within the Part C and D Oversight activity had been difficult because the subactivities were new, and consequently, there were no baseline ROI data available. This same official said that he thought the allocation process for Part C and D Oversight would become more data driven as program savings data become available, which will allow the agency to calculate ROIs for the Part C and D Oversight subactivities. The administration has made reducing the governmentwide improper payment rate a priority. CMS must play a strong role in this effort because, even without Part D, Medicare’s improper payments constitute more than a third of total federal government improper payments. As the CMS program with the goal to reduce Medicare’s improper payments, MIP will be central to the agency’s effort to reduce the Medicare improper payment rates. CMS will need a strong, concerted effort on the part of staff and contractors working on MIP activities to achieve the improper payment reduction goals the agency has set for itself, and MIP staff will need to understand how their work supports these goals and any additional goals developed in response to PPACA requirements. Further, a clear focus on reducing improper payments should be central to MIP budget allocations. Because at least some information is presented orally at the MIP Budget Small Group meetings, we cannot determine the extent to which the risk of improper payments and effectiveness of MIP activities in addressing that risk are discussed during the process. We continue to believe that consideration of how MIP activities will reduce the risk of improper payments and their effectiveness in doing so should be an important part of the funding process and encourage CMS to make that a priority. As we noted in our 2006 report, ROI is a useful method for assessing the effectiveness of MIP activities. However, such reporting is valuable only if the ROI figures reported are reliable. Currently, the data used to calculate the ROI are flawed because the ROI calculations are not updated beyond the end of a fiscal year to account for changes in MIP expenditure data, and CMS does not currently have a way to account for the exact amount of MIP funds MACs spend on individual MIP activities. CMS officials acknowledged the shortcomings of the MAC expenditure data and noted that they were implementing changes to the applicable data collection system to more accurately capture MAC expenditures. Expeditiously completing this task and ensuring that final expenditure data are used to update ROI calculations will be essential to ensuring reliability in ROI reporting. We are making three recommendations to CMS. To enhance accountability and sharpen the focus of the agency on reducing improper payments, we recommend that the Administrator of CMS clearly communicate to staff the linkage between GPRA and PPACA performance measures related to the reduction in improper payments and other measures used to determine the performance of MIP activities. 
To enhance the reliability of data used to calculate the MIP ROI, we recommend that the Administrator of CMS take the following two actions: (1) periodically update ROI calculations after contractor expenses have been audited to account for changes in expenditure data reported to CMS and publish a final ROI after data are complete, and (2) expeditiously complete the implementation of data system changes that will permit CMS to capture accurate MAC spending data, thereby helping to ensure an accurate ROI. We provided a draft of this report to HHS for review, and in its written comments, HHS concurred with our recommendations. (HHS’s written comments are reprinted in app. IV.) HHS noted that CMS has expanded its efforts to ensure that GPRA goals become an integral part of its overall management culture, including management of MIP activities. In addition, HHS stated that with the introduction of PPACA, the department is developing performance metrics that are in addition to, and align with, GPRA goals. CMS concurred with our recommendation to clearly communicate to staff the linkage between GPRA and PPACA performance measures related to the reduction in improper payments and other measures used to determine the performance of MIP activities. CMS stated that the agency recently established the position of the Chief Performance Officer to provide leadership, technical direction, and guidance in the development, implementation, communication, and operation of a comprehensive, CMS-wide performance management program. CMS also summarized other agency activities under way to assess program effectiveness, such as developing a new online data tool to report on the progress of key performance indicators, including those related to program integrity. CMS concurred with our recommendation to periodically update ROI calculations after contractor expenses have been audited to account for changes in expenditure data reported to CMS and publish a final ROI after data are complete. According to CMS, the agency will update the ROI when there has been a material change in the data used in the calculation and, at a minimum, will revisit the ROI annually to account for revisions in contractor cost reports and updated savings information. CMS also highlighted the complexities of estimating cost data for the MACs for purposes of the ROI. CMS also concurred with our recommendation to complete the implementation of data systems changes that will permit CMS to capture accurate MAC spending data, thus helping to ensure the accuracy of the ROI. CMS stated that the agency will convene an internal work group consisting of staff from several components to explore more efficient ways to accumulate MAC cost data and calculate ROI performance statistics. CMS also noted that some changes to the cost reporting system for contractor cost submissions have already been completed, particularly in the area of medical review cost reporting. However, the agency plans to pursue a full assessment of the costs reported across all of the MIP functions performed by the MACs to ensure that any additional changes are identified and implemented. We are encouraged by CMS’s plans to implement our recommendations and believe that doing so will lead to a better understanding by the agency and Congress of MIP’s effectiveness. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the Secretary of Health and Human Services, Administrator of CMS, appropriate congressional committees, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The Comprehensive Error Rate Testing (CERT) contractor establishes error rates and estimates of improper payments for Medicare, which the Centers for Medicare & Medicaid Services (CMS) uses to assess the performance of MIP. Medicare administrative contractors (MAC) perform medical review of claims, identification and recovery of improper payments, provider audits, provider education, and screening of beneficiary complaints related to alleged fraud. MACs use information generated by the CERT contractor to identify how to target their improper payment prevention activities. In addition to performing these program integrity activities, MACs process Medicare claims and conduct other claims-related activities, such as answering provider inquiries and recouping overpayments. Medicare drug integrity contractors (MEDIC) are tasked with identifying potential fraud and abuse in Parts C and D of the Medicare program and referring cases to the Department of Health and Human Services’ Office of Inspector General (OIG) or Department of Justice as necessary. MEDICs are also responsible for auditing the fraud, waste, and abuse compliance programs that are a requirement for participation as a Part D provider. The Medicare secondary payer (MSP) contractors are responsible for researching and conducting all MSP claim investigations. In addition to this role in MIP, the MSP contractor identifies all health insurance held by Medicare beneficiaries and coordinates the payment process. The National Supplier Clearinghouse is responsible for reviewing enrollment applications from durable medical equipment suppliers and conducting site visits to confirm these suppliers’ compliance with Medicare regulations. Program safeguard contractors (PSC) perform benefit integrity subactivities for Parts A and B of Medicare to identify cases of suspected fraud and take action to ensure that Medicare funding is not inappropriately paid and that any mistaken payments are identified. Zone program integrity contractors (ZPIC) will eventually be responsible for performing benefit integrity subactivities for claims under Parts A, B, C, and D of the Medicare program. CMS is currently in the process of replacing the PSCs with ZPICs. Benefit Integrity includes subactivities designed to deter and detect Medicare fraud by conducting data analysis of claims. Provider Audit includes desk reviews, audits, and final settlement of institutional provider cost reports. Medicare Secondary Payer includes subactivities designed to identify claims that were mistakenly billed to Medicare when beneficiaries have primary sources of payment that should have paid the claims. Medical Review includes both automated and manual prepayment and postpayment reviews of individual Medicare claims to determine whether services are legitimate, covered, medically reasonable, and necessary.
Provider Outreach and Education includes training for providers, such as hospitals and physicians that serve Medicare beneficiaries, on procedures to comply with Medicare rules and regulations. Medicare-Medicaid Data Match Project is a joint effort between CMS and states that participate voluntarily to analyze claims for individuals with both Medicare and Medicaid coverage to identify providers with aberrant Medicare and Medicaid billing patterns. Part C and D Oversight includes subactivities designed to address improper payments in the Medicare private health plan program (Part C) and outpatient prescription drug benefit (Part D). Other Medicare Fee-For-Service includes a variety of Medicare fee-for-service–related subactivities not captured by other activities. In addition to the contact named above, key contributors to this report were Kay L. Daly, Director; Sheila Avruch, Assistant Director; Phillip McIntyre, Assistant Director; Sabrina Springfield, Assistant Director; Lori Achman; Nicole Dow; Emily Loriso; Chelsea Lounsbury; Roseanne Price; Andrea Richardson; and Jennifer Whitworth.
The Medicare program makes about $500 billion in payments per year and continues to have a significant amount of improper payments--almost $48 billion in fiscal year 2010. The Centers for Medicare & Medicaid Services' (CMS) Medicare Integrity Program (MIP) is designed to identify and address fraud, waste, and abuse, which are all causes of improper payments. MIP's authorizing legislation provided funding for its activities, and subsequent legislation provided additional funding. GAO was asked to report on how effectively CMS is using MIP funding to address Medicare program integrity. GAO examined (1) how CMS used MIP funding to support the program's activities from fiscal years 2006 through 2010, (2) how CMS assesses the effectiveness of MIP, and (3) factors CMS considers when allocating MIP funding. GAO analyzed CMS budget and other documents, interviewed CMS officials, and examined the agency's method of calculating return on investment (ROI), a performance measure used by CMS to measure the effectiveness of MIP activities. CMS used the increase in total MIP funding received, from $832 million in fiscal year 2006 to $1 billion in fiscal year 2010, to expand MIP's activities. The additional funding supported oversight of Medicare Part C (Medicare benefits managed through private plans) and Part D (the outpatient prescription drug benefit) and agency efforts to examine the claims of Medicare beneficiaries who also participate in Medicaid--a joint federal-state health care program for certain low-income individuals. CMS officials also reported that CMS was able to move some funding from activities, such as provider audit, to other activities because of savings achieved from consolidating contractors. The largest percentage increase from this redistribution went to benefit integrity activities, which aim to deter and detect Medicare fraud through proactive data analysis and coordination with law enforcement. Although CMS has reported that the agency measures MIP's performance with goals related to reductions in the improper payment rates for Medicare fee-for-service, Part C, and Part D, CMS officials with direct responsibility for MIP generally do not connect measurements of effectiveness of MIP activities with the CMS goals of reducing improper payments.
These goals to reduce improper payments, which were reported as goals previously and for fiscal year 2012, are particularly important in light of the President's Accountable Government Initiative, which aims to reduce overall improper payments by $50 billion by the end of 2012. In interviews with GAO, CMS officials with direct responsibility for implementing MIP activities generally did not connect the measurement of effectiveness of MIP activities with these CMS goals to reduce improper payments and instead cited other measures of effectiveness. This suggests that CMS has not clearly communicated to its staff the relationship between the daily work of conducting MIP activities and the agency's improper payment reduction performance goals. Because MIP will be central to CMS's efforts to reduce Medicare improper payments, MIP staff need to understand how their work supports these goals. In addition, the Patient Protection and Affordable Care Act requires CMS to report annually on the use of funds for MIP and the effectiveness of the use of those funds. One way that CMS already measures MIP effectiveness is ROI, which CMS calculates as savings from an activity in relation to expenditures. CMS calculates ROI for most of its MIP activities, but the data it uses have two flaws. First, ROI calculations are not updated when program expenditure data, a key component in the ROI calculation, are updated, which may lead to an incorrect ROI. Second, CMS does not have reliable information to determine the amount of MIP spending by activity for one type of contractor that received about 22 percent of total MIP funding in fiscal year 2010. It will be important for CMS to correct these flaws to ensure reliability in ROI reporting. CMS considers a variety of factors when allocating MIP funding. Based on a review of the documents submitted to justify funding of specific MIP activities, CMS may consider the prior year's funding level, the consequence of not funding, and the performance goal that the activity is intended to meet. GAO recommends that CMS communicate the linkage between MIP activities and the goals for reducing improper payments and that CMS expeditiously improve the reliability of data used to calculate ROI. The Department of Health and Human Services concurred with these recommendations. |
In May 1997, we reported on DOD’s actions to improve deployment health surveillance before, during, and after deployments, focusing on Operation Joint Endeavor, which was conducted in the countries of Bosnia-Herzegovina, Croatia, and Hungary. We commented on the provisions of a joint medical surveillance policy draft that called for a comprehensive DOD-wide medical surveillance capability to monitor and assess the effects of deployments on servicemembers’ health. DOD subsequently finalized its joint medical surveillance policy in August 1997. Our 1997 review disclosed problems with the Army’s implementation of the medical surveillance plan for Operation Joint Endeavor in the following areas: Medical assessments. Many Army personnel who should have received post-deployment medical assessments did not receive them, and the assessments that were completed were frequently done late. Of the 618 servicemembers in the 12 Army units whose medical records we reviewed, 24 percent did not receive in-theater post-deployment medical assessments, and 21 percent did not receive home station post-deployment medical assessments. Servicemembers who received home station post-deployment medical assessments received them, on average, nearly 100 days after they left theater instead of within 30 days as required by the plan. Further, pre-deployment blood serum samples were not available for 9.3 percent of the 26,621 servicemembers who had deployed to Bosnia as of March 12, 1996. The most recent blood samples for 6.4 percent of servicemembers with pre-deployment blood samples were more than 5 years old. Medical record keeping. Many of the servicemembers’ medical records that we reviewed were incomplete and missing documentation of in-theater post-deployment medical assessments, medical visits during deployment, and receipt of an investigational new vaccine. More specifically, we found that 91 of the 473 servicemembers (19 percent) with a post-deployment in-theater medical assessment and 9 of the 491 servicemembers (1.8 percent) with a post-deployment home unit medical assessment did not have the assessments documented in their medical records. Furthermore, about 29 percent of the 50 battalion aid station visits we reviewed were not documented in the members’ permanent medical records. Finally, 141 of 588 servicemembers (24 percent) who received an investigational new vaccine did not have the immunization documented in their medical records. Centralized database. The centralized database for collecting in-theater and home unit post-deployment medical assessments was incomplete for many Army personnel. More specifically, the database omitted 12 percent of the in-theater medical assessments done and 52 percent of the home unit medical assessments done for the 618 servicemembers whose records we reviewed. Deployment information. DOD officials considered the database used for tracking the deployment of Air Force and Navy personnel inaccurate. “(a) SYSTEM REQUIRED—The Secretary of Defense shall establish a system to assess the medical condition of members of the armed forces (including members of the reserve components) who are deployed outside the United States or its territories or possessions as part of a contingency operation (including a humanitarian operation, peacekeeping operation, or similar operation) or combat operation.
“(b) ELEMENTS OF SYSTEM—The system described in subsection (a) shall include the use of predeployment medical examinations and postdeployment medical examinations (including an assessment of mental health and the drawing of blood samples) to accurately record the medical condition of members before their deployment and any changes in their medical condition during the course of their deployment. The postdeployment examination shall be conducted when the member is redeployed or otherwise leaves an area in which the system is in operation (or as soon as possible thereafter). “(c) RECORDKEEPING—The results of all medical examinations conducted under the system, records of all health care services (including immunizations) received by members described in subsection (a) in anticipation of their deployment or during the course of their deployment, and records of events occurring in the deployment area that may affect the health of such members shall be retained and maintained in a centralized location to improve future access to the records. “(d) QUALITY ASSURANCE—The Secretary of Defense shall establish a quality assurance program to evaluate the success of the system in ensuring that members described in subsection (a) receive predeployment medical examinations and postdeployment medical examinations and that the recordkeeping requirements with respect to the system are met.” As set forth above, these provisions require the use of pre-deployment and post-deployment medical examinations to accurately record the medical condition of servicemembers before deployment and any changes during their deployment. In a June 30, 2003, correspondence with the General Accounting Office, the Assistant Secretary of Defense for Health Affairs stated that “it would be logistically impossible to conduct a complete physical examination on all personnel immediately prior to deployment and still deploy them in a timely manner.” Therefore, DOD required both pre- and post-deployment health assessments for servicemembers who deploy for 30 or more continuous days to a land-based location outside the United States without a permanent U.S. military treatment facility. Both assessments use a questionnaire designed to help military healthcare providers in identifying health problems and providing needed medical care. The pre-deployment health assessment is generally administered at the home station before deployment, and the post-deployment health assessment is completed either in theater before redeployment to the servicemember’s home unit or shortly upon redeployment. As a component of medical examinations, the statute quoted above also requires that blood samples be drawn before and after a servicemember’s deployment. DOD Instruction 6490.3, August 7, 1997, requires that a pre-deployment blood sample be obtained within 12 months of the servicemember’s deployment. However, it requires the blood samples be drawn upon return from deployment only when directed by the Assistant Secretary of Defense for Health Affairs. According to DOD, the implementation of this requirement was based on its judgment that the Human Immunodeficiency Virus serum sampling taken independent of deployment actions is sufficient to meet both pre- and post-deployment health needs, except that more timely post-deployment sampling may be directed when based on a recognized health threat or exposure. Prior to April 2003, DOD did not require a post-deployment blood sample for servicemembers supporting the OEF and OJG deployments. 
In April 2003, DOD revised its health surveillance policy for blood samples and post-deployment health assessments. Effective May 22, 2003, the services are required to draw a blood sample from each redeploying servicemember no later than 30 days after arrival at a demobilization site or home station. According to DOD, this requirement for post-deployment blood samples was established in response to an assessment of health threats and national interests associated with current deployments. The department also revised its policy guidance for enhanced post-deployment health assessments to gather more information from deployed servicemembers about events that occurred during a deployment. More specifically, the revised policy requires that a trained health care provider conduct a face-to-face health assessment with each returning servicemember to ascertain (1) the individual’s responses to the health assessment questions on the post-deployment health assessment form; (2) the presence of any mental health or psychosocial issues commonly associated with deployments; (3) any special medications taken during the deployment; and (4) concerns about possible environmental or occupational exposures. The Army and Air Force did not comply with DOD’s force health protection and surveillance requirements for many of the servicemembers in our samples at the selected installations we visited. Specifically, these Army and Air Force servicemembers were missing: pre-deployment and/or post-deployment health assessments; evidence of receiving one or more of the pre-deployment immunizations required for their deployment location; and other pre-deployment requirements related to tuberculosis screening and blood serum sample storage. Also, servicemembers’ permanent medical records were missing required health-related information, and DOD’s centralized database did not include documentation of servicemember health-related information. Neither the installations nor DOD had monitoring and oversight mechanisms in place to help ensure that the force health protection and surveillance requirements were met for all servicemembers. We found that the percentage of servicemembers missing one or both of their pre- and post-deployment assessments ranged from 38 to 98 percent across our samples. For example, at Fort Campbell for the OEF deployment, we found that 68 percent of the 222 active duty servicemembers in our sample were missing either one or both of the required pre-deployment and post-deployment health assessments. The results of our statistical samples for the deployments at the installations visited are depicted in figure 1. For those servicemembers in our samples who had completed pre- or post-deployment health assessments, we found that as many as 45 percent of the assessments were not completed on time in accordance with requirements (see fig. 2). DOD policy requires that servicemembers complete a pre-deployment health assessment form within 30 days of their deployment and a post-deployment health assessment form within 5 days of redeployment back to their home station. These time frames were established to allow time to identify and resolve any health concerns or problems that may affect the ability of the servicemember to deploy, and to promptly identify and address any health concerns or problems that may have arisen during the servicemember’s deployment. Not all health assessments were reviewed by a health care provider as required, as shown in figure 3.
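To make the timeliness rules concrete, the following sketch checks a single servicemember's record against the windows described above: a pre-deployment health assessment within 30 days before deployment, a post-deployment health assessment within 5 days of return to the home station, and a pre-deployment blood serum sample drawn within the prior 12 months. The dates and record layout are hypothetical and do not come from any DOD system; an installation-level review would apply checks like these across a sample of records rather than a single one.

# Hypothetical timeliness check against the windows described in the text.
# Dates and the record layout are invented for illustration.

from datetime import date

def days_between(earlier, later):
    return (later - earlier).days

def check_record(deploy, redeploy, pre_assessment, post_assessment, serum_draw):
    """Return a dict mapping each rule to True (met) or False (not met)."""
    return {
        "pre-deployment assessment within 30 days before deployment":
            pre_assessment is not None and 0 <= days_between(pre_assessment, deploy) <= 30,
        "post-deployment assessment within 5 days of redeployment":
            post_assessment is not None and 0 <= days_between(redeploy, post_assessment) <= 5,
        "blood serum sample within 12 months before deployment":
            serum_draw is not None and 0 <= days_between(serum_draw, deploy) <= 365,
    }

if __name__ == "__main__":
    results = check_record(
        deploy=date(2002, 3, 1),
        redeploy=date(2002, 9, 15),
        pre_assessment=date(2002, 2, 20),   # 9 days before deployment: on time
        post_assessment=date(2002, 10, 30), # 45 days after return: late
        serum_draw=date(2000, 12, 1),       # roughly 15 months old: too old
    )
    for rule, met in results.items():
        print(f"{'MET' if met else 'NOT MET'}: {rule}")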
DOD policy requires that pre-deployment and post-deployment health assessments be reviewed immediately by a health care provider to identify any medical care needed by the servicemember. The services did not refer some servicemember health assessments that indicated a need for further consultation. According to DOD policy, a medical provider, namely a physician, physician’s assistant, nurse, or independent duty medical technician, is required to further review a servicemember’s need for specialty care when the member’s pre-deployment and/or post-deployment health assessment indicates health concerns such as unresolved medical or dental problems or plans to seek mental health counseling or care. This follow-up may take the form of an interview or examination of the servicemember, and forms the basis of a decision as to whether a referral for further specialty care is warranted. In our samples, the number of assessments that indicated a health concern was relatively small, but large percentages of these assessments were not referred for further specialty care. For example, our sample at Travis Air Force Base included five pre-deployment health assessments that indicated a health concern, but four of the five (80 percent) were not referred for further specialty care. Noncompliance with the requirement for pre-deployment health assessments may result in servicemembers with existing health problems or concerns being deployed with unaddressed health problems. Also, failure to complete post-deployment health assessments may risk a delay in obtaining appropriate medical follow-up attention for a health problem or concern that may have arisen during or following the deployment. Based on our samples, the services did not fully meet immunization and other pre-deployment requirements. Evidence of receipt of pre-deployment immunizations was missing from many servicemembers’ medical records. Servicemembers missing the required immunizations may not have the immunization protection they need to counter theater disease threats. Based on our review of servicemember medical records for the deployments at the four installations we visited, we found that between 14 and 46 percent of the servicemembers were missing at least one of their required immunizations prior to deployment (see fig. 4). Furthermore, as many as 36 percent of the servicemembers were missing two or more of their required immunizations. The U.S. Central Command required the following pre-deployment immunizations for all servicemembers who deployed to Central Asia in support of OEF: hepatitis A (two-shot series); measles, mumps, and rubella; polio; tetanus/diphtheria within the last 10 years; yellow fever within the last 10 years; typhoid within the last 5 years; influenza within the last 12 months; and meningococcal within the last 5 years. For OJG deployments, the U.S. European Command required the same immunizations cited above, with the exception of the yellow fever inoculation, which was not required for Kosovo. Figure 5 indicates that 7 to 40 percent of the deploying servicemembers in our review were missing a current tuberculosis screening. A screening is deemed “current” if it occurred within 1 or 2 years prior to deployment, depending on the command’s requirement. Specifically, the U.S. Central Command required servicemembers deploying to Central Asia in support of OEF to be screened for tuberculosis within 12 months of deployment. For OJG deployments, the U.S.
European Command required Army and Air Force servicemembers to be screened for tuberculosis within 24 months of deployment. U.S. Central Command and U.S. European Command policies require that deploying servicemembers have a blood serum sample in the serum repository not older than 12 months prior to deployment. While nearly all deploying servicemembers had blood serum samples held in the Armed Services Serum Repository prior to deployment, as many as 29 percent had serum samples that were too old (see table 1). The samples that were too old were, on average, from 2 to 15 months out of date. Servicemembers’ permanent medical records were not complete, and DOD’s centralized database did not include documentation of servicemember health-related information. Many servicemembers’ permanent medical records at the Army and Air Force installations we visited did not include documentation of completed health assessments and servicemember visits to Army battalion aid stations. Similarly, the centralized deployment record database did not include many of the deployment health assessments and immunization records that we found in the servicemembers’ medical records at the installations we visited. DOD policy requires that the original completed pre-deployment and post-deployment health assessment forms be placed in the servicemember’s permanent medical record and that a copy be forwarded to AMSA. Figure 6 shows that some completed assessments we found at AMSA and at the U.S. Special Operations Command for servicemembers in our samples were not documented in the servicemembers’ permanent medical records, ranging from 8 to 100 percent of pre-deployment health assessments and from 11 to 62 percent of post-deployment health assessments. Army and Air Force policies also require documentation in the servicemember’s permanent medical record of all visits to in-theater medical facilities. Except for the OEF deployment at Fort Drum, officials were unable to locate or access the sign-in logs for servicemember visits to in-theater Army battalion aid stations and to Air Force expeditionary medical support for the OEF and OJG deployments at the installations we visited. Consequently, we limited the scope of our review to two battalion aid stations for the OEF deployment at Fort Drum. We found that 39 percent of servicemember visits to one battalion aid station and 94 percent to the other were not documented in the servicemember’s permanent medical record. Representatives of the two battalion aid stations said that the missing paper forms documenting the servicemember visits may have been lost en route to Fort Drum. Specifically, a physician’s assistant for one of these battalion aid stations said that the battalion aid station moved three times in theater and that each time the paper forms used to document in-theater visits were boxed and moved with it; the forms missing from servicemembers’ medical records may have been lost during these moves. The lack of complete and accurate medical records documenting all medical care for the individual servicemember complicates servicemembers’ post-deployment medical care. For example, accurate medical records are essential for the delivery of high-quality medical care and important for epidemiological analysis following deployments.
According to DOD health officials, the lack of complete and accurate medical records complicated the diagnosis and treatment of servicemembers who experienced post-deployment health problems that they attributed to their military service in the Persian Gulf in 1990-91. DOD is implementing the Theater Medical Information Program (TMIP) that has the capability to electronically record and store in-theater patient medical encounter data. TMIP is currently undergoing operational testing by the military services and DOD intends to begin fielding TMIP during the first quarter of fiscal year 2004. Based on our samples, DOD’s centralized database did not include documentation of servicemember health-related information. As set forth above, Public Law 105-85, enacted November 1997, requires the Secretary of Defense to retain and maintain health-related records in a centralized location. This includes records for all medical examinations conducted to ascertain the medical condition of servicemembers before deployment and any changes during their deployment, all health care services (including immunizations) received in anticipation of deployment or during the deployment, and events occurring in the deployment area that may affect the health of servicemembers. A February 2002 Joint Staff memorandum requires the services to forward a copy of the completed pre-deployment and post-deployment health assessments to AMSA for centralized retention. Also, the U.S. Special Operations Command (SOCOM) requires deployment health assessments for special forces units to be sent to the Command for centralized retention in the Special Operation Forces Deployment Health Surveillance System. Figure 7 depicts the percentage of pre- and post-deployment health assessments and immunization records we found in the servicemembers’ medical records that were not available in a centralized database at AMSA or SOCOM. Health-related documentation missing from the centralized database ranged from 0 to 63 percent for pre-deployment health assessments, 11 to 75 percent for post-deployment health assessments, and 8 to 93 percent for immunizations. All but one of the servicemembers in our sample at Hurlburt Field were special operations forces. A SOCOM official told us that pre-deployment and post-deployment health assessment forms for servicemembers in special operations force units are not sent to AMSA because the health assessments may include classified information that AMSA is not equipped to receive. Consequently, SOCOM retains the deployment health assessments in its classified Special Operations Forces Deployment Health Surveillance System. Also, a SOCOM medical official told us that the system does not include pre-deployment immunization data. A Deployment Health Support Directorate official told us that the Directorate is examining how to remove the classified information from the deployment health assessments so that SOCOM can forward the assessments to AMSA. For presentation in figure 7, we combined the health assessment and immunization data we found at AMSA and SOCOM for Hurlburt Field. An AMSA official believes that missing documentation in the centralized database could be traced to the services’ use of paper copies of deployment health assessments that installations are required to forward to the centralized database, and the lack of automation to record servicemembers’ pre-deployment immunizations. 
DOD has ongoing initiatives to electronically automate the deployment health assessment forms and the recording of servicemember immunizations. For example, DOD is implementing a comprehensive electronic medical records system, known as the Composite Health Care System II, which includes pre- and post-deployment health assessment forms and the capability to electronically record immunizations given to servicemembers. DOD has deployed the system at five sites and will be seeking approval in August/September 2003 for worldwide deployment. DOD officials believe that the electronic automation of the deployment health-related information will lessen the burden of installations in forwarding paper copies and the likelihood of information being lost in transit. DOD does not have an effective quality assurance program to provide oversight of, and ensure compliance with, the department’s force health protection and surveillance requirements. Moreover, the installations we visited did not have ongoing monitoring or oversight mechanisms to help ensure that force health protection and surveillance requirements were met for all servicemembers. We believe that the lack of such a system was a major cause of the high rate of noncompliance we found at the units we visited. The services are currently developing quality assurance programs designed to ensure that force health protection and surveillance policies are implemented for servicemembers. Although required by Public Law 105-85 to establish a quality assurance program, neither the Assistant Secretary of Defense for Health Affairs nor the offices of the Surgeons General of the Army or Air Force had established oversight mechanisms that would help ensure that force health protection and surveillance requirements were met for all servicemembers. Following our visit to Fort Drum in October 2002, the Army Surgeon General wrote a memorandum in December 2002 to the commanders of the Army Regional Medical Commands that expressed concern related to our sample results at Fort Drum. He emphasized the importance of properly documenting medical care and directed them to accomplish an audit of a statistically significant sample of medical surveillance records of all deployed and redeployed soldiers at installations supported by their regional commands, provide an assessment of compliance, and develop an action plan to improve compliance with the requirements. At three of the four installations we visited, officials told us that new procedures were implemented that they believe will improve compliance with force health protection and surveillance requirements for deployments occurring after those we reviewed. Specifically, following our visit to Fort Drum in October 2002, Fort Drum medical officials designed a pre-deployment and post-deployment checklist patterned after our review that is being used as part of processing before servicemembers are deployed and when they return. The officials told us that this process has improved their compliance with force health protection and surveillance requirements for deployments subsequent to our visit. Also, the hospital commander at Fort Campbell told us that they implemented procedures that now require all units located at Fort Campbell to use the hospital’s medical personnel in their processing of servicemembers prior to deployment. 
The hospital commander believes that this new requirement will improve compliance with the force health protection and surveillance requirements at Fort Campbell because the medical personnel will now review whether all requirements have been met for the deploying servicemembers. At Hurlburt Field, officials told us that they implemented a new requirement in November 2002 to withhold payment of travel expenses and per diem to re-deploying servicemembers until they complete the post-deployment health assessment. Officials believe that this change will improve servicemembers’ completion of the post-deployment health assessments. While it is noteworthy that these installations have implemented changes that they believe will improve their compliance, the actual measure of improvements over time cannot be known unless the installations perform periodic reviews of servicemembers’ medical records to identify the extent of compliance with deployment health requirements. In March 2003, we briefed the Subcommittee on Total Force, House Committee on Armed Services, about our interim review results at selected military installations. Subsequently, at a March 2003 congressional hearing, the Subcommittee discussed our interim review results with the Assistant Secretary of Defense for Health Affairs and the services’ Surgeons General. Based on our interim results that DOD was not meeting the full requirement of the law and the military services were not effectively carrying out many of DOD’s force health protection and surveillance policies, in May 2003 the House Committee on Armed Services directed the Secretary of Defense to take measures to improve oversight and compliance. Specifically, in its report accompanying the Fiscal Year 2004 National Defense Authorization Act, the Committee directed the Secretary of Defense “… to establish a quality control program to begin assessing implementation of the force health protection and surveillance program, and to provide a strategic implementation plan, including a timeline for full implementation of all policies and programs, to the Senate Committee on Armed Services and the House Committee on Armed Services by March 31, 2004.” In April 2003, the Under Secretary of Defense for Personnel and Readiness issued an enhanced post-deployment health assessment policy that required the services to develop and implement a quality assurance program that encompasses medical record keeping and medical surveillance data. In June 2003, the Office of Assistant Secretary of Defense for Health Affairs’ Deployment Health Support Directorate began reviewing the services’ quality assurance implementation plans and establishing DOD-wide compliance metrics—including parameters for conducting periodic visits—to monitor service implementation. The DMDC deployment database still does not include the deployment information we identified in 1997 as needed for effective deployment health surveillance. In 1997, we reported that knowing the identity of servicemembers who were deployed during a given operation and tracking their movements within the theater of operations are major elements of a military medical surveillance system. The Institute of Medicine reported in 2000 that the documentation of the locations of units and individuals during a given deployment is important for epidemiological studies and for the provision of appropriate medical care during and after deployments. 
This information allows (1) epidemiologists to study the incidence of disease patterns across populations of deployed servicemembers who may have been exposed to diseases and hazards within the theater, and (2) health care professionals to treat their medical problems appropriately. Because of concerns about the accuracy of the DMDC database, we recommended in our 1997 report that the Secretary of Defense direct an investigation of the completeness of the information in the DMDC personnel database and take corrective actions to ensure that the deployment information is accurate for servicemembers who deploy to a theater. DOD's established policies notwithstanding, the services did not report location-specific deployment information to DMDC prior to April 2003 because, according to a DMDC official, the services did not maintain the data. DOD Instruction 6490.3, issued in August 1997, requires DMDC, under the Department's Under Secretary for Personnel and Readiness, to maintain a system that collects information on deployed forces, including daily deployed strength, total and by unit; grid coordinate locations for each unit (company size and larger); and inclusive dates of individual servicemembers' deployments. In addition, the Joint Chiefs of Staff's Memorandum MCM-0006-02, dated February 1, 2002, required combatant commands to provide DMDC with their theater-wide rosters of all deployed personnel, their unit assignments, and the units' geographic locations while deployed. This memorandum stressed that accurate personnel deployment data are needed to assess the significance of medical diseases and injuries in terms of the rate of occurrence among deployed servicemembers. The Under Secretary of Defense for Personnel and Readiness expressed concern about the services' failure to report complete personnel deployment data to DMDC in an October 2002 memorandum. To address the services' lack of reporting to DMDC, the Under Secretary of Defense for Personnel and Readiness established a tri-service working group that outlined a plan of action in March 2003 to address the reporting issues. In July 2003, a DMDC official told us that significant improvements had recently occurred and that all of the services had begun submitting their classified deployment databases—including deployment locations—to DMDC. DMDC is currently reviewing the deployment information submitted by the services to determine its accuracy and completeness and plans to complete this review during the summer of 2003. With regard to DMDC's efforts to create a system for tracking the movements of servicemembers within a given theater of operations, DMDC officials told us that little progress has been made. They said that the primary reason for the lack of progress in developing this system is that the source information has generally not been available from the services and that capturing it may require the development of new tracking systems at the unit level. In June 2003, a DMDC official told us that it had been recently determined that the Air Force had implemented a theater tracking system that may have applicability to the other services. The tracking system—known as the Deliberate and Crisis Action Planning and Execution Segments (DCAPES)—enables field teams to enter classified information about the whereabouts of deployed Air Force personnel at the longitude/latitude level of detail. DMDC began receiving information from this system in April 2003.
The Under Secretary of Defense for Personnel and Readiness is reviewing this system to determine whether it could be used for the same purposes by the other services. Also, DOD is developing the Defense Integrated Military Human Resource System (DIMHRS), which will have the capability to track the movements of all servicemembers and civilians in the theater of operations. As of June 2003, DOD plans to implement this system for the Army by about September 2005 and for the other services by 2007 or early calendar year 2008. While DOD and the military services have established force health protection and surveillance policies, at the units we visited we found many instances of noncompliance by the services. Moreover, because DOD and the services do not have an effective quality assurance program in place to help ensure compliance, these problems went undetected and uncorrected. Continued noncompliance with these policies may result in servicemembers with existing health problems or concerns being deployed with unaddressed health problems or without the immunization protection they need to counter theater disease threats. Failure to complete post-deployment health assessments may risk a delay in obtaining appropriate medical follow-up attention for a health problem or concern that may have arisen during or following the deployment. Similarly, incomplete and inaccurate medical records and deployment databases would likely hinder DOD’s ability to investigate the causes of any future health problems that may arise coincident with deployments. To improve compliance with DOD’s force health protection and surveillance policies, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to establish an effective quality assurance program, as required by section 765 of Public Law 105-85 (10 U.S.C. 1074f), that will ensure that the military services comply with the force health protection and surveillance requirements for all servicemembers. The Department of Defense provided written comments on a draft of this report, which are found in appendix II. DOD concurred with the report’s recommendation. The Assistant Secretary of Defense for Health Affairs commented that his office has already established a quality assurance program for pre- and post-deployment health assessments. This program monitors pre- and post-deployment health assessments and blood samples being archived electronically at the Army Medical Surveillance Activity (AMSA) and assures that indicated referrals on the post-deployment health assessments are being conducted by all the services. However, the Assistant Secretary of Defense for Health Affairs’ comments did not discuss how his office is using the monitoring activities to assure the military services’ compliance with force health protection and surveillance policies. According to the Assistant Secretary of Defense for Health Affairs, the services have implemented their quality assurance programs. The Army has developed automated versions of the pre- and post-deployment health assessment forms, and has established a corporate monitoring system that is built upon deployment personnel rosters and monitored weekly by the Army Surgeon General. The Air Force is now receiving monthly deployment health surveillance compliance reports from its medical treatment facilities, and has scheduled a special compliance study through the Air Force Inspection Agency in fiscal year 2004. 
Navy fleet commanders have implemented their own quality assurance programs, with anticipation of standardization through centralized automated systems. And the Marine Corps has also established unit/command quality assurance procedures. We view these actions as responsive to our recommendation and commend the department for taking quick action to address the compliance issues we found during our audit. However, it remains to be seen how effective these activities will be in ensuring that force health protection and surveillance policies are implemented for all servicemembers. We are sending copies of this report to the Secretary of Defense and the Secretaries of the Army and the Air Force. We will also make copies available to others upon request. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me on (757) 552-8100. Key contributors to this report are listed in appendix III. To meet our objectives, we interviewed responsible officials and reviewed pertinent documents, reports, and information related to force health protection and deployment health surveillance requirements obtained from officials at the Office of the Assistant Secretary of Defense for Health Affairs; the Office of the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness; the Office of the Assistant Secretary of Defense for Reserve Affairs; the Joint Staff; the Marine Corps Force Health Protection Office; and the Offices of the Surgeons General for the Army and Air Force Headquarters in the Washington, D.C., area. We also performed additional work at the Deployment Health Support Directorate, Falls Church, Virginia; the U.S. Army Center for Health Promotion and Preventive Medicine, Aberdeen, Maryland; the Armed Forces Medical Intelligence Center, Fort Dietrick, Maryland; the Army Medical Surveillance Activity, Walter Reed Army Medical Center, Washington, D.C.; the Navy Environmental Health Center in Portsmouth, Virginia; the Defense Manpower Data Center in Monterey, California; and the U.S. Central Command and the U.S. Special Operations Command at MacDill Air Force Base, Tampa, Florida. To determine whether the military services were meeting DOD’s force health protection and surveillance requirements for servicemembers deploying in support of OEF and OJG, we identified DOD and each service’s overall deployment health surveillance policies. We also obtained the specific force health protection and surveillance requirements applicable to all servicemembers deploying to Central Asia in support of OEF from the U.S. Central Command and these requirements for all servicemembers deploying to Kosovo in support of OJG from the U.S. European Command. We tested the implementation of these requirements at selected Army and Air Force installations. To identify locations within each service where we would test implementation of the policies, the Assistant Secretary of Defense for Health Affairs requested the services to identify, by military installation, the number of active duty servicemembers who met the following criteria: For OEF, those servicemembers who deployed to Central Asia for 30 or more continuous days to areas without permanent U.S. military treatment facilities following September 11, 2001, and redeployed back to their home unit by May 31, 2002. For OJG, those servicemembers who deployed to Kosovo for 30 or more continuous days to areas without permanent U.S. 
military treatment facilities from January 1, 2001, and redeployed back to their home unit by May 31, 2002. Based on deployment data obtained from the services, we decided to limit our testing of the force health protection and surveillance policy implementation to selected Army and Air Force military installations with the largest numbers of servicemembers meeting our selection criteria (described above). We limited our review of medical records for servicemembers deploying in support of OJG to the two Army locations. We decided not to review Navy installations because there were only small numbers of servicemembers who met our selection criteria. We decided not to review Marine Corps installations because officials at Marine Corps headquarters had difficulty identifying the number of servicemembers who went ashore for 30 or more continuous days consistent with our selection criteria. The largest deployers for OEF and OJG were selected and are listed below. For OEF: 10th Mountain Division, Fort Drum, N.Y.; 101st Airborne Division, Fort Campbell, Ky.; Travis Air Force Base, Calif.; and Hurlburt Field, Fla. For OJG: 10th Mountain Division, Fort Drum, N.Y., and 101st Airborne Division, Fort Campbell, Ky. For our medical records review, we selected statistical samples of servicemembers at the selected installations to be representative of those deploying from those military installations for those specific operations. For various reasons, medical records were not always available for review. We therefore sampled without replacement to choose additional records when we were unable to meet our sampling threshold of cases for review. Specifically, there were five reasons identified for not being able to physically secure the servicemember's medical record for review: 1. Charged to patient. When a patient visits a clinic (on-post or off-post), the medical record is physically given to the patient. The procedure is that the medical record will be returned by the patient following their clinic visit. 2. Expired term of service. The servicemember separates from the military and the medical record is sent to St. Louis, Missouri, and is therefore not available for review. 3. Record is not accounted for by the medical records department. 4. Permanent change of station. The servicemember is still in the military but has transferred to another base; the medical record transfers with the servicemember. 5. Temporary duty off site. The servicemember has left the military installation but is expected to return; the temporary duty is long enough to warrant that the medical record accompany the servicemember. The sample size for deployments was determined to provide 95 percent confidence with a 5-percent precision (the calculation is illustrated in the sketch following this paragraph). The number of servicemembers in our samples and the applicable universe of servicemembers for the OEF and OJG deployments at the installations visited are shown in table 2. At Fort Campbell, there were only 333 servicemembers identified as having met our criteria based on a redeployment date of May 31, 2002; however, only 8 charts were available for review due to rotation of soldiers to other military locations or departure from the military. It was, therefore, necessary to extend our redeployment date to October 31, 2002. Doing so provided an additional 2,953 servicemembers who met all criteria except for a redeployment by May 31, 2002. At Fort Campbell, there were 92 servicemembers who deployed in support of OJG and met our selection criteria if we extended the redeployment date to October 31, 2002.
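The 95 percent confidence and 5-percent precision targets correspond to a standard sample-size formula for estimating a proportion in a finite population. The sketch below illustrates that calculation; it is our illustration of a commonly used formula, with example universe sizes drawn from figures cited in this report, and is not a description of the exact procedure or software used for the samples discussed above.

```python
import math


def required_sample_size(population, confidence_z=1.96, precision=0.05, p=0.5):
    """Sample size for estimating a proportion to the given precision at the
    given confidence level, using the conservative p = 0.5 and a finite
    population correction."""
    n0 = (confidence_z ** 2) * p * (1 - p) / precision ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                     # finite population correction
    return math.ceil(n)


# Example universe sizes similar to those discussed in this report.
for universe in (333, 2953, 8742):
    print(universe, "->", required_sample_size(universe))
# 333 -> 179, 2953 -> 341, 8742 -> 369
```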
Because the number of servicemembers for OJG at Fort Campbell was small, we reviewed the medical records for all of the servicemembers who were still at Fort Campbell. At each sampled location, we examined servicemember medical records for evidence of the following force health protection and deployment health-related documentation required by DOD's force health protection and deployment health surveillance policies: pre- and post-deployment health assessments; a tuberculosis screening test (within 1 year of deployment for OEF and 2 years for OJG); and the following immunizations: influenza (within 1 year of deployment); measles, mumps, and rubella; meningococcal (within 5 years of deployment); polio; tetanus-diphtheria (within 10 years of deployment); typhoid (within 5 years of deployment); and yellow fever (within 10 years of deployment, not required for OJG). To provide assurances that our review of the selected medical records was accurate, we requested the installations' medical personnel to reexamine those medical records that were missing required health assessments or immunizations and adjusted our results where documentation was subsequently identified. We also requested that installation medical personnel check all possible sources for missing pre- and post-deployment health assessments and immunizations. These sources included the Army's Soldier Readiness Check folders and automated immunization sources, including the Army's Medical Protection System (MEDPROS) and the Air Force's Comprehensive Immunization Tracking Application (CITA). We checked all known possible sources for the existence of deployment health assessments related to servicemembers in our samples. In those instances where we did not find a deployment health assessment, we concluded that the assessments were not completed. Furthermore, installation officials were logistically unable to access the servicemembers' individual records of immunizations, commonly referred to as yellow-shot records, which might have provided documentation for missing immunizations. Consequently, our analysis of the immunization records was based on our examination of the servicemember's permanent medical record and immunizations that were in the Army's MEDPROS and the Air Force's CITA. In analyzing our review results at each location, we considered documentation from all identified sources (e.g., servicemember's medical record, soldier readiness check folder, Army Medical Surveillance Activity, and immunization tracking systems) in presenting data on compliance with deployment health surveillance policies. To identify whether required blood serum specimens were in storage at the Armed Services Serum Repository, we requested that the Army Medical Surveillance Activity staff query the Repository to identify whether the servicemembers in our samples had a blood serum sample in the repository and the date of the specimen. To determine whether the Army and Air Force are documenting in-theater medical interventions in servicemembers' medical records, we requested, at each installation visited for medical records review, the patient sign-in logs for in-theater medical care providers, namely the Army's battalion aid stations and the Air Force's expeditionary medical support, when they were deployed to Central Asia in support of OEF and for the two Army installations we visited that deployed in support of OJG. Officials were unable to locate or access the logs at all of our selected installations except for Fort Drum for the OEF deployment.
Consequently, we were able to perform our planned examination for this objective at only Fort Drum for the OEF deployment. From these logs, we selected a random sample of 36 patient visits from one battalion aid station and 18 patient visits from another battalion aid station. We did not attempt to judge the importance of the patient visit in making our selections. For the selected patient visits, we then reviewed the servicemember's medical record for any documentation—such as the Army's Standard Form 600—of the servicemember's visit to the battalion aid station. To determine whether the Army and Air Force's deployment health-related records are retained and maintained in a centralized location, we requested that officials at the Army Medical Surveillance Activity (AMSA) query the AMSA database for the servicemembers included in our samples at the selected Army and Air Force installations. For servicemembers in our samples, AMSA officials provided us with copies of deployment health assessments and immunization data found in the AMSA database. We analyzed the completeness of the AMSA database by comparing the deployment health assessments and the pre-deployment immunization data we found during our medical records review with those in the AMSA database. Since Air Force special operations force units use Hurlburt Field, we also requested that the U.S. Special Operations Command (SOCOM) query its Special Operation Forces Deployment Health Surveillance System database for servicemembers in our sample at Hurlburt Field for deployment health assessments and pre-deployment immunization data. We then compared the data identified from the SOCOM and AMSA queries with the data we found during our medical records review. To determine whether DOD has corrected problems related to the accuracy and completeness of databases reflecting which servicemembers deployed to certain locations, we interviewed officials within the Deployment Health Support Directorate and the Defense Manpower Data Center and reviewed documentation related to the completeness of deployment databases and planned improvements in capabilities. Our review was performed from June 2002 through July 2003 in accordance with generally accepted government auditing standards. In addition to the individual named above, Steve Fox, Rebecca Beale, Lynn Johnson, William Mathers, Terry Richardson, Kristine Braaten, Grant Mallie, Herbert Dunn, and R.K. Wild made key contributions to this report.

Following the 1990-91 Persian Gulf War, many servicemembers experienced health problems that they attributed to their military service in the Persian Gulf. However, a lack of servicemember health and deployment data hampered subsequent investigations into the nature and causes of these illnesses. Public Law 105-85, enacted in November 1997, required the Department of Defense (DOD) to establish a system to assess the medical condition of service members before and after deployments. GAO was asked to determine whether (1) the military services met DOD's force health protection and surveillance requirements for servicemembers deploying in support of Operation Enduring Freedom (OEF) in Central Asia and Operation Joint Guardian (OJG) in Kosovo and (2) DOD has corrected problems related to the accuracy and completeness of databases reflecting which servicemembers were deployed to certain locations.
The Army and Air Force--the focus of GAO's review--did not comply with DOD's force health protection and surveillance policies for many active duty servicemembers, including the policies that they be assessed before and after deploying overseas, that they receive certain immunizations, and that health-related documentation be maintained in a centralized location. GAO's review of 1,071 servicemembers' medical records from a universe of 8,742 at selected Army and Air Force installations participating in overseas operations disclosed that 38 to 98 percent of servicemembers were missing one or both of their health assessments and 14 to 46 percent were missing at least one of the required immunizations. DOD also did not maintain a complete, centralized database of servicemembers' medical assessments and immunizations. Health-related documentation missing from the centralized database ranged from 0 to 63 percent for pre-deployment assessments, 11 to 75 percent for post-deployment assessments, and 8 to 93 percent for immunizations. There is no effective quality assurance program at the Office of the Assistant Secretary of Defense for Health Affairs or at the Army or Air Force that helps ensure compliance with policies. GAO believes that the lack of such a program was a major cause of the high rate of noncompliance. Continued noncompliance with these policies may result in servicemembers deploying with health problems or delays in obtaining care when they return. Finally, DOD's centralized deployment database is still missing the information needed to track servicemembers' movements in the theater of operations. By July 2003, the department's data center had begun receiving location-specific deployment information from the services and is currently reviewing its accuracy and completeness.
HRSA was established in 1982, and its mission is to improve health and achieve health equity through access to quality services, a skilled health workforce, and innovative programs. HRSA's strategic plan contains four main goals: (1) improve access to quality health care and services, (2) strengthen the health workforce, (3) build healthy communities, and (4) improve health equity. HRSA also has a human capital strategic plan meant to ensure that the agency has the workforce it needs to carry out its mission. That plan contains five main goals: (1) plan for and align the workforce to ensure employees have the right experience and skills to fit the job, (2) support continuous learning, (3) build leadership bench strength, (4) strengthen the performance culture, and (5) improve employee satisfaction. As of September 2013, HRSA was in the process of updating its human capital strategic plan for the 2013 through 2015 timeframe. According to information from HRSA, the agency had appropriations of about $8.1 billion in fiscal year 2013. Since its inception in 1982, HRSA's appropriations have generally increased in real terms. (See fig. 1.) Increases to HRSA's appropriations since fiscal year 2009 can partially be attributed to ARRA and PPACA. According to HRSA, ARRA provided an additional $2.5 billion to the agency from fiscal years 2009 through 2011. HRSA received approximately $7.8 billion through PPACA from fiscal years 2010 through 2013, and is expecting another $400 million in fiscal year 2014, for a total of about $8.2 billion over the 5 years. According to HRSA, in fiscal year 2012 the agency used over 90 percent of its budget on funding for its programs through grants, cooperative agreements, scholarships and loan repayments, and other forms of programmatic funding. In addition to these funding mechanisms, HRSA uses contracts—award mechanisms used to acquire services or property from a non-federal party for the benefit or use of HRSA—to support its operations and programs. Grants constitute one form of federal assistance consisting of payments in cash or in kind to a state or local government or a nongovernmental recipient for a specified purpose. Cooperative agreements are another form of financial assistance similar to grants, but where the federal agency is more involved with the recipient during the performance of the project. HRSA also has programs that offer scholarships to students and educational loan repayment to health care providers in exchange for a commitment to provide care in underserved areas or for underserved populations. Through these mechanisms, HRSA provides funding and support for a wide variety of programs. HRSA's programs include a block grant to fund services for maternal and child health across the country, compensation for people injured by vaccines, grants to a national network of health centers to provide primary health care, loan repayment and scholarships for recruiting and training health care providers who practice in underserved communities, and grants to organizations providing services for people living with HIV/AIDS. As a result of ARRA and PPACA, HRSA has expanded some of its programs, and started new programs in recent years. For example, HRSA expanded its Health Center Program and established a Home Visiting Program to improve coordination of services and outcomes for families living in at-risk communities. HRSA's staff of nearly 1,900 provides oversight, technical assistance, and operational support for the agency's programs.
Its workforce consists of permanent civilian staff, including those within the General Schedule (GS) employment system, the Senior Executive Service (SES), and other government pay plans. HRSA also employs staff from the Commissioned Corps. In addition to permanent civilian and Commissioned Corps staff, HRSA also employs some nonpermanent staff. For example, for its grant review panels, advisory committees, and certain other activities, HRSA may hire individuals for discrete, time-limited activities for which a particular expertise is needed. HRSA has headquarters staff who are assigned to the agency's headquarters in Rockville, Maryland, and "regional staff" who work in 1 of the agency's 10 regional offices or 1 of 2 field locations across the United States and Puerto Rico (see fig. 2). The GS system is a classification and pay system for the majority of civilian federal employees. The GS system has 15 grades—GS-1 (lowest) to GS-15 (highest). SES positions are federal employee positions that are classified above GS-15. Other government pay plans for which HRSA has staff include the federal wage system for hourly, blue-collar employees and a pay plan for physicians and dentists. HRSA's organization consists of the Office of the Administrator and 16 other organizational components—7 programmatic bureaus and 9 cross-cutting operational support offices (see fig. 3). HRSA's Office of the Administrator provides broad leadership and direction to HRSA staff and plans, directs, and interprets major policies, programs, and initiatives for the agency. The Office of the Administrator also makes final decisions about HRSA's organization, staff allocation, budget, and contracts. The office includes HRSA's Administrator, Deputy Administrator, and Senior Advisors. HRSA's seven programmatic bureaus each manage a portfolio of activities dealing with a specific area of health care services, systems, or workforce. Each of HRSA's bureaus is led by an Associate Administrator and Deputy Associate Administrator, who are generally members of the SES. The bureaus are organized into smaller components called divisions or offices that are led by a director, generally a GS-15, who reports to the bureau's Associate Administrator. Some of these divisions and offices are further broken down into subcomponents called branches, which are led by chiefs who report to the division or office directors. HRSA's nine operational support offices provide assistance for the agency's programmatic work and coordination for cross-cutting or agency-wide issues, such as human resources, acquisitions management, and grants administration. (See table 1 for an overview of HRSA's bureaus and offices and app. I for a list of HRSA programs by bureau.) HRSA's underlying organizational structure of bureaus and offices has been in place for some time; however, since 2010, the agency has made several organizational changes. These included creating new organizational components, expanding or otherwise changing the functions of some components, and consolidating functions in order to eliminate a component. In addition, HRSA has made minor organizational changes within bureaus, such as realigning branches or shifting oversight responsibility of certain programs, and the staff responsible for them, between or within bureaus. HRSA officials reported that organizational changes were generally made to improve agency efficiency.
For example, in 2010, HRSA established the Office of Operations to consolidate three previously separate offices: (1) the Office of Information Technology; (2) the Office of Management; and (3) the Office of Financial Management, which consisted of procurement, budget, policy, and control functions. These offices had formerly each reported directly to the Office of the Administrator. With this restructuring, the Chief Operating Officer— a position created in 2010—gained responsibility for oversight of these functions. In at least one instance, HRSA made a change as a result of a legislative requirement, namely, in October 2011 HRSA made its Office of Women’s Health, which was previously located within its Maternal and Child Health Bureau, a separate office in response to a requirement in PPACA that the office be established within the Office of the Administrator. Most recently, as a result of fiscal circumstances, including the sequester which went into effect in March 2013, and an ongoing hiring freeze in effect since January 2013, HRSA eliminated its Office of Special Health Affairs and distributed most of its functions to other existing bureaus and offices. According to HRSA, the elimination of the Office of Special Health Affairs was made to reduce overhead costs and better utilize staff. HRSA has mechanisms in place to share information important for supporting the agency’s mission across various levels of staff in the agency, including among agency leaders, programmatic bureau and operational support office leaders, and staff. These communication mechanisms include the agency’s operational planning process; cross- cutting workgroups and meetings; and regular communications among the Office of the Administrator, leaders in the bureaus and offices, and agency staff. The mechanisms HRSA has in place are consistent with internal control standards for the federal government, which state that effective communications within organizations should occur in a broad sense with a flow of information down, across, and up the organization. HRSA officials have established an annual operational planning process to facilitate the exchange of information across the agency to plan its budget and allocation of other resources. According to HRSA officials, each bureau and office develops a proposal to request contracts, budget, and other resources for the coming fiscal year. Next, these proposals are shared and discussed among all bureau and office leaders to allow for coordination and to reduce the risk of duplication or overlap of resources. Finally, HRSA’s Administrator makes a determination about resource allocation—such as budgets and contracts for the coming fiscal year— which is documented in a decision memo for each bureau and office. Agency officials told us that this process improves efficiency and reduces the chance for duplication of effort among the bureaus and offices. In addition, HRSA has established 20 active workgroups to coordinate across the agency’s bureaus and offices on cross-cutting topics. For example, according to agency officials, HRSA established a workgroup in April 2010— following the passage of PPACA—to coordinate communications and activities related to the implementation of provisions in the act pertaining to HRSA. The workgroup includes senior leaders from across the agency or their designees. 
In November 2012, HRSA established a Standard Operating Procedures Workgroup to monitor the implementation of standard operating procedures related to grantee oversight across bureaus, discuss the status of implementation, and to share successful practices regarding their use. Members include individuals who are tasked with leading the implementation of standard operating procedures within each bureau. Another workgroup—the HRSA Program Integrity Initiative Workgroup, launched in June 2010—is tasked with identifying risks to the agency’s management of programs and working to reduce those risks by initiating new or improved oversight efforts. The workgroup is comprised of representatives from all bureaus and offices. Other established workgroups focus on issues such as providing technical assistance to potential grantees who may be new to the application process, analyzing requests for information technology capital projects, and monitoring performance of the agency’s ongoing technology investments. In addition to the formal workgroups, officials in all the bureaus told us they regularly work with colleagues from other bureaus and offices as needed to coordinate on program areas where topics and issues overlap. HRSA also has mechanisms in place to ensure the flow of information up and down the organizational hierarchy, such as from the Office of the Administrator down to individual bureaus and offices. The Office of the Administrator uses a variety of standing meetings and reporting tools to communicate and exchange information with bureau and office leadership on a broad range of policy, program, and management matters. For example, HRSA’s Administrator holds a weekly senior staff meeting with the leaders of all of HRSA’s bureaus and offices. Topics for discussion include HRSA’s budget, operations, and implementation of PPACA. During these meetings, bureau and office leaders have the opportunity to share their concerns and discuss issues that they think may be of interest to the other organizational components of the agency. In addition, officials told us that HRSA’s Administrator, Deputy Administrator, and a Senior Advisor hold a meeting every other week with each of the bureaus’ Associate Administrators to discuss any problems that arise concerning grantees, plans for upcoming grant awards, program integrity issues, and any other bureau news or updates. Officials indicated that HRSA’s Administrator meets monthly with the directors from all nine operational support offices; officials from the Office of the Administrator also meet weekly for one-on-one discussions with the directors from most of these offices. In addition, the leaders of each of the bureaus told us they hold regular meetings with their staff such as one-on-one meetings with division directors, weekly senior staff meetings within the bureau where participants can raise issues of concern or topics for discussion, and “all- hands” meetings used to inform all bureau staff. In addition to participating in these standing meetings, bureau leaders make use of routine reports and other written communication to convey key programmatic information from the bureau to the Office of the Administrator. For example, bureau leaders provide monthly written updates on their programs and activities for inclusion in an agency-level report for the Secretary of HHS. 
These reports include information about HRSA collaboration with other agencies, programmatic updates, status of efforts related to PPACA, areas of concern, and completed congressional testimonies. HRSA officials also prepare decision papers to outline policy options for the Administrator’s consideration. For example, officials sent decision papers to the Administrator to outline proposals for organizational changes, such as the disbanding of the Office of Special Health Affairs. These papers outlined the rationale for the change, budget and staffing implications, and specific recommendations for the Administrator. HRSA’s staff has grown approximately 30 percent over the last 5 years. While the number of staff has grown, HRSA experienced attrition averaging 9 percent per year over the past five years. Looking forward, almost half of HRSA’s leadership will be eligible to retire by fiscal year 2017. HRSA periodically tracks attrition and retirement eligibility and has focused its succession planning efforts on leadership development. HRSA’s staff grew by more than 30 percent from fiscal years 2008 to 2012; the number of HRSA employees at the end of each fiscal year grew from 1,418 in fiscal year 2008 to 1,857 in fiscal year 2012. (See fig. 4.) HRSA officials indicated that the staffing increases correspond in part with HRSA’s increased responsibilities and funding due to ARRA and PPACA. The majority of HRSA’s staff, about 86 percent in fiscal year 2012, were stationed in HRSA’s Rockville, Maryland headquarters. The remaining employees were regional staff who were located in one of HRSA’s 10 regional offices or 2 field locations in the United States and Puerto Rico. While the overall number of staff grew, the total number of regional staff declined by about 14 percent—from 311 in fiscal year 2008 to 269 in fiscal year 2012. HRSA’s organizational components vary in size and how staff are distributed across headquarters and regions. As of the end of fiscal year 2012, the organizational component with the greatest number of staff was the Bureau of Primary Health Care (308 employees) and the one with the fewest staff, with 4 employees, was the Office of Women’s Health. Eight of HRSA’s organizational components—five programmatic bureaus and three operational support offices—had regional staff. The Office of Regional Operations had the largest number and proportion of regional staff–70 of 84 staff (83 percent), followed by the Healthcare Systems Bureau (40 percent) and the Bureau of Clinician Recruitment and Service (25 percent). (See fig. 5.) As of the end of fiscal year 2012, HRSA’s staff were employed in one of 76 job occupations. The 5 most common occupations were: Public Health Program Specialist (635 employees), Management and Program Analysis (336 employees), General Health Science (165 employees), Grant Management (92 employees), and Miscellaneous Administration and Program (70 employees). Officials indicated that within HRSA, the most common job function is a project officer. HRSA has over 400 project officers who are responsible for the ongoing oversight of an assigned portion of program funding recipients, such as grantees. Individuals from different occupations may serve as project officers, as the project officer function is not a distinct occupation. 
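The growth and attrition figures in this section follow from simple rate calculations on headcounts and separation counts. The short sketch below shows the arithmetic, using the fiscal year 2008 and 2012 headcounts cited above; the separation count and the choice of denominator are simplified for illustration and are not HRSA's actual method.

```python
def growth_rate(start_count, end_count):
    """Percentage change in headcount between two points in time."""
    return 100 * (end_count - start_count) / start_count


def attrition_rate(separations, headcount):
    """Separations during a year as a percentage of headcount."""
    return 100 * separations / headcount


print(f"Staff growth, FY2008-FY2012: {growth_rate(1418, 1857):.1f}%")    # about 31%
print(f"Illustrative attrition rate: {attrition_rate(163, 1857):.1f}%")  # about 8.8%
```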
The majority of HRSA’s staff, 1601 individuals or 86 percent of the staff in fiscal year 2012, were civilians within the GS pay plan, with GS-13s representing the largest number of employees—651 staff members (35 percent of HRSA employees). The remaining HRSA staff were SES, Commissioned Corps officers, or employees paid under one of several additional civilian pay plans. (See fig. 6 for the distribution of HRSA staff by pay plan.) Within HRSA, 510 staff members (27 percent) were GS-14s or above (including individuals in the SES)—individuals who are generally supervisors, according to HRSA officials. From fiscal years 2008 through 2012, HRSA lost an average of about 9 percent of its staff per year to attrition. HRSA’s annual attrition rates from fiscal years 2008 through 2012 ranged from a low of 7.6 percent in fiscal year 2009 to a high of 9.9 percent in fiscal year 2008. In fiscal year 2012, HRSA had an attrition rate of 8.8 percent. Of those who left HRSA in that year, approximately 59 percent resigned, 35 percent retired, and 4 percent were terminated or removed. Attrition rates varied by pay plan and organizational component. In fiscal year 2012, attrition ranged from a high of 16.1 percent among GS-1s through GS-8s to a low of 3.9 percent for SES employees. While three organizational components, including the Office of the Administrator, had no attrition in fiscal year 2012, the Office of Planning, Analysis, and Evaluation had an attrition rate over 21 percent. Across HRSA’s programmatic bureaus, attrition rates ranged from 6.1 percent in the Office of Rural Health Policy to 13.4 percent in the Bureau of Health Professions. (See fig. 7.) Agency-wide, 31.3 percent of HRSA’s permanent employees will be eligible to retire by the end of fiscal year 2017; a rate similar to that for the entire federal government. However, a larger portion of HRSA’s leadership, nearly 50 percent, is eligible to retire in the next few years. Specifically, over 55 percent of HRSA’s SES employees, who serve as the leaders of HRSA’s bureaus and offices, and almost 50 percent of GS-15s, which include Division Directors, will be eligible to retire by the end of fiscal year 2017. Although eligibility to retire does not necessarily mean that employees will do so at the time they become eligible, if there were a large number of retirements among the agency’s leadership during this time period, HRSA runs the risk of having gaps in leadership and potential loss of important institutional knowledge. Within HRSA, retirement eligibility rates also vary by organizational component. By fiscal year 2017, over 40 percent of the employees in HRSA’s Office of the Administrator, the Healthcare Systems Bureau, the Maternal and Child Health Bureau, and several of HRSA’s operational support offices, will be eligible to retire. (See fig. 8.) HRSA periodically tracks attrition and retirement eligibility data. Collecting and analyzing data on attrition rates and retirement eligibility are considered a fundamental element for measuring the effectiveness of human capital approaches in support of an agency’s mission and goals. HRSA receives a quarterly report from HHS that provides agency-wide attrition data by reason for departure. Additionally, HRSA officials stated that staff in the Office of Operations track employee attrition data as needed for making agency-wide hiring and budget decisions. 
According to these officials, staff attrition data is shared with leaders of the programmatic bureaus and operational support offices as requested, or when high rates of attrition occur. In addition to reviewing attrition data, officials from several bureaus reported that they use information from exit interviews to help them understand the reasons for attrition. According to officials, a common reason why staff leave the agency is limited promotion potential, particularly from the GS-12 or GS-13 levels into more senior positions. Another reason officials reported for attrition is that some employees have expertise or skill sets that are easily transferrable and in demand elsewhere, including within other HHS and federal agencies. In particular, officials noted that the skill sets of project officers and those with information technology backgrounds are highly sought elsewhere in the government. While exit interviews provide insight into why staff are leaving the agency, officials also reported that they use information from the Federal Employee Viewpoint Survey to get a sense of the number of staff considering leaving the agency in the next year. HRSA also tracks retirement eligibility at the agency, bureau, and office levels. Quarterly, HRSA receives agency-wide data from HHS on the proportion of staff, including supervisory staff, eligible to retire in the next 5 years. Additionally, in early 2013, HRSA officials began providing leaders in each bureau and office with the names and retirement eligibility dates of their staff who are eligible to retire within the next 5 years. HRSA officials indicated they plan to provide these data on an annual basis going forward; however, officials indicated that the retirement eligibility reports are of limited use to them because eligibility to retire does not mean that an employee actually plans to retire. To respond to retirements and other types of attrition, HRSA has instituted succession planning efforts which generally focus on providing leadership development to agency staff. In 2011, HRSA launched two agency-wide leadership development programs to help prepare staff to take on leadership roles when such opportunities arise. One of these programs, the Mid-Level Leadership Development Program, is for staff at the GS-12 and GS-13 levels and focuses on leadership skills development, interdepartmental project experience, exposure to HRSA leaders, and an understanding of HRSA’s mission, challenges, and opportunities. The other program, the Administrative Management Development Program, focuses on individuals who are interested in careers handling the administrative management functions of the agency. HRSA officials indicated that, as of September 2013, 59 staff had completed one of these two programs. According to HRSA officials, as of July 2013, the agency was in the process of developing two additional leadership development programs—one targeted to staff at the GS-11 level and below and another for staff at the GS-14 and GS-15 levels. Officials estimated these additional programs would become operational in fiscal year 2014. In addition to leadership development programs, HRSA officials said that there are several other opportunities for staff to gain leadership experience and professional development. 
For example, HRSA has established a mentoring program, which focuses on leadership development for both the participating mentor and mentee, and a coaching program, which provides participating supervisors and managers with opportunities to focus on specific areas for further development. Officials across the agency also promote opportunities for employees to be assigned to acting roles in more senior positions when there is a vacant position or when a supervisor is on leave. For example, if a branch chief were to retire or be out of the office for an extended period, a senior level employee in the branch may be asked to act as the branch chief until the position can officially be filled or the branch chief returns. HRSA officials indicated that they work to train individuals to enhance their capabilities, which may better position staff to be successful candidates when leadership positions open up in the agency. In addition, opportunities to serve in an acting capacity in a role more senior to their own allows employees to smoothly transition into a position should they be selected for it on a permanent basis. For example, HRSA officials we spoke with told us that when the Associate Administrator of one of the bureaus was recently promoted—leaving a key vacancy—the Deputy Associate Administrator served as Acting Associate Administrator until promoted into the position permanently. Leaders from some bureaus also noted additional bureau-specific succession planning efforts. For example, the HIV/AIDS Bureau created its Organizational Development Unit in fiscal year 2012 to help deal with staff attrition issues within the bureau by providing training, helping staff create individual development plans, and providing mentoring opportunities to encourage staff to stay and continue to grow professionally. HRSA officials told us they also promote cross-training opportunities among staff, where employees work on multiple programs to assure a broader range of knowledge so that they are able to take over for each other should someone leave the bureau or agency. For example, Office of Rural Health Policy leaders said that they have their staff work on multiple programs to ensure they obtain a broader range of skills than they would acquire by working on just one program. Similarly, they assign their policy staff to a lead and backup role on key regulations so that if one person leaves the organization or is out of the office for an extended period of time, another is also familiar with the topic and able to complete the review. HRSA officials noted that some of these succession planning efforts, such as leadership training, also help with staff retention. In addition, officials noted that they have other efforts in place to promote staff retention, such as employee recognition and morale boosting opportunities. In fiscal year 2012, HRSA obligated over $240 million, or about 3 percent of its appropriations, to contracts to acquire goods or services necessary to support its operations. With the exception of fiscal year 2008, when HRSA had approximately $167 million in contract obligations, the amount of HRSA’s contract obligations generally remained steady over the past 5 years. (See fig. 9.) The vast majority of HRSA’s fiscal year 2012 contract obligations (approximately 97 percent) were used to obtain services, while the remaining 3 percent of obligations went toward goods, such as computer software. 
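The shares cited in this section (for example, contract obligations as a fraction of appropriations, or the cross-cutting portion of obligations) are straightforward proportions. The sketch below illustrates the computation using rounded figures from the report; the totals are approximations for illustration, not HRSA's precise accounting data.

```python
def share(part, whole):
    """Return part as a percentage of whole."""
    return 100 * part / whole


appropriations = 8.0e9          # approximate annual appropriations, per the report
contract_obligations = 240e6    # "over $240 million" in fiscal year 2012
cross_cutting = 95_697_751      # obligations supporting more than one component

print(f"Contracts as a share of appropriations: {share(contract_obligations, appropriations):.1f}%")
print(f"Cross-cutting share of contract obligations: {share(cross_cutting, contract_obligations):.0f}%")
# roughly 3% and 40%, consistent with the figures cited in this section
```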
Nearly 60 percent of HRSA's fiscal year 2012 contract obligations were for two categories of services: (1) information technology and telecommunications services, which includes HRSA's contract to support the operation and management of the agency's online system for documenting its grantee oversight activities, called the Electronic Handbook; and (2) professional support services, which includes HRSA's contracts for the provision of technical assistance, such as site visits, to grantees. (See table 2.) In fiscal year 2012, nearly 40 percent, or $95,697,751, of HRSA's contract obligations provided cross-cutting support, meaning that they were utilized by more than one HRSA organizational component. The remaining 60 percent of HRSA's obligations were for programs and activities specific to a single programmatic bureau, though the amount of obligations varied by bureau. (See table 3 for a summary of the amount of contract obligations by organizational component and app. II for information on the contract with the highest obligation for each component.) HRSA's bureaus utilized contracts for different purposes. For example, nearly 78 percent of the Bureau of Health Professions' fiscal year 2012 obligations were for information technology and telecommunications services, primarily for the National Practitioner Data Bank, while the Maternal and Child Health Bureau did not have any obligations for that purpose. Conversely, 69 percent of the Maternal and Child Health Bureau's obligations in fiscal year 2012 were for professional services, such as a newborn hearing screening and intervention programs study, while about 12 percent of the Bureau of Health Professions' obligations were for professional services. See appendix III for information on the top categories of contracted services or goods by organizational component. According to HRSA officials, the agency uses contracts to support its operations for a variety of reasons, including to supplement HRSA staff because of time constraints, or to fulfill short-term needs. In addition, HRSA uses contracts to perform functions that require specialized skills for which HRSA staff do not have the appropriate expertise, such as clinical or financial expertise. For example, the Office of Rural Health Policy uses contract staff with special expertise in areas such as oral and primary health care to provide technical assistance to its broad range of grantees, while the Bureau of Primary Health Care uses contract staff with financial expertise to conduct site visits to health center grantees, assist HRSA staff with understanding grantees' financial audits, and help grantees develop plans to improve their financial stability. Furthermore, according to HRSA officials, the agency uses contracts to support its operations when contract staff can perform the functions more efficiently and at a lower cost than HRSA staff. For instance, the Maternal and Child Health Bureau obtains logistical support services, such as supporting large advisory committee meetings, from a contractor because it is more efficient and cost effective than having bureau staff manage these functions. Finally, HRSA uses contracts for other reasons, including when the agency is legislatively required to do so. For example, HRSA is required by law to contract with one or more entities to carry out certain aspects of its C.W. Bill Young Cell Transplantation Program, a program overseen by the Healthcare Systems Bureau related to cord blood, bone marrow, and transplantation.
We provided a draft of this report to HHS for its review. In its written comments, HHS noted that the report recognized the mechanisms HRSA has in place to ensure the coordinated flow of communication and plan for succession. (HHS comments are reprinted in app. IV.) As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and the Administrator of HRSA. In addition, the report will be available on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Descriptions of HRSA's programs include the following:
- Provides individuals from disadvantaged backgrounds with an eligible health professions degree (e.g., dentistry, physician assistant) opportunities to serve as faculty members in an accredited and eligible health professions school for a minimum of two years. For each year of service, participants are awarded up to $30,000 for their educational loans.
- Designates federal HPSAs (areas in which there may be a shortage of primary medical care, dental, or mental health providers), Medically Underserved Areas (areas in which residents have a shortage of personal health services), and Medically Underserved Populations (which may include groups of persons who face economic, cultural, or linguistic barriers to health care). Shortage designations are used to prioritize HRSA's health professional scholarship and loan repayment programs and other federal and state programs.
- Offers assistance to HPSAs in every U.S. state and territory to recruit and retain qualified primary care providers by providing scholarships or loan repayments to individuals who agree to provide services in shortage areas.
- Supports the demand for more health care professionals to deliver primary health services to Native Hawaiians in the State of Hawaii by providing scholarships in return for a commitment to serve in designated areas for a specified time period.
- Alleviates the shortage of nurses and economic barriers that may be associated with pursuing a career in nursing or teaching as nurse faculty by offering loan repayment assistance to registered nurses in return for a commitment to serve as a nurse in a critical shortage facility (in designated HPSAs) or as nurse faculty at an accredited eligible school of nursing, and offering scholarships to nursing students in return for service in a critical shortage facility.
- In partnership with HRSA's Bureau of Primary Health Care, supports cooperative agreements with 54 State Primary Care Offices and territorial agencies to facilitate the coordination of activities such as needs assessments and technical assistance within a state that relate to the delivery of primary care services, and the recruitment and retention of critical health providers.
- Provides infrastructure grants to schools to build and enhance advanced nursing education programs, and two traineeships—the Advanced Education in Nursing Traineeship and the Nurse Anesthetist Traineeship. In addition, the Advanced Nursing Education Expansion Program provides grants to schools of nursing to accelerate the production of primary care advanced practice nurses.
- Promotes a national role in addressing health care workforce shortages, particularly in the areas of health career awareness and interdisciplinary and interprofessional community-based primary care training.
- Supports activities to enhance the academic performance of underrepresented minority students, support underrepresented minority faculty development, and facilitate research on minority health issues.
- Supports graduate medical education and training of residents and fellows in freestanding children's teaching hospitals and enhances the supply of primary care and pediatric medical and surgical subspecialties.
- Provides support to train and educate individuals who provide geriatric care for the elderly. Improves access to quality health care for the elderly through a range of programs that focus on increasing the number of geriatric specialists and increasing geriatrics competencies in the generalist workforce through education and training to improve care.
- Collects and analyzes health workforce data and information through the National Center for Health Workforce Analysis (National Center) in order to provide national and state policy makers and the private sector with information on health workforce supply, demand, and needs. The National Center also evaluates workforce policies and programs as to their effectiveness in addressing workforce issues.
- Supports activities for kindergarten through 12th grade, baccalaureate, post-baccalaureate, and graduate students to improve the recruitment and enhance the academic preparation of students from disadvantaged backgrounds into the health professions.
- Works to close the gap in access to mental and behavioral health care services by increasing the number of adequately prepared mental and behavioral health and substance abuse providers.
- Serves as a flagging system intended to prompt a comprehensive review of health care practitioners' licensure activity, medical malpractice payment history, and record of clinical privileges. Used in conjunction with information from other sources, the National Practitioner Data Bank assists in promoting quality health care and deterring fraud and abuse in the health care delivery system.
- Supports initiatives to expand the nursing pipeline, promote career mobility, enhance nursing practice, provide continuing education, and support retention.
- Supports the establishment and operation of a loan fund within participating schools of nursing to assist nurses in completing their graduate education to become qualified nurse faculty.
- Increases nursing education opportunities for individuals from disadvantaged backgrounds, including racial and ethnic minorities underrepresented among registered nurses, by providing student stipends and scholarships.
- Includes a range of programs designed to increase access to culturally competent, high-quality dental health services in rural and other underserved communities by increasing the number of oral health care providers and improving the training programs for oral health care providers.
- Primary Care Training and Enhancement: Supports and develops primary care physician and physician assistant training programs.
- Supports activities that train public health and preventive medicine students, residents, and professionals to enhance the supply and expertise of the public health workforce.
- Increases diversity in the health professions and nursing workforce by providing grants to eligible professions and nursing schools for use in awarding scholarships to students from disadvantaged backgrounds with financial need, many of whom are underrepresented minorities.
- Provides Graduate Medical Education payments to support community-based training by covering the costs of resident training in community-based ambulatory primary care settings, such as health centers, and bolstering the primary care workforce.
- Provides medical malpractice protection at sponsoring health clinics to encourage health care providers to volunteer their time at free clinics, thus expanding the capacity of the health care safety net.
- Supports the construction and renovation of health centers.
- The Health Center Program administers the Federal Tort Claims Act program, under which employees of eligible health centers may be deemed to be federal employees qualified for malpractice coverage under the Federal Tort Claims Act.
- Provides grants to eligible health centers to deliver comprehensive, high-quality, cost-effective primary health care to patients regardless of their ability to pay.
- Provides grants for the establishment of school-based health centers; grant funds can be used for expenditures for facilities, equipment, or similar expenditures.
- Requires drug manufacturers to provide discounts or rebates to a specified set of HHS-assisted programs and hospitals that meet the criteria in the Public Health Service Act and the Social Security Act for serving a disproportionate share of low-income patients.
- Provides compensation to individuals for serious physical injuries or deaths from pandemic, epidemic, or security countermeasures.
- Attempts to increase the number of transplants for recipients suitably matched to biologically unrelated donors of bone marrow and cord blood.
- Provides care and treatment for Hansen's Disease (leprosy) and related conditions to any patient living in the United States or Puerto Rico through direct patient care at its facilities in Louisiana, through grants to an inpatient program in Hawaii, by contracting with 11 regional outpatient clinics, and by providing payments to the State of Hawaii for hospital and clinic facilities at Kalaupapa, Molokai, and Honolulu. Also provides for the renovation and modernization of the Louisiana facilities to eliminate structural deficiencies and keep with accepted standards of safety, comfort, human dignity, efficiency, and effectiveness.
- Works on building a genetically and ethnically diverse inventory of high-quality umbilical cord blood for transplantation.
- Provides compensation to people found to be injured by certain vaccines given routinely to children and adults, such as seasonal flu vaccine, measles, mumps, rubella, or polio.
- Attempts to extend and enhance the lives of individuals with end-stage organ failure for whom an organ transplant is the most appropriate therapeutic treatment by providing a national system to allocate and distribute donor organs to individuals waiting for an organ transplant.
- Funds poison centers; maintains a single, national toll-free number to ensure universal access to poison center services and connect callers to the poison center serving their area; and implements a nationwide media campaign to educate the public and health care providers about poison prevention, poison center services, and the toll-free number.
- Provides grants to metropolitan areas experiencing the greatest burdens of the country's human immunodeficiency virus and acquired immunodeficiency syndrome (HIV/AIDS) epidemic, and provides those communities with resources they need to confront the highly concentrated epidemic within the jurisdiction.
- Provides grants to all 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, Guam, and 5 U.S. Pacific Territories or Associated Jurisdictions to provide services for people living with HIV/AIDS. The AIDS Drug Assistance Program supports the provision of HIV medications and related services.
- Provides grants to 344 community and faith-based primary health clinics and public health providers in 49 states, Puerto Rico, the District of Columbia, and the U.S. Virgin Islands for targeting HIV medical services to underserved and uninsured people living with HIV/AIDS in specific geographic communities, including rural and frontier communities.
- Provides grants to public or private nonprofit entities that provide or arrange for primary care and support services for HIV-positive women, infants, children, and youth.
- Funds the AIDS Education and Training Centers—a network of 11 regional centers with more than 130 local performance sites and five national centers—that offers specialized clinical education and consultation on HIV/AIDS transmission, treatment, and prevention to front-line health care providers.
- Provides access to oral health care for people living with HIV/AIDS by reimbursing dental education programs for the unreimbursed costs associated with providing care to people with HIV and by working with partners to provide education and clinical training for dental care providers, especially those in community-based settings.
- Although overseen by multiple federal agencies, the HIV/AIDS Bureau manages HRSA's contributions to this program, whose mission is to deliver HIV/AIDS care and treatment and to help build sustainable health systems so that host countries can confront their epidemics in the future.
- Supports the development of innovative models of HIV care to quickly respond to the emerging needs of clients served by the Ryan White HIV/AIDS CARE Act Program by evaluating the effectiveness of the models' design, implementation, utilization, cost, and health-related outcomes, and promoting the dissemination and replication of successful models.
- Under the auspices of the Combating Autism Act of 2006, supports activities to provide information and education to increase public awareness, promote research into the development and validation of screening tools and interventions, promote early learning of individuals at higher risk, increase the number of individuals who are able to confirm or rule out a diagnosis, and increase the number of individuals able to provide evidence-based interventions for autism spectrum disorders or other developmental disabilities.
- Emergency Medical Services for Children: Focuses on generating evidence on best practices regarding pediatric emergency care as well as direct outreach to the states, territories, and the District of Columbia to implement these best practices.
- Provides grants funded by the Patient Protection and Affordable Care Act to family-staffed, family-run organizations to ensure families have access to adequate information about health care, community resources, and support in order to make informed decisions around their children's health care.
- Provides grants to communities with exceptionally high rates of infant mortality to reduce disparities in access to and utilization of health services, improve the quality of the local health care system, empower women and their families, and increase consumer and community voices and participation in health care decisions.
- Works to improve the ability of states to provide newborn and child screening for heritable (genetic) disorders.
- Provides grants to support the physiologic testing of newborn infants prior to their hospital discharge; audiologic evaluation by three months of age; and entry into a program of early intervention by six months of age with linkages to a medical home and family-to-family support.
- Aims to improve the health of all mothers, children, and their families to reduce health disparities, improve access to health care, and improve the quality of health care. The program has three components: (1) block grant funds to states distributed by formula; (2) Special Projects of Regional and National Significance, which supports a variety of projects in research, training, screening, and other services; and (3) Community Integrated Service Systems, which supports projects that seek to increase the capacity for service delivery at the local level and to foster formation of comprehensive, integrated, community-level service systems for mothers and children.
- Collaborates with the Administration for Children and Families to improve coordination of services for at-risk communities, to identify and provide comprehensive services to improve outcomes for families who reside in at-risk communities, and to strengthen and improve the programs and activities carried out under the Maternal and Child Health Block Grant program.
- Develops systemic mechanisms for the treatment of sickle cell disease and the prevention of morbidity and mortality associated with the condition.
- Provides grants to (1) fund the development and implementation of statewide systems that ensure access to comprehensive and coordinated traumatic brain injury services, including transitional services, rehabilitation, education and employment, and long-term community support; and (2) provide services such as referrals, advice, and legal representation to individuals with traumatic brain injury.
- Provides grants to public and private entities, including faith-based and community-based organizations, to establish and operate clinics that provide for the outreach and education, diagnosis, treatment, rehabilitation, and benefits counseling of active and retired coal miners and others with occupation-related respiratory and pulmonary impairments.
- Provides grants to states, local governments, and appropriate health care organizations to support programs for cancer screening for individuals adversely affected by the mining, transport, and processing of uranium, and the testing of nuclear weapons for the nation's weapons arsenal.
- Provides funds to community partnerships, which then purchase and distribute automated external defibrillators to be placed in rural communities and train emergency first responders to use the devices.
- Provides grants to improve access to care, coordination of care, integration of services, and focus on quality improvement in rural communities.
- Supports a range of policy analysis, research, and information dissemination for the Office of Rural Health Policy.
- Supports a range of activities focusing primarily on Critical Access Hospitals through three grant programs: (1) the Medicare Rural Hospital Flexibility (Flex) Grant Program; (2) the Small Hospital Improvement Program; and (3) the Flex Rural Veterans Health Access Program.
- Provides grants to states to establish and maintain State Offices of Rural Health.
- Provides grants that support telehealth technologies through the following three programs: (1) the Telehealth Network Grant Program, which provides funding for pilot projects to examine the cost impact and value added from telehome care and tele-monitoring services and activities such as chronic disease management and distance learning; (2) the Telehealth Resource Center Grant Program, which provides technical assistance to communities wishing to establish telehealth services; and (3) the Licensure Portability Grant Program, which assists states to improve clinical licensure coordination across state lines.

As of April 2013, the Healthcare Systems Bureau managed two programs that no longer receive funding: (1) the Health Care and Other Facilities Program, which provided grants for new construction, renovation, design development, and equipment to hospitals, community health centers, universities, and research centers; and (2) the Hill-Burton Loan Guarantee and Project Grant Program, which provided loan guarantees and grants to facilities for construction. Although these programs were not funded in fiscal year 2012, HRSA officials told us that they continue to monitor recipients of prior years' funding under these programs.

A block grant is a type of grant where funding recipients have substantial discretion over the type of activities to support, with minimal federal administrative requirements or restrictions. At-risk communities are communities with concentrations of (1) premature birth, low-birth-weight infants, and infant mortality, or other indicators of at-risk prenatal, maternal, newborn, or child health; (2) poverty; (3) crime; (4) domestic violence; (5) high-school dropouts; (6) substance abuse; (7) unemployment; or (8) child maltreatment. Critical Access Hospitals are small, rural hospitals. To be certified as a Critical Access Hospital, a facility must meet certain criteria, including being located in a rural area, having no more than 25 inpatient beds, and furnishing 24-hour emergency care services 7 days a week. Telehealth is the use of electronic information and telecommunications technologies to support long-distance health care, patient and professional health-related education, public health, and health administration.
- Provides services to maintain, update, and enhance the National Practitioner Data Bank.
- Provides supplemental expert assistance and support to health center grantees and federal staff by providing technical and consultative assistance through site visits, documentation reviews, and consultations to new and existing grantees.
- Establishes and maintains the National Bone Marrow Coordinating Center.
- Provides technical assistance for the Ryan White HIV/AIDS CARE Act Program.
- Operates the Maternal, Infant, and Early Childhood Home Visiting Program Technical Assistance Coordinating Center.
- Provides technical assistance for grantee programs to expand access to, coordinate, restrain the cost of, and improve the quality of health care through the development of health care networks in rural areas and regions.
- Supports development, maintenance, and enhancement efforts for HRSA's Electronic Handbook, the agency's online system for documenting its grantee oversight activities, by integrating new business processes into the Electronic Handbook or integrating the Electronic Handbook with other existing systems.

We have included contract data for the National Hansen's Disease Program with the data for the Healthcare Systems Bureau, as this bureau took oversight responsibility for this program in August 2012. Prior to that, the program was under the auspices of the Bureau of Primary Health Care. Cross-cutting refers to contracts that were utilized by more than one HRSA organizational component.

[Appendix III table not reproducible here; its top categories of contracted services or goods by organizational component include Education and Training; Special Studies and Analysis; Medical, Dental, and Surgical Services; and Administrative Support Services.]

In addition to the contact named above, Michelle B. Rosenberg, Assistant Director; Jill K. Center; Kathleen Diamond; Cathleen J. Hamann; Julia Kennon; Emily Loriso; Rebecca Shea; and Jennifer M. Whitworth made key contributions to this report.

HRSA is charged with improving access to health care services for people who are uninsured, isolated, or medically vulnerable. HRSA carries out its mission by providing funding and support to a wide variety of programs, which have grown in number and size since the agency was established in 1982. To manage these programs, HRSA has a staff of nearly 1,900 employees, supplemented by contract staff who perform a variety of tasks to support HRSA's programs and operations. HRSA's staff are organized into seven programmatic bureaus that are responsible for overseeing HRSA's programs and nine cross-cutting operational support offices—each of which reports to the Office of the Administrator. In recent years, GAO reported on weaknesses in HRSA's oversight and monitoring of certain programs. Given GAO's past findings and the expansion of the agency's programs, GAO was asked to review HRSA's management and operations.
This report examines (1) HRSA's internal communication mechanisms and how they are used to support the agency's mission; (2) HRSA's staffing and how the agency plans for attrition; and (3) HRSA's use of contracts to support its operations. GAO reviewed and analyzed HRSA's communication methods and organizational structure; analyzed data on HRSA personnel and contracts for fiscal years 2008 through 2012; interviewed HRSA officials knowledgeable about the agency's organization, staffing, and use of contracts; and reviewed relevant documentation. The Department of Health and Human Services' (HHS) Health Resources and Services Administration (HRSA) has mechanisms in place to share information important for supporting the agency's mission across its various organizational components and levels of staff—a practice that is consistent with internal control standards for the federal government. These communication methods include an annual operational planning process for allocating agency resources, workgroups that involve staff from across the agency to work on issues of a cross-cutting nature, and regular meetings between the Office of the Administrator and leaders of the agency's various organizational components. HRSA's staff grew by more than 30 percent from fiscal year 2008 to fiscal year 2012, from 1,418 employees to 1,857. According to agency officials, the most common job function within HRSA is a project officer—an employee responsible for the oversight of grantees funded by the agency's programs—and HRSA has over 400 project officers. From fiscal years 2008 through 2012, HRSA lost an average of 9 percent of its staff annually to attrition. Of those who left HRSA in fiscal year 2012, approximately 59 percent resigned and 35 percent retired. Agency-wide, over 30 percent of HRSA's permanent employees will be eligible to retire by the end of fiscal year 2017. An even larger portion of HRSA's leadership, nearly 50 percent, will be eligible to retire by 2017. If a large portion of the agency's leadership were to actually retire during this time period, HRSA would run the risk of having gaps in leadership and the potential loss of important institutional knowledge. HRSA periodically tracks attrition and retirement eligibility. To respond to retirements and other attrition, HRSA has instituted succession planning efforts, which generally focus on leadership development for agency staff. For example, HRSA has instituted two leadership development programs, has two other programs under development, and has established mentoring and coaching programs. In fiscal year 2012, HRSA obligated over $240 million, or about 3 percent of its appropriations, to contracts to acquire goods and services necessary to support its operations, an amount that has generally remained steady over the past few years. Over half of the fiscal year 2012 contract obligations were for two categories of services—information technology and telecommunications services, and professional support services, which includes providing technical assistance to grantees. According to HRSA officials, the agency uses contracts to support its operations for a variety of reasons; these include supplementing HRSA staff or fulfilling short-term needs and performing functions that require specialized skills for which HRSA staff do not have the appropriate expertise, such as clinical or financial expertise. We provided a draft of this report to HHS for its review.
In its written comments, HHS noted that the report recognized the mechanisms HRSA has in place to ensure the coordinated flow of communication and plan for succession.
International parental child abductions reported to the State Department have been increasing. The State Department reported that it received 1,135 new requests for assistance in international parental child abduction cases in fiscal year 2009, the most recent fiscal year with comparable data. The annual number of new requests received has increased each fiscal year since fiscal year 2000 (see fig. 1). According to literature we reviewed, such abductions can take an emotional toll on children—who can encounter serious psychological effects—and on the parent whose child has been abducted. Research shows that recovered children often experience a range of problems, including anxiety, eating problems, nightmares, mood swings, sleep disturbances, and aggressive behavior. Parents whose children have been abducted may encounter substantial psychological, emotional, and financial problems in fighting for the return of their children. When a child has been abducted across international borders, a parent may face an unfamiliar legal system, as well as significant cultural differences and linguistic barriers, that can hinder the parent's attempts to reunify with his or her child. Although we could not find definitive data on the extent to which parents and others have used airline flights to abduct children abroad, many international parental child abductions most likely involve airline flights. The State Department reported that, from fiscal year 2007 through 2009, it received 3,011 requests for assistance in returning 4,365 children to the United States from other countries. About 30 percent of these children were abducted to Mexico, while about 6 percent were abducted to Canada. The remaining 64 percent were abducted to other countries that do not share a border with the United States. The State Department and other organizations told us that an airline flight was likely the primary means of transportation for most abductions to these nonborder countries. For the six nonborder countries with the most child abductions, it is highly likely that an airline flight was used in many of the abductions (see fig. 2). Child custody and abduction issues have historically been addressed at the state and local level. State family courts determine child custody status, including issuing custody and court orders that can limit the travel of children. According to State Department officials, currently there is no nationwide database that captures information from custody and court orders. State and local law enforcement are generally tasked with enforcing the provisions of these custody and court orders. When a child is at risk of imminent abduction or harm, a judge may issue an order and direct law enforcement to take physical custody of the child. A court order can prohibit the removal of a child from the United States and can allow a parent or law enforcement official to contact the airport authority police, who may assist in intercepting the abductor. However, enforcement of such orders is difficult, in part because of the lack of a nationwide database that maintains custody orders, and because the United States does not generally exercise exit controls on its borders that would prevent an adult U.S. citizen holding a valid passport from leaving the country with his or her child who also holds a valid passport. Generally, any citizen holding a valid passport may leave or enter the United States freely.
According to a DOJ report on international child abductions, parents who fear that their children may be abducted can request a court order to have the other parent surrender his/her passport and the child's passport to the court. Foreign governments, however, are not bound by U.S. custody orders and may issue passports to children who are their nationals. The lack of exit controls makes timing crucial in preventing international parental child abductions involving an airline flight. If a child has a valid passport, preventing an abduction on an international airline flight could be very difficult even if a parent has obtained a custody order barring such travel, because that parent would not only need to involve law enforcement but do so with enough time to intercept the abducting parent and the child before they board an international flight. Once a parent reports a child as abducted, rapid communication and coordination among law enforcement, airport, and airline authorities are necessary to prevent a child from boarding an international flight. What can often happen in these cases, however, is that a parent does not know that another family member plans to take the child on an international flight, and thus may not contact law enforcement in time. For example, the American Bar Association led a survey of 97 left-behind parents that found that nearly half of the abductions reported by the left-behind parents occurred during a legal visitation between the abducting parent and abducted child. The left-behind parent was likely unaware of the other parent's abduction intentions. As private sector entities, airlines in the United States do not have the authority to verify or enforce court and custody orders. Stakeholders we interviewed stated that the airline's main role related to the prevention of international parental child abductions is cooperating upon request with law enforcement officials or prosecutors. For example, a few alleged abductions in progress have been intercepted when local court officials or law enforcement officers contacted airport police and airline personnel to prevent a suspected abducting parent and at-risk child from leaving on an international airline flight. Several airline stakeholders told us that law enforcement should take the main role in preventing international parental abductions, but that airlines work to support the law enforcement agencies in this role. While airlines may not be in a position to question the appropriateness of a child and adult traveling together, airlines have procedures in place for children traveling alone internationally or domestically. Although policies and procedures can vary by airline, most domestic airlines will permit children who have reached their fifth birthday to travel unaccompanied. Children aged 5 through 11 who are flying alone must usually travel pursuant to special "unaccompanied minor" procedures, which involve an additional fee. On many domestic carriers, children aged 5 through 7 may only fly unaccompanied on nonstop and through flights; children 8 and over may take connecting flights unaccompanied. As a common procedure for unaccompanied minors, airlines require the names and contact numbers of the persons dropping the child off and picking the child up. The person picking up the child may be asked to show his or her identification.
However, because airlines do not have authority to verify court or custody orders, the unaccompanied minor procedures would not include checking the parentage or legal guardianship status of any of those persons dropping off or picking up children traveling unaccompanied. Once a child has reached the age of 12 (or 15 on some airlines), most domestic carriers do not apply “unaccompanied minor” procedures or seek parental permission for the child to travel. Airlines may apply some additional procedures for unaccompanied minors traveling internationally; for example, some airlines automatically apply the unaccompanied minor procedures to children through age 17 for international travel. For certain international destinations, airlines can request that children traveling with only one parent have a letter of consent from the nonaccompanying parent to help passengers meet the entry requirements of the country of destination. For example, according to the State Department, Mexico and Chile require that children entering or departing those countries by airline flight without both parents have such a letter of consent. As such, the airlines in our study reported instructing passengers to be ready with such documentation if traveling with children to countries that may have such requirements. Representatives of the Air Transport Association told us that any airline flying to these countries may be forced to provide the passengers with a free trip back to the United States for accepting children onto their flight without having documentation showing that both parents or guardians consented to the international travel. We discuss this parental-consent letter requirement in more detail later in our report. The State Department has preventative measures that are focused outside of the airport environment, before a suspected abductor reaches an airport with a child, while DHS’s measures focus on preventing child abductions once an abductor reaches an airport with a child. Figure 3 illustrates these measures, which are described in greater detail in the next section. The State Department has a signature requirement and a passport issuance alert program in place to directly address international parental child abductions. A law passed in 1999 requires both parents to execute and provide documentary evidence of custodial rights on any application for a passport for a minor. If this cannot be done, a parent can take certain steps, in accordance with the law, to execute the passport application, such as by providing documentary evidence that he or she has sole custody of the child, has the documented consent of the other parent to the issuance of the passport, or is acting in place of the parents and has the documented consent of both parents. The State Department also administers the Children’s Passport Issuance Alert Program, a service through which a parent can request State Department notification if a passport application is submitted for his or her child of less than 18 years of age. State Department officials told us that if a passport application is received for a child listed in the alert program, State Department officials would contact the parent who requested the alert notice to see if the parent’s concern still exists before determining whether to issue the passport. The issuance alert program enhances prevention opportunities since there are exceptions to the two- parent signature requirement. 
State Department officials told us that about 42,000 children are currently registered in the program and that the program's database includes information such as name, date of birth, and place of birth for each child. Before adding a child to this alert system and adding the parent as the person to alert, State Department officials verify the relationship between the parent and the child through documentation such as the birth certificate, custody orders, and other identifying documentation. State Department officials noted that, even if a parent requesting an issuance alert loses custody of the child after the child has been entered into the alert system, the State Department would still notify a parent if the other parent or another person applies for that child's passport. According to the State Department, in some instances, enrollment in the issuance alert program has succeeded in locating children whose whereabouts were unknown before the new passport application was submitted, which thereby allowed the State Department to assist the left-behind parent in seeking the child's return. However, the signature requirement and passport issuance alert programs have the following limitations:
- Once it issues a passport to a child, the State Department may not revoke that passport except in limited situations. Thus, some children may have been lawfully issued passports before a possible international abduction situation arose.
- The State Department does not have a way to track the use of a passport once it has been issued, since the United States does not generally exercise exit controls for citizens leaving the country.
- Parents with citizenships from other countries can obtain a foreign-issued passport for their child, which can circumvent the State Department's signature requirement and the passport issuance alert program.
While the State Department's efforts are focused on passport issuance, DHS administers a child abduction component of its broader Prevent Departure program, designed to keep non-U.S. citizens identified as potential abductors from leaving the country with a child at risk for abduction. DHS's broader Prevent Departure program is aimed at preventing the departure of non-U.S. citizens whose departure could be harmful to the security of the United States. Such persons could include, for example, suspected fugitives fleeing prosecution for felony crimes. The Prevent Departure program originated from the Immigration and Nationality Act, which authorized departure control officers to prevent non-U.S. citizens' departure from the United States under certain specified circumstances. Specifically, DHS implementing regulations do not permit such departure if the departure would be prejudicial to the interests of the United States, as enumerated in regulation. DHS established a parental child abduction component of the Prevent Departure program in 2003. DHS officials have interpreted international parental abductions by non-U.S. citizens to be prejudicial to national interests, thus falling under its Prevent Departure program authority. DHS policy stipulates that only law enforcement officers and specified State Department officials can request an alert for a non-U.S. citizen potential abductor traveling with an identified at-risk child under this program. Although parents cannot contact DHS directly, parents, family members, prosecutors, and others concerned about a forthcoming abduction could contact the State Department's Office of Children's Issues to add names to the list.
In addition, DHS requires law enforcement officers and State Department officials to provide court orders specifying that a child, regardless of age, is banned from traveling internationally with a non-U.S. citizen parent or person acting on behalf of the parent. If State Department officials determine that a case meets all the criteria for inclusion on the list, the agency would pass this information to DHS officials, who would then place a potential abductor on the list. DHS officials told us that, once a potential abductor is on the list, an accompanying note is made identifying the at-risk child who is not to travel internationally with the potential abductor. Subsequently, if a person on the list is identified as attempting to board an international flight with an identified child, the airlines and DHS collaborate with law enforcement to prevent the boarding of the non-U.S. citizen with the child. DHS officials told us that this measure is an effective tool for preventing some cases of international parental child abductions. Prevent Departure is the only program we identified that has the potential to prevent international child abductions at the airport when it is not known that an abduction is in progress but the potential abduction risk and the potential abductor have been identified. However, the usefulness of this program is limited because it only applies to non-U.S. citizens. DHS also routinely checks the Federal Bureau of Investigation's National Crime Information Center (NCIC) Missing Persons File for travelers leaving the United States, which, in very limited circumstances, may result in intercepting a child before an international flight departs. For passengers traveling internationally on a commercial flight, airlines are required to provide passenger manifest data (generally, information listed on government-issued passports) obtained at check-in from all passengers to DHS's Customs and Border Protection no later than 30 minutes prior to the securing of the aircraft doors, or to transmit manifest information on an individual basis as each passenger checks in for the flight, up to but no later than the securing of the aircraft. DHS officials told us that they have automated systems to check this passenger manifest data against the NCIC Missing Persons File and that, if a match is made, DHS officials contact the law enforcement officials who originally entered the case into the missing persons file to determine what action to take. Actions could include collaborating with law enforcement and airlines to, among other things, prevent the child from departing on an international flight. According to DHS officials, however, even if there were a match between passenger manifest data and the missing persons file, they still may not be able to prevent an international parental child abduction on an airline; DHS officials can receive passenger manifest data as late as 30 minutes before securing an aircraft, making it difficult to coordinate with law enforcement, airport, and airline officials in enough time to prevent the abducted child from departing on an international flight. Furthermore, names might not be entered into the database in time for a match to be made. To include an abducted child in this database, a parent would need to contact a local or state law enforcement agency and file a missing person's report.
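The report does not describe DHS's automated matching at a technical level, but the general concept of screening a manifest against a missing-persons list can be illustrated with a minimal sketch. The record fields and matching rule below are hypothetical assumptions for illustration, not a description of DHS's actual systems.

```python
# Minimal illustration of screening a passenger manifest against a missing-persons
# list. Field names and matching logic are hypothetical; DHS's actual systems are
# not described in this report at this level of detail.
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    last_name: str
    first_name: str
    date_of_birth: str  # "YYYY-MM-DD"

def screen_manifest(manifest, missing_persons):
    """Return manifest entries that match a missing-persons record on name and date of birth."""
    index = {(p.last_name.upper(), p.first_name.upper(), p.date_of_birth) for p in missing_persons}
    return [p for p in manifest
            if (p.last_name.upper(), p.first_name.upper(), p.date_of_birth) in index]

# Example: one child on the manifest appears in the missing-persons list.
manifest = [Person("Doe", "Jane", "2004-05-17"), Person("Doe", "John", "1975-02-03")]
missing = [Person("Doe", "Jane", "2004-05-17")]
for match in screen_manifest(manifest, missing):
    print("Potential match; refer to the originating law enforcement agency:", match)
```

Even a simple check of this kind depends on the missing-persons record existing and the manifest arriving early enough to act on a match, which is precisely where the limitations described below arise.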
In addition, local law enforcement officers may not enter reported parental abduction cases into the NCIC database because they may not view them as qualifying; they may view them as private family disputes instead of criminal matters. DHS could only confirm two cases in which it identified a match using this system, and an official who administers the matching stated that she did not know whether the two matched cases resulted in preventing the children from boarding an international flight. Other federal agencies also have efforts in place that may indirectly support the prevention of international parental child abductions involving airline flights. DOJ, in particular, has educational efforts and the AMBER Alert (America's Missing: Broadcast Emergency Response) program that may help to prevent abductions. DOJ's Office of Juvenile Justice and Delinquency Prevention develops educational materials and training programs aimed at increasing the awareness among parents, the law enforcement community, and others about the issue of international parental abductions. For example, A Family Resource Guide to International Parental Kidnapping is an educational guide for parents, intended to provide them with information on how to better prevent these abductions or stop them while in progress, among other things. DOJ also provides training to more than 4,500 local law enforcement officers each year about how to respond to cases of missing children, including parental abduction cases. In addition, since 2007, the Transportation Security Administration (TSA) within DHS has partnered with the National Center for Missing and Exploited Children (NCMEC) and other agencies to distribute AMBER Alerts at airports across the country to help prevent child abductions involving airline flights. AMBER Alert programs are voluntary partnerships between law enforcement agencies, broadcasters, and transportation agencies to use the Emergency Alert System to air a description of an abducted child and the person suspected of abducting the child to assist in the search for and safe recovery of the child. Since the first local AMBER Alert program was launched in Texas in 1996, similar programs have been implemented at state and local levels across the United States, creating a nationwide alert network that has successfully led to the recovery of over 500 children. However, because a main criterion for disseminating an AMBER Alert is that law enforcement officials must believe the abducted child is in imminent danger of serious bodily injury or death, many international parental child abductions may not be entered into the AMBER Alert system, since physically harming a child is usually not the abducting parent's intent. According to DOJ, in many parental abduction cases, the abducting parent's goal is to permanently alter custodial access by taking the child across state or international borders. Nongovernmental organizations also indirectly support the prevention of international parental child abductions, often in collaboration with local, state, and federal agencies. For example, NCMEC offers a variety of services that aid in national and international searches for missing children, including a toll-free hotline; photograph and poster distribution; technical case analysis and assistance; recovery assistance; training and coursework for investigators; and legal strategies, among other services, which indirectly support the prevention of abductions involving airline flights.
In addition, the Association of Missing and Exploited Children's Organizations, Inc., has a "sub-AMBER alert" program that allows local law enforcement officers to contact businesses in airport terminals and notify them to look out for a child believed to be abducted and possibly at their airport, so that staff can contact local law enforcement officers or airport police to halt the abduction. Even with these efforts in place, preventing international child abductions can be very difficult and depends on a number of factors, including the parent's knowledge of the abduction risk and the existence of clear custody status for the child. While prevention efforts available to parents, such as contacting the State Department to request a passport alert for a child, generally require that the parent has some knowledge beforehand of the risk that an abduction might occur, abductions often occur when the parent has no such knowledge. In general, prevention efforts also require clear child custody status. For example, in order for a parent to add a child and suspected abductor to DHS's Prevent Departure list, the requesting parent must demonstrate that he or she has parental or custodial rights to the child and that there is a court order barring the child from traveling internationally with the suspected abductor. However, custody laws vary by state, and many parents may not have such clear custody documentation available. For example, according to DOJ, many unmarried parents may not be aware that they would need to pursue court procedures to obtain a custody order for their child. Such documentation is often essential for a parent who wishes to demonstrate custodial rights in any context when no court order exists, because states vary widely in their statutory presumptions regarding the child custody rights of unmarried parents. In addition, according to DOJ, many parents in these situations cannot afford to hire attorneys to obtain the necessary documentation of custody. In cases where the parent is unaware of the abduction risk, and where there is no documentation of the child's custody status, preventing such abductions is extremely difficult. Concerns about increasing cases of international parental child abductions have led federal agency officials, nongovernmental organizations, and others to suggest a number of potential options aimed at preventing such abductions. Based on input from various stakeholders, we identified two options that directly address the issue of international parental child abductions involving airline flights: a parental-consent letter requirement and a high-risk abductor list of adults. We further explored these options with airlines, federal agencies, and nongovernmental organizations to understand their views on the options' potential effectiveness and to identify their advantages and limitations. A parental-consent letter requirement could specify that children traveling alone or without both parents on international flights be required to have a note of consent from the nonaccompanying parent(s) authorizing the child to travel. DHS recommends, but does not require, parents to travel with such documentation. As previously mentioned, certain foreign countries have similar parental-consent letter requirements in place.
Under such a consent requirement option (and pending the grant of authority), airline or security staff, such as TSA employees, could check that all children traveling internationally have such parental-consent letters as a condition of boarding an international flight. A program to identify adults at high risk for committing child abductions could operate similarly to DHS's Prevent Departure program but would apply to U.S. citizens—such a program may require additional statutory authority. DHS could provide a list of children at high risk for abduction, and family members identified as potential abductors, to the airlines, who would then prevent those placed on the list from boarding international flights if traveling together. DHS would only add names of potential abductors and children at risk to this list if the request came from designated law enforcement officers or federal officials, but not from the parents. Similar to Prevent Departure, DHS could require law enforcement officers and State Department officials to provide court orders specifying that a child, regardless of age, is banned from traveling internationally with a U.S. citizen parent or someone acting on behalf of the parent. Federal agency, airline, and nongovernmental organization stakeholders reported that the presence of some type of parental-consent letter requirement may be effective in deterring some parents from attempting to abduct their children abroad. One nongovernmental organization official noted that this requirement may deter a parent from attempting an abduction, since the parent would have to take the parental-consent requirement into consideration before going to the airport, thus deterring abductions that might occur without advance planning. However, these stakeholders also identified a limitation that may compromise the effectiveness of such a consent requirement: most stakeholders pointed out that it would be very easy to produce fraudulent consent letters. Of the eight airlines we surveyed on the two options, half reported that this measure would not be effective. Several stakeholders noted that a parental-consent requirement could be more effective if parents were required to have the letters notarized. Even with a notarization requirement, however, the majority of stakeholders we met with told us that parents who want to abduct their children abroad could still try to forge the consent letter documents, and airline or TSA staff may have difficulty verifying the authenticity of such letters, if they had the authority to do so. Airline officials told us that their staff does not have the training or authority to verify the authenticity of such documentation. DOT officials added that, if a consent document were required for all children traveling internationally, airline employees would not be able to call the parent not traveling to verify and confirm the parent's consent, given the sheer volume of children traveling. As a result, another organization may need to provide airline and security staff with assurance of the authenticity of these consent letters. An official at Child Find of America stated that the consent letters would only be successful if the letters came with an additional requirement for parents to submit the letters to a federal authority in advance to verify the authenticity of the letters. She added, however, that this additional verification step would be very burdensome for parents and the federal agency tasked with the verification responsibility.
Stakeholders also cited the following three key issues to consider before implementing such a requirement:
- Parental-consent letters could place a major burden not only on parents—particularly single parents—but on all airline travelers. Several stakeholders said that single and divorced parents would have to take burdensome additional steps to contact the other parent and obtain their permission for the international travel. This requirement could be particularly difficult for a single parent traveling legitimately with a child if that single parent faced an uncooperative ex-spouse or if the parent had to provide documentation such as custody papers. This requirement could impact and burden parents and children traveling when there is very little risk of an abduction situation. A State Department official noted that a separate line may be needed at the airport for children traveling internationally if a parental-consent letter requirement were in place, so as to not delay other travelers. Similarly, NCMEC officials told us that, in the current airport configuration—where travelers with domestic and international destinations enter the same security screening lines—checks to verify parental consent that occur during security screening would be quite burdensome for all travelers due to the extra time needed to make such verifications.
- A parental-consent requirement could significantly increase an airline's liability. For example, a domestic airline official told us that, if a family member were to forge such a note and abduct a child to another country, the left-behind parent could file a lawsuit against the airline for failing to prevent the abduction. An International Air Transport Association official added that airlines do not keep copies of documents presented at check-in, so it could be difficult for an airline to defend against such a lawsuit. He added that carriers do not capture and hold copies of passengers' documents following check-in because this could, among other things, violate national personal data protection or data privacy laws.
- Airlines may face financial losses depending on whether airlines would have to deny boarding to passengers not having the required parental-consent letter. For example, a domestic airline official told us that this requirement could impact the airlines financially if airlines were required to deny boarding, and potentially refund travelers, for lacking the required parental-consent letter. Four of the seven domestic airlines that responded to our survey offer refunds to passengers they refuse to transport due to a lack of identification. Consequently, these airlines may be financially liable for denying boarding to those who do not have the required consent letters.
In addition, neither the airlines nor DHS currently has the authority to implement a parental-consent letter requirement and would thus need to seek authority and necessary resources before such a requirement could take effect. A high-risk abductor list may be helpful in preventing international parental child abductions involving airline flights in cases where a U.S. citizen has been identified as a high risk for attempting an abduction. Stakeholders, however, pointed out that the relatively difficult and time-consuming steps needed to place a child and potential abductor on this list may limit its effectiveness.
A majority of the airline stakeholders surveyed on the options added that such a list would only be effective if incorporated into the security screening processes already in use and would not be effective if the airlines were charged with managing this list. In addition, DHS will not be able to establish such a high-risk abductor list without statutory authority and potentially additional financial resources. Stakeholders who viewed this list as effective emphasized that it would be helpful in keeping family members already identified as high risk for abducting a child from boarding an international flight with the child of concern. The results of the aforementioned American Bar Association survey of 97 left-behind parents suggested that at least some of the parents were aware of the abduction risk before the abduction occurred; 51 percent of the surveyed parents reported that they had taken measures to prevent the abduction beforehand, such as seeking supervised visitation arrangements, custody orders prohibiting removal of the child from the jurisdiction, and passport denial or restrictions. As previously discussed, DHS officials told us that their Prevent Departure list—which requires a custody or court order specifically banning the child in question from traveling internationally with a specified parent or someone acting on behalf of the parent—is quite effective at preventing abductions involving non-U.S. citizen abductors. Officials at the State Department added that a similar list for U.S. citizens would be very effective in cases where there was already a custody or court order preventing the child from traveling abroad with the specified parent. Some nongovernmental organization stakeholders reflected similar views. For example, an official at Child Find of America noted that a list would be helpful in cases where an abduction attempt is anticipated. Several stakeholders cited the relatively difficult, time-consuming steps needed to place a child and potential abductor on a high-risk abductor list as a factor limiting its effectiveness. Parents would need to obtain a custody or court order banning the child from traveling internationally with the suspected adult to provide assurance that their request to include a child on the list stems from authentic abduction concerns rather than other conflicts between parents, but they may face difficulty in having a judge issue such a ban. DHS officials told us that many judges who deal with custody issues simply are not aware of the risk for international parental child abductions and thus may fail to issue a court order banning such travel. Officials at the State Department added that some judges are not adequately trained to issue court or custody orders that ban international travel in cases where abduction is a real concern. Obtaining such a custody order may require a parent to obtain support from local law enforcement to prove that a suspected abductor has previously attempted to abduct the child or has refused to follow a child custody determination, among other things. Stakeholders emphasized, however, that local law enforcement may view such custody disputes as a private matter and would thus be reluctant to get involved. In addition, the steps needed to put a potential abductor on such a list may not occur swiftly enough to prevent an anticipated abduction. 
Three nongovernmental organization stakeholders told us that an abduction could occur before a parent succeeds in involving law enforcement, courts, and others and then taking the needed steps to put the abductor on the high-risk list. Six of the eight airline stakeholders we surveyed about the options reported that a high-risk abductor list would only be effective if the list was incorporated into current security screening processes already in use, such as Secure Flight; the Prevent Departure list is not part of Secure Flight. A few airline stakeholders added that any other administration of the list would burden them with creating new systems for administering such a list. Officials at two airlines told us that a high-risk abductor list would benefit from additional information beyond just names, such as biometric information, to ensure that the correct travelers are identified. An official at a foreign airline added that his airline would have to develop a customized program to input such biometric information, which would be costly. DHS's Prevent Departure list, however, is not incorporated into Secure Flight and has not required airlines to develop customized programs to administer it, indicating that airlines may not need to develop such programs for a high-risk abductor list. Thus, a high-risk abductor list similar to Prevent Departure may not significantly burden airlines. As a final barrier, DHS may need additional statutory authority and potentially additional financial resources to implement a high-risk abductor list for U.S. citizens. As previously discussed, the Immigration and Nationality Act provided departure control officers with the statutory authority necessary to prevent non-U.S. citizens from departing the country through the Prevent Departure program. This authority is insufficient to establish and administer such a list for U.S. citizens. Consequently, DHS would need to explore other existing statutory authority or seek new authority to administer a program similar to the Prevent Departure program that would apply to U.S. citizens. In addition, DHS and the State Department may need additional financial resources to hire additional staff to handle incoming requests and collaborate with airlines to prevent boarding. Whether these departments could obtain such additional resources is unclear. Although it is very difficult to prevent an international parental child abduction, and we found that the options for doing so are limited, DHS may have the potential to better prevent high-risk abductors—as identified through court and custody orders—from taking children out of the country. DHS already has a program it finds to be effective at preventing non-U.S. citizens identified as high risk from undertaking international parental child abductions. Thus, a similar program designed to prevent U.S. citizens identified as high risk for undertaking these abductions from departing on an international flight with an identified child could be appropriate. While such a program would not prevent all international parental child abductions on airline flights, it may help in developing a comprehensive approach to keep people identified as high risk for attempting such abductions from succeeding. Where options for directly preventing international parental child abductions on airline flights are limited, such an improvement may be a step forward.
To further help prevent international parental child abductions involving airline flights, particularly for persons identified as high risk for attempting such abductions, we recommend that the Secretary of Homeland Security consider creating a program similar to the child abduction component of the Prevent Departure program that would apply to U.S. citizens. We provided a draft of this report to the Departments of Homeland Security, Justice, State, and Transportation for review and comment. The Departments of Justice and State had no comments. The Department of Transportation provided technical clarifications, which we incorporated into the report as appropriate. The Department of Homeland Security provided written comments, which are reproduced in appendix II. DHS concurred with our recommendation and agreed with our conclusions. However, while stating its commitment to working with the Department of State and other stakeholders to better prevent these abductions, DHS also discussed challenges to viably implementing a high-risk abductor list for U.S. citizens, including "potential constitutional, operational, privacy, and resource issues." We are sending copies of this report to the appropriate congressional committees and the Secretaries of Homeland Security, Justice, State, and Transportation. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In response to your request, this report provides (1) information on policies and measures airlines, federal agencies, and other entities have to prevent international parental child abductions involving airline flights and (2) options federal agencies, nongovernmental organizations, airlines, and others could consider to prevent international parental child abductions involving airline flights, as well as the advantages and limitations of those options. In determining the policies and measures federal agencies, airlines, and others have to prevent international parental child abductions, we examined relevant laws and regulations and met with, and obtained and analyzed information provided by, the federal agencies (Departments of Homeland Security, Justice, State, and Transportation) and seven child advocacy associations. During these meetings, we obtained and analyzed information related to major policies and measures taken to prevent international parental child abductions. We also met with, and obtained and analyzed information provided by, two airline associations to determine what policies and measures airlines had in place to prevent international parental child abductions. In addition, we surveyed eight airlines regarding their policies and measures to prevent international parental child abductions. See our discussion later for more detail regarding the eight airline companies we surveyed. Our focus was primarily on international parental child abductions that occur when a parent, family member, or person acting on behalf of the parent or family takes a child from the country in violation of the rights of the custodial parent or guardian left behind.
In determining options federal agencies and others could consider to prevent international parental child abductions on airline flights, including the advantages and limitations of these options, we obtained a list of hypothetical options for preventing international parental child abductions on airline flights from the federal agencies, child advocacy associations, and airline associations mentioned above. From the list of all hypothetical options, we identified two that directly addressed international parental child abductions on airline flights. We then designed and administered a Web-based survey of domestic and foreign airlines and interviewed nongovernmental organizations representing child advocacy associations on their views regarding the effectiveness, key issues, advantages, and limitations of the two measures that directly address preventing international parental child abductions on airline flights. A large number of the questions on the survey were closed-ended, meaning that respondents were provided with a list of possible responses. Most of the questions, however, were open-ended, meaning that respondents were provided with space to explain or elaborate on their answers. In developing the questionnaires, we took steps to ensure the accuracy and reliability of the responses. To ensure that the questions were clear, comprehensive, and unbiased, and to minimize the burden on respondents, we sought input on our question set from the Air Transport Association and officials from a domestic airline, as well as internal GAO stakeholders, including methodological specialists. In determining which airlines to survey, we initially selected all eight major domestic airlines that travel internationally. We also selected seven flag carriers (airlines registered under the laws of countries whose respective governments give them a partial or total monopoly over international routes) representing the countries to which about 50 percent of the children abducted from the United States from fiscal years 2007 through 2009 were taken, based on cases in which parents requested State Department assistance in recovering their children. These countries include Mexico, Canada, United Kingdom, Germany, India, Japan, and Nigeria. See table 1 for a listing of the destination countries that accounted for the most international parental child abductions from the United States. We learned that the United States had only recently allowed Nigerian airlines to conduct operations in the United States and consequently eliminated the Nigerian carrier from our sample. The eight domestic airlines we contacted accounted for 64 million of the 83 million (77 percent) international passengers flying on U.S. airlines in 2009. Of the eight domestic airlines we contacted, six responded to our survey, and a seventh (United) completed less than half of the questionnaire. These six airlines accounted for 42 million (51 percent) of the 83 million international passengers flying on U.S. airlines in 2009. Of the six foreign airlines remaining in our sample, two completed the entire questionnaire—representing the countries that had the second and sixth most children abducted from the United States between fiscal years 2007 and 2009—and a third (Aero Mexico) completed less than half, representing the country that had the most children abducted from the United States from fiscal years 2007 through 2009.
Because only two foreign carriers provided us with usable information, our data are not reflective of the views of most foreign carriers representing countries outside North America, including those in Europe and Africa. The airlines that fully responded to our survey are listed in table 2. We also obtained views on additional measures for preventing international parental child abductions from five nongovernmental child advocacy organizations. From the seven nongovernmental child advocacy organizations we initially met with or gained preliminary information from, we surveyed (through interviews) the five that had nonprofit status according to Internal Revenue Service information; four of the five fully responded to our survey, while the fifth (the Association of Missing and Exploited Children's Organizations) provided responses from its membership on the high-risk abductor list option. The nongovernmental child advocacy organizations that fully responded to our survey are listed in table 3. We analyzed airline and nongovernmental child advocacy organization responses to assess the advantages, limitations, and key issues the airlines and nongovernmental organizations identified for the two main options in order to determine the options' practicality. In addition to the individual named above, Maria Edelstein (Assistant Director), Samer Abbas, Jessica Bryant-Bertail, Lauren Calhoun, Pamela Davidson, and Amy Rosewarne made key contributions to this report. | Since 2000, the annual number of new international parental child abduction cases reported to the Department of State--many of which likely involved air travel--has nearly tripled. Such abductions occur when a parent, family member, or person acting on behalf thereof takes a child to another country in violation of the custodial parent's or guardian's rights. Once a child is abducted, the laws, policies, and procedures of the foreign country determine the child's return. Thus, preventing such abductions can help keep parents and children from being separated for a long period or indefinitely. As requested, this report addresses (1) the policies and measures airlines, federal agencies, and others have to prevent international parental child abductions on airline flights and (2) options federal agencies, airlines, and others could consider for helping prevent such abductions on airline flights, as well as the advantages and limitations of those options. To perform this work, GAO reviewed applicable laws and policies, interviewed government officials, and surveyed airlines and nonprofit associations. As private sector entities, airlines do not have the authority to verify or enforce court and custody orders in an effort to prevent international parental child abductions and thus, upon request, work in cooperation with law enforcement. The Department of State has measures such as a dual-signature passport requirement and a passport notification program that are focused on preventing abductions before abductors reach an airport. The Department of Homeland Security (DHS) has measures that are focused on prevention when abductors reach the airport, such as a Prevent Departure list, which prevents non-U.S. citizens from departing on an international flight with a child of concern if certain criteria are met. DHS also checks the National Crime Information Center Missing Persons File and has partnered with other agencies to distribute AMBER Alerts at airports if child abductions meet certain criteria.
Two options--a parental-consent letter requirement and a high-risk abductor list--were cited by stakeholders (federal agency, airline, and nongovernmental organization officials) as having potential to prevent abductions, but consent letters may be impractical to adopt, while a high-risk list may help prevent some abductions. A consent letter policy could require that children traveling alone, or without both parents, have a note of consent from the nonaccompanying parents authorizing the child to travel. Stakeholders GAO met with and surveyed noted that such consent letters may be effective in deterring some abductions, but the relative ease of forging a letter, along with other significant issues, indicates that such a requirement is not a practical option. A high-risk abductor list program could operate similarly to the Prevent Departure list program but would apply to U.S. citizens. While stakeholders pointed out certain limitations to such a high-risk abductor list--such as the relatively difficult and time-consuming steps needed to place a child and potential abductor on this list--such a list may be helpful in preventing abductions on airline flights. GAO recommends that DHS consider creating a program similar to the child abduction component of its Prevent Departure program that would apply to U.S. citizens. DHS concurred with the recommendation, but cited challenges toward implementing it, such as potential constitutional, operational, privacy, and resource issues.
As part of its responsibilities for civil aviation security, TSA enforces laws and regulations requiring that passengers be screened to ensure that potential weapons, explosives, and incendiaries are not carried into an airport sterile area or on board a passenger aircraft. To provide the general public with guidance on the types of property TSA policy prohibits from being brought into airport sterile areas and on board aircraft, TSA publishes, and on occasion has updated, an interpretive rule in the Federal Register—known as the PIL—that, among other things, lists items prohibited from being carried on a passenger's person or in the passenger's accessible property into airport sterile areas and into the cabins of passenger aircraft. TSA also maintains a current list of prohibited items on its public website. The list is not intended to be exhaustive, and TSOs may exercise discretion, informed by the categories and examples included in the PIL and their standard operating procedures, to prohibit an individual from carrying an item through the checkpoint if, in the screener's determination, the item could pose a threat to transportation (i.e., it is or could be used as a weapon, explosive, or incendiary), regardless of whether it is on the PIL. TSA has divided prohibited items into nine categories. Table 1 provides a description of the items included in the nine categories. Individuals are prohibited from carrying these items into an airport sterile area or on board an aircraft either in their carry-on bags or on their person. At passenger screening checkpoints, TSOs inspect individuals and property as part of the passenger screening process to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other item included on the PIL into the sterile area or on board an aircraft. As shown in figure 1, TSOs use the following methods, among others, to screen passengers: X-ray screening of property; advanced imaging technology scanner (often referred to by the public as a body scanner) or walk-through metal detector screening of individuals; pat-down screening of individuals; physical searches of property; explosives trace detection; and behavioral observation. TSA has developed checkpoint screening standard operating procedures that establish the process and standards by which TSOs are to screen passengers and their carry-on items at the screening checkpoint. According to TSA standard operating procedures, passengers may be screened through the use of a walk-through metal detector, an advanced imaging technology scanner, or a pat-down. Passengers are also generally required to divest their property, including the removal of shoes and outer garments, and empty their pockets. During this screening process, TSOs look for any prohibited or dangerous items on a passenger or among the passenger's property. Ordinarily, passenger screening at the checkpoint begins when the individual divests and places his or her accessible property on the X-ray conveyor belt or hands such property to a TSO. A TSO then reviews images of the property running through the X-ray machine and looks for signs of prohibited items. The passengers themselves are typically screened via a walk-through metal detector or an advanced imaging technology scanner, and passengers have the option to request screening by a pat-down if they do not wish to be screened via the advanced imaging technology scanner. TSA uses additional screening techniques on a random basis to provide an additional layer of security.
These additional screening techniques, referred to as an Unpredictable Screening Process, are prompted automatically by the walk-through metal detector, which selects a certain percentage of passengers at random for additional screening. For example, TSA uses explosives trace detection (ETD) to swab the hands or property of passengers on a random basis to screen for explosives. According to TSA officials, because of statutory and other considerations, TSA has revised the PIL six times since its inception in February 2003 (see table 2). In general, TSA modifies the PIL as necessary when circumstances prompt the agency to revise the items listed as prohibited from being carried into an airport sterile area or on board an aircraft. For example, in 2005, TSA modified the PIL in response to a statutory requirement to prohibit passengers from carrying any type of lighter on their person or in their accessible property on board aircraft. Later that year, TSA also modified the PIL to allow passengers with ostomies to carry small ostomy scissors with them onto aircraft because the agency had heard from persons with ostomies that they avoid flying, in part, because they are not allowed to carry the scissors they need onto the aircraft. In 2006, TSA further modified its policy with respect to permitted and prohibited items in response to a specific terrorist threat by initially prohibiting the carriage of liquids, gels, and aerosols on board an aircraft, and subsequently permitting passengers to carry limited amounts of liquids, gels, and aerosols on board an aircraft in a manner prescribed by the agency. See TSA Security Directive 1544-14-02 (Feb. 6, 2014) and TSA Emergency Amendment 1546-14-01 (Feb. 6, 2014) (imposing additional security requirements on U.S. and foreign air carrier operations, respectively, to and from the Russian Federation). The security directive and emergency amendment did permit the carriage of medication in liquid, gel, or aerosol form. To ensure such measures are carried out, TSA generally issues security directives and emergency amendments and, as circumstances permit, coordinates and consults with host governments, the International Civil Aviation Organization (ICAO), and other affected parties. TSA officials told us that when evaluating whether or not to change the PIL, they generally consider the following four factors: (1) the security risks posed by each item on the current PIL or potential item to be added, (2) opportunities a potential change may offer to improve checkpoint screening and the passenger experience, (3) harmonization with international aviation security standards and recommended practices published by ICAO, and (4) stakeholder perspectives on the change. For example, as part of a broader set of potential changes related to adopting a risk-based security approach to passenger screening, TSA formed a working group in 2011 to conduct a risk-based review of the PIL; the group assessed the individual risk posed by each PIL item and then considered how removing a particular item, or set of items, would present opportunities, constraints, and challenges for TSA security operations at the checkpoint. TSA officials stated they then considered how any changes would affect TSA personnel costs and the passenger experience, such as likely screening throughput time if TSA personnel no longer had to screen for particular items.
TSA then evaluated how interested parties such as Congress, airlines, and flight attendants would respond to permitting particular items on board an aircraft. TSA also considered ICAO guidance on prohibited items and took into account whether any changes it made to the PIL would further align TSA’s guidelines for prohibiting items with ICAO standards and recommended practices. TSA officials told us that TSA does not have policies that require a specific process to be followed or a specific set of criteria to be used when evaluating potential modifications to the PIL, since the circumstances for each potential PIL change are unique. Officials stated the steps they take when considering a modification often vary depending on the nature of the proposed revision. In its 2011 review of the PIL, TSA’s working group addressed these factors as follows: Impacts on security risk: The working group evaluated the risk to transportation security presented by each prohibited item by assessing the likelihood of an adversary successfully using the item to achieve different terrorist objectives. TSA assigned risk ratings of high, medium, low, or none to each item on the PIL for each terrorist objective. TSA assessed the levels of risk posed by small knives for each terrorist objective. Impacts on screening operations: The working group also considered how the removal of small knives would affect checkpoint screening operations. For example, TSA estimated, using historical data prior to 2009, that approximately half of all nonfirearm, nonincendiary voluntarily abandoned property (VAP) left behind at the checkpoint consisted of small knives with blades shorter than 2.36 inches. TSA concluded that TSOs spent a disproportionate amount of their time searching for these items. TSA reasoned that removing small knives from the PIL would have a positive impact on screening operations since TSOs would no longer have to detect and deal with small knives at the checkpoint, reducing direct and indirect personnel costs, increasing passenger throughput, and reducing distractions to TSOs. TSA also concluded that not requiring TSOs to screen for small knives would in turn improve their ability to screen for higher-threat items, such as IEDs, and thus reduce risk overall. For example, the TSA risk assessment cited a research study focused on how success rates for screening items vary based on what screeners look for. TSA cited the study in support of its assertion that TSOs would be more successful identifying IEDs if they did not have to screen for small knives. Harmonization with international standards and guidance: TSA also considered the harmonization of the PIL with ICAO standards and recommended practices. TSA concluded that making certain changes to the PIL, such as removing small knives, could better harmonize its policies with ICAO guidance. Specifically, ICAO guidance provides that member states should consider prohibiting knives with blades of more than 6 centimeters (approximately 2.36 inches) from being carried on board aircraft. TSA concluded that there would be operational and policy benefits from harmonizing the PIL with ICAO guidance because greater harmony among the various countries promotes greater cooperation on all security issues. Further, TSA asserted that inconsistencies between the PIL and the ICAO guidance could create confusion for passengers when items were allowed onto aircraft in one country, but prohibited in another. 
Stakeholder perspectives: The TSA working group also noted the need to coordinate with stakeholders on some of the options for modifying the PIL, as these options, if implemented, were likely to cause concern among some of these groups. For example, for the working group's proposed recommendation to remove small knives from the PIL, TSA officials noted past concerns from stakeholders over the prospect of allowing small knives or other items on board aircraft and stated that coordination and collaboration with key stakeholders would be a critical success factor for implementation. They also noted that stakeholder support would be greatly enhanced by a unified approach to communicating to stakeholder groups that TSA planned to shift its resource focus from finding small knives to other efforts that would result in better security. Although TSA recognized that allowing small knives on planes would raise the potential risk of other terrorist aircraft scenarios, TSA concluded the change would not raise the overall risk of catastrophic aircraft destruction. However, rather than make an immediate decision about changing the PIL, TSA elected to suspend working group activities and delay making any decisions while it focused greater attention and TSA resources on other emerging risk-based security initiatives, such as the Known Crewmember and expedited passenger screening programs. TSA resumed working group evaluations of the PIL in July 2012. As previously discussed, TSA used its risk assessment to conclude that overall risk to aviation security would be lowered by allowing small knives onto aircraft because security screeners would be able to better focus on identifying higher-risk items, such as IEDs. However, TSA did not conduct sufficient analysis to show that removing small knives would ultimately reduce risk and improve checkpoint screening. TSA's reasoning for its decision to remove small knives from the PIL was to further align the PIL with ICAO guidance on prohibited items, decrease time spent rescreening or searching bags for these items, and better enable its TSOs to focus more attention on higher-threat items, such as IEDs, thereby potentially increasing security. DHS guidance for managing and assessing risk states that risk assessments should evaluate all the risk scenarios considered by the assessment. In its risk assessment, TSA assessed the risk posed by small knives for each terrorist objective; however, it did not complete data collection or an evaluation to determine whether TSOs would actually be better able to identify high-risk items, such as IEDs, if they were not looking for small knives. Furthermore, the research cited by TSA did not evaluate a situation where screeners had to differentiate between knives with blades greater than or less than 2.36 inches in length, as proposed by TSA. Without conducting a more valid evaluation of the actual proposed change, TSA could not sufficiently evaluate whether the added risk of allowing small knives onto aircraft would be offset by a reduction in risk achieved through improved screening for IEDs. Such an analysis would have allowed TSA to actually measure whether airport screeners would be better able to identify explosives if they no longer had to screen for small knives, and better determine whether the added risk of allowing small knives onto aircraft would be offset by potential efficiencies in screening for explosives.
Moreover, 25 of 35 TSOs (including supervisory TSOs) and 8 of the 10 Transportation Security Managers we interviewed during visits to six airports did not agree that allowing small knives on planes would have helped them better screen for IEDs, as TSA concluded in its risk assessment. Four TSOs and 1 supervisory TSO we interviewed noted that the exact size of a knife is difficult to ascertain on an X-ray. Therefore, these 4 TSOs and the supervisor believed they would have to open bags in many instances and physically measure the knife to make sure it conformed to TSA's definition of a permissible knife—a nonfixed blade less than 2.36 inches in length and not exceeding 0.5 inch in width, with no locking mechanism and no molded grip or nonslip handle. TSA officials told us that the training provided to TSOs specified that each TSO was expected to use his or her judgment in determining, based on the X-ray image, whether a knife was permissible or not. We previously recommended in 2007 that TSA strengthen its evaluation of proposed modifications to the PIL and other checkpoint screening procedures to better justify its decisions. Specifically, in April 2007, we found that TSA did not conduct the necessary analysis to support its 2005 decision to remove small scissors (4 inches or less) and certain tools (7 inches or less) from the PIL. As with TSA's more recent rationale for removing small knives from the PIL, TSA stated that the reason for its decision to remove small scissors and tools was to shift TSO focus from items considered by TSA to pose a low threat to items considered to pose a high threat, such as IEDs, as well as to better allocate TSA resources to implement other security measures that target IEDs. However, we found that TSA did not conduct the necessary analysis to determine the extent to which removing small scissors and tools from the PIL could improve TSO performance in detecting higher-threat items, nor did TSA analyze other relevant factors such as the amount of time taken to search for small scissors and tools and the number of TSOs conducting these searches. As a result, we recommended that TSA, when operationally testing proposed modifications to its checkpoint screening procedures, such as the PIL, develop sound evaluation methods to assist it in determining whether proposed procedures would achieve their intended result, such as enhancing the agency's ability to better detect prohibited items, and free up existing TSO resources. TSA conducted one evaluation on proposed X-ray screening procedures and one test on a proposed ETD procedure. Regarding the X-ray procedure change, TSA collected and analyzed the necessary data to determine whether the X-ray screening procedures would improve passenger throughput. However, in its evaluation of ETD devices, TSA was not able to provide documentation that explained the intended purpose of the proposed ETD procedure, the type of data TSA planned to collect, or how the data would be used. Thus, TSA has not consistently developed and applied sound evaluation methods when operationally testing changes to standard operating procedures, as we recommended in April 2007. Without sound evaluation methods, TSA will be limited in its ability to determine whether proposed modifications to standard operating procedures—such as the PIL—will result in the intended risk reduction, for example, by enhancing the agency's ability to better detect IEDs and other high-risk items.
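The permissibility criteria in TSA's 2013 proposal, as described above, involve several distinct attributes that a TSO would have had to judge largely from an X-ray image. The following minimal sketch, written in Python purely as a hypothetical illustration (it is not TSA code, and the attribute names are our assumptions), restates those criteria as a single rule to underscore how many separate judgments each screening decision would have entailed:

    # Hypothetical sketch of the knife-permissibility criteria described above,
    # under TSA's 2013 proposal as characterized in this report. Illustrative
    # only; it is not TSA logic or guidance, and the attribute names are assumed.

    from dataclasses import dataclass

    @dataclass
    class Knife:
        blade_length_in: float   # blade length in inches
        blade_width_in: float    # blade width in inches
        fixed_blade: bool        # fixed (non-folding) blade
        locking_mechanism: bool  # blade locks open
        molded_grip: bool        # molded grip or nonslip handle

    def would_have_been_permissible(k: Knife) -> bool:
        """Apply the proposal's stated criteria: a nonfixed blade shorter than
        2.36 inches, no wider than 0.5 inch, with no locking mechanism and no
        molded grip or nonslip handle."""
        return (not k.fixed_blade
                and k.blade_length_in < 2.36
                and k.blade_width_in <= 0.5
                and not k.locking_mechanism
                and not k.molded_grip)

    # Example: a small folding pocket knife versus an otherwise identical
    # knife with a locking blade.
    print(would_have_been_permissible(Knife(2.0, 0.4, False, False, False)))  # True
    print(would_have_been_permissible(Knife(2.0, 0.4, False, True, False)))   # False

Even stated this compactly, the rule depends on precise length and width measurements and on design features such as locking mechanisms and molded grips, which helps explain why the TSOs we interviewed expected to open bags and physically measure knives rather than rely on the X-ray image alone.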
TSA consulted both internal and external stakeholders during development of its decision to remove small knives from the PIL, but it did not adequately consult with several external aviation stakeholder groups. Some of these groups later raised strong objections after TSA publicly announced the change. GAO’s Standards for Internal Control in the Federal Government states that an organization’s management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency’s operations and its achievement of organizational goals. These internal control standards further state that management is responsible for developing detailed policies, procedures, and practices to fit its agency’s operations and to ensure that they are built into, and an integral part of, operations. Moreover, TSA’s risk assessment and other planning documents leading up to its proposal to remove small knives from the PIL called for full coordination and collaboration with key external stakeholders who might have reservations about the change before moving forward with any revisions to the PIL. In coordinating with stakeholders, TSA primarily consulted with internal groups who, according to TSA, were generally supportive overall of the proposed revision. Specifically, TSA’s efforts to coordinate with internal groups included the following: The TSA “Idea Factory”: Provides for online comments from TSA personnel with “likes” and “dislikes” similar to those on Facebook. According to TSA, results received from April 2011 through December 2012 indicated that some TSA personnel, including screeners, thought that removing small knives would be a good idea and would improve their ability to screen for IEDs. TSA National Advisory Council: An internal employee advisory committee representing TSA employees at various levels, including management, supervisors, and TSOs. An ad hoc subcommittee of this council reviewed the PIL in July 2012 and recommended removing small knives. Administrator Pistole’s informal discussions with TSOs: According to TSA, Administrator Pistole visited several airports, starting with a town hall meeting at Charlotte Douglas International Airport in December 2012, to gather input on the small knives proposal. TSA officials stated that, during these meetings, TSOs were supportive of the knives proposal and thought it would improve their ability to screen for explosive devices. Federal Air Marshal Service (FAMS): In February 2013, FAMS provided TSA management with comments that led to the small knives decision being more restrictive than TSA executives had originally considered. Specifically, the decision no longer allowed fixed or locking blades, or tactical or combat knives, regardless of length. FAMS officials stated they were generally opposed to allowing small knives on aircraft, but their concerns were mitigated by TSA management’s revision of the proposal. TSA also reached out to some external stakeholder groups who, according to TSA, were also supportive of the decision to eliminate small knives, including the Airline Pilots Association, Families of September 11, and DHS’s Homeland Security Advisory Council (HSAC). In addition, the TSA Administrator discussed possible changes to the Prohibited Items List in various appearances before Congress from 2010 to 2012 where he expressed the belief that screening personnel should concentrate on items that can cause catastrophic destruction of an aircraft. 
However, TSA did not discuss the proposal and solicit feedback from other relevant external stakeholders prior to its announcement. For example, TSA did not coordinate with or obtain input from the Aviation Security Advisory Committee (ASAC), which is its primary external advisory group for aviation security matters and whose membership includes various airline industry associations. Some relevant stakeholders—from whom TSA did not adequately solicit feedback—subsequently expressed strong opposition to the proposal, which contributed to TSA reversing its decision to implement the proposal. For example, TSA did not adequately consult with flight attendant groups during development of the small knives proposal, including the Association of Flight Attendants (AFA)—an ASAC member—and the Coalition of Flight Attendant Unions. Specifically, in a November 30, 2012, phone call primarily regarding another matter, TSA informed the AFA president that it was also planning to modify the PIL to remove small knives. AFA officials disagreed with this decision. However, this conversation occurred after TSA had developed the proposal for the decision over the preceding months. Shortly after this meeting, the TSA Administrator approved the decision to remove small knives from the PIL, which was followed by the March 5, 2013, public announcement of the decision. In response to feedback received after its March 5, 2013, public announcement of the small knives decision, TSA conducted a classified briefing with the ASAC. TSA officials met with the ASAC on April 22, 2013, more than a month after TSA's March 5, 2013, public announcement of its proposed change and just prior to its planned implementation date of April 25, 2013, and briefed ASAC members on the announced change. Immediately following this meeting, and on the basis of input received from ASAC members and other stakeholders, the TSA Administrator announced a delay in implementation of the change to allow the agency additional time to more fully coordinate with various external stakeholder groups and incorporate additional input on the change. Following the ASAC briefing and announcement of the delay, TSA held similar briefings with other stakeholder groups, including the Victims of Pan Am Flight 103 and the National Air Disaster Alliance/Foundation. On June 5, 2013, the TSA Administrator announced that, on the basis of extensive engagement with the ASAC and other stakeholder groups, including law enforcement officials and passenger advocates, TSA would continue to enforce the current PIL and not go forward with the decision to remove small knives from the list. As described earlier, TSA management officials stated that they do not have a formal policy or a specific process for evaluating PIL modifications; this also means that they have no specific requirements for coordinating with stakeholders during development of potential revisions to the PIL. TSA officials stated that if some of the steps for stakeholder coordination defined in other TSA processes for emergency amendments and security directives had been in place for PIL changes—such as obtaining key stakeholder input when developing a security policy change—they might have helped to ensure better stakeholder coordination during consideration of the knives change.
For example, TSA officials stated that, in hindsight, meeting with the ASAC and having more in-depth discussions with flight attendants during internal deliberations over modifying the PIL would have improved their efforts to fully coordinate and ensure they appropriately obtained and considered all key stakeholder perspectives. TSA officials also stated that they would have benefited from broader engagement earlier in the process with external groups, such as the ASAC and flight attendants. In the case of the small knives decision, the officials added that this broader and more timely engagement could have provided additional insight into the breadth and depth of potential concerns associated with removing certain items from the PIL. Clear processes outlining the appropriate types of stakeholders to consult—including when in the process stakeholders should be consulted—could help ensure that TSA's process for determining PIL changes is effective and efficient. For example, having clearly defined processes for stakeholder coordination could help ensure that TSA fully obtains and considers stakeholder views—consistent with internal control standards and TSA's planning documents—and could help mitigate potential inefficiencies resulting from reversing policy decisions. Going forward, a formal process to ensure the solicitation of input from relevant external stakeholders on proposed changes to the PIL, including when in the PIL modification process TSA officials are to coordinate with such stakeholders, would help provide reasonable assurance that TSA has a more complete understanding of stakeholder perspectives earlier in the decision-making process. This could help avoid rescission of those changes after investing resources in training TSOs and informing the general public of the change, as was the case with the proposed change to remove small knives from the PIL. According to TSA personnel from the Office of Training Workforce and Engagement (OTWE), TSA evaluates on a case-by-case basis what training tools it will use to ensure TSOs are adequately trained to implement a change to the PIL. However, TSA typically provides TSOs with one or more of the following methods to prepare and train them to implement a PIL change: Online training—This type of training is web-based and may be completed by the TSOs either individually or as a group. This training may include test questions to assess the TSOs' mastery of the material. Instructor-led classroom training—Training personnel conduct formal classroom training with multiple TSOs. Informational briefings, bulletins, and memos—These include oral briefings by TSA trainers or supervisors in addition to notifications TSA headquarters sends to field personnel. These methods may be used to notify field personnel of standard operating procedure changes or other matters. Trainers conduct briefings at the beginning of a TSO shift or may do so at another designated time, such as following a formal training session. The notifications sent by TSA headquarters may include "read and sign" memos, in the case of standard operating procedure changes, or may be presented online for other important matters. TSA training personnel stated that they maintain a flexible approach by using different methods to prepare TSOs to implement PIL changes since the changes have differed in their complexity, and therefore some PIL changes require less training and preparation than others.
TSA training personnel stated they work closely with the Office of Security Operations (OSO) to determine the proper approach to prepare TSOs to implement each change. As an example of how the training approach can vary based on the nature of the PIL change, the TSA training officials cited the 2005 change to prohibit all lighters from sterile areas or aircraft as one that required less TSO preparation, in terms of training, compared with the 2013 proposal to remove small knives. This was because the small knives proposal encompassed more variables with regard to which knives could be allowed (e.g., length of knife, type of knife, etc.) and therefore required more evaluation and judgment on the part of the TSOs to implement and operationalize the change correctly. By contrast, for the lighters change, TSOs simply had to know they would not allow any lighters past the checkpoint. In developing training for the rollout of the small knives decision, TSA required all TSOs to complete web-based training, individually or as a group, covering the specifics of the change. TSA's web-based training was followed by a "training brief" that a TSA trainer would provide either (1) immediately following a web-based training group session or (2) as part of a "shift brief" at the beginning of TSOs' work period (after completion of the web-based session) in order to allow TSOs to ask questions and gain clarity on the specifics of the PIL change. TSA required TSOs to complete all training within a 20-day window prior to the planned implementation of the approved knives proposal. TSA's web-based training sessions on the knives decision included images that provided examples of knives and sporting equipment that would not be allowed under the new guidelines. As shown in figure 2, these examples included illustrations of knives that would not be allowed into sterile areas or on board aircraft because of their size (length greater than 2.36 inches, width greater than 0.5 inch) or design features (e.g., locking blades, hand-molded grips). In addition, the training included X-ray images to train TSOs on what an allowed and a disallowed knife would look like on the screen. TSA's web-based training also covered the new procedures associated with knives that TSOs were to follow at the checkpoint, such as requiring travelers to remove any knives they may be carrying from their carry-on baggage or their person so that these items may be screened separately. Last, the web training tested TSOs on their knowledge of the new guidelines for the upcoming PIL change. Similar to the web-based training, TSA's training brief included example images of allowed and disallowed knives and sporting equipment. The training brief also included coverage of the revised standard operating procedures associated with this PIL change. Proposals to add or remove items from TSA's PIL can have critical impacts, not just on the security of millions of air travelers each year, but also on the efficiency and effectiveness of passenger screening at airport security checkpoints and on perceptions of risk by external stakeholders. Making determinations about potential PIL changes can take time and extensive consideration on the part of TSA as the agency balances its aviation security goals with efficient passenger throughput.
While we commend TSA’s efforts to consider the risk posed by each item on the PIL, and potential screening efficiencies that may be created by allowing small knives and other items to be carried onto aircraft, conducting the analyses to demonstrate the potential efficiencies and to show that such efficiencies would offset the added risk presented by allowing small knives to be carried on board aircraft would help ensure that critical changes to the PIL will have the intended impact on both security and efficiency. These types of analyses would be consistent with the previous recommendation we made that TSA should strengthen its evaluation of proposed modifications to checkpoint screening procedures. Further, TSA stated in its risk assessment and other planning documents that it would be critical to involve stakeholders in its deliberations regarding the change to the PIL. However, by not taking the necessary steps to sufficiently consult with relevant external stakeholders who may be directly affected by the proposal to allow small knives onto aircraft, TSA ultimately reversed its decision to implement the small knives change to the PIL after having already publicly announced its decision and invested resources in training and implementation. Developing a formal process for stakeholder coordination when making changes to the PIL would help to ensure that TSA’s decisions to change the PIL are fully informed by stakeholder perspectives, and help to ensure the efficient use of agency resources when revising and implementing PIL policies. To help ensure its proposed PIL modifications fully account for the views of key external stakeholders in the aviation industry, we recommend that the Transportation Security Administration’s Administrator establish a formal process to ensure the solicitation of input from relevant external stakeholders on proposed changes to the PIL, including when in the PIL modification process TSA officials are to coordinate with such stakeholders, before deciding to make a PIL change. We provided a draft of this report to DHS for comment. DHS provided written comments, which are summarized below and reproduced in full in appendix I. TSA concurred with our recommendation and described actions planned to address it. In addition, DHS provided written technical comments, which we incorporated into the report as appropriate. In concurring with our recommendation, DHS agreed with the need for a formal process to ensure the solicitation of input from relevant external stakeholders on proposed changes to the PIL. DHS stated that TSA’s senior leadership team works year-round to build and maintain strategic partnerships with various stakeholders to develop policy, share best practices, and participate in setting industry security standards, among other things, and that a formal process for making changes to the PIL will build upon these activities to ensure relevant stakeholders are offered the opportunity to engage with TSA and inform its decisions. DHS stated that a formal process should also make stakeholder engagement more disciplined and concise and result in decisions that are viable and acceptable. TSA has identified the Office of Security Policy and Industry Engagement and the Office of Security Operations as the appropriate offices to create such a process and plans for them to work closely with the Office of Intelligence and Analysis, the Office of the Chief Risk Officer, and the Office of Chief Counsel. TSA plans to create such a formal process by November 30, 2015. 
This process, when fully implemented, should address the intent of our recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate committees and the Secretary of Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In addition to the contact named above, Chris Ferencik (Assistant Director), Dan Rodriguez (Analyst-in-Charge), Mike Harmond, Brendan Kretzschmar, Thomas Lombardi, Stanley Kostyla, Susan Hsu, Kathryn Godfrey, Linda Miller, and Eric Hauswirth made key contributions to this report. | As part of its responsibilities for securing civil aviation, TSA ensures that all passengers and their accessible property are screened and prohibits individuals from carrying onto aircraft items that it determines to be a threat. TSA maintains a public list of such items, known as the Prohibited Items List, and updates it as necessary. In March 2013, TSA announced it would modify the PIL to allow small knives and certain sporting equipment onto aircraft, stating the change would result in more efficient security screening. However, several aviation industry groups opposed the decision, leading TSA to reverse its decision to implement the change. GAO was asked to review TSA's procedures for modifying the PIL. This report examines, among other issues, (1) on what basis TSA modifies the PIL and the extent to which TSA assessed risk when considering recent modifications to the PIL, and (2) the extent to which TSA involved stakeholders when considering these modifications. GAO reviewed TSA's standard operating procedures, risk assessment, documentation of its decisions and stakeholder outreach, and interviewed TSA officials at six airports. This is a public version of a report with Sensitive Security Information that GAO issued in December 2014. Information TSA deemed sensitive has been redacted. Transportation Security Administration (TSA) officials stated that TSA considers four factors when determining whether to make modifications to the Prohibited Items List (PIL), but the agency did not fully assess risk when considering its recent proposed PIL modifications, as GAO has previously recommended. TSA generally considers the following four factors when determining whether to modify the PIL: (1) the security risks posed by each carry-on item, (2) opportunities to improve screening operations and passenger experience, (3) harmonization with international security standards and practices, and (4) stakeholder perspectives. While TSA considered these four factors when making its March 5, 2013, decision to allow small knives and certain sporting equipment on aircraft, TSA officials also reasoned that the proposed change could help screening personnel focus less on lower-threat items, such as small knives, and more on higher-threat items, such as explosives, thereby potentially increasing security for passengers.
However, TSA did not conduct sufficient analysis to show that the increased risk of allowing small knives on aircraft—as determined in its risk assessment—would be offset by a resulting reduction in risk from improved screening for explosives. GAO has previously recommended that TSA strengthen its evaluation methods for operationally testing proposed modifications to checkpoint screening procedures, including changes to the PIL. However, TSA has not consistently implemented this recommendation. Conducting additional risk analysis would have allowed TSA to actually measure whether airport screeners would be better able to identify explosives if they no longer had to screen for small knives. GAO continues to believe that TSA should develop and apply sound evaluation methods when considering modifications to the PIL, as GAO recommended in April 2007. TSA did not effectively solicit feedback on its 2013 PIL decision from relevant external stakeholders, some of whom subsequently expressed strong opposition to the decision to remove small knives from the PIL. For example, prior to announcing its decision, TSA did not coordinate with or obtain input from the Aviation Security Advisory Committee, which is TSA's primary external advisory group for aviation security matters and whose membership includes various airline industry associations. Some relevant stakeholders, such as flight attendant groups—from whom TSA did not adequately solicit feedback—subsequently expressed strong opposition to the proposal, which contributed to TSA reversing its decision to implement the change after having already trained screening personnel for its implementation. Having a defined process and associated procedures in place to communicate with relevant stakeholders earlier in the decision-making process could allow TSA to ensure appropriate consideration of their perspectives in the decision-making process. Use of a defined process and associated procedures could also allow TSA to better avoid rescission of any future changes after investing resources in training screening personnel and informing the general public of the change—as happened in the case of TSA's 2013 PIL decision. GAO recommends that TSA establish a formal process for soliciting input from relevant external stakeholders on proposed modifications to the PIL before making changes to it. DHS agreed with the recommendation. |
Since the early 1990s, DOD has used contractors to meet many of its logistical and operational support needs during combat operations, peacekeeping missions, and humanitarian assistance missions, ranging from Somalia and Haiti to Bosnia, Kosovo, and Afghanistan. Today, contractors are used to support deployed forces at a number of locations around the world as figure 1 shows. A wide array of DOD and non-DOD agencies can award contracts to support deployed forces. Such contracts have been awarded by the individual services, DOD agencies, and other federal agencies. These contracts typically fall into three broad categories—theater support, external support, and systems support. Theater support contracts are normally awarded by contracting agencies associated with the regional combatant command, for example, U.S. Central Command or service component commands like U.S. Army-Europe, or by contracting offices at deployed locations such as Bosnia and Kosovo. Contracts can be for recurring services—such as equipment rental or repair, minor construction, security, and intelligence services—or for the one-time delivery of goods and services at the deployed location. External support contracts are awarded by commands external to the combatant command or component commands, such as the Defense Logistics Agency, the U.S. Army Corps of Engineers, and the Air Force Civil Engineer Support Agency. Under external support contracts, contractors are generally expected to provide services at the deployed location. The Army’s Logistics Civil Augmentation Program contract is an example of an external support contract. Finally, systems support contracts provide logistics support to maintain and operate weapons and other systems. Systems may be new or long-standing ones, and often the contracts are intended to support units at their home stations. These types of contracts are most often awarded by the commands responsible for building and buying the weapons or other systems. Within a service or agency, numerous contracting officers, with varying degrees of knowledge about the needs of contractors and the military in deployed locations, can award contracts that support deployed forces. Depending on the type of service being provided under a contract, contractor employees may be U.S. citizens, host country nationals, or third country nationals. Contracts to support weapons systems, for example, usually require U.S. citizens, while contractors that provide food and housing services frequently hire local nationals or third country nationals. Contractors provide the military with a wide variety of services from food, laundry, and recreation services to maintenance of the military’s most sophisticated weapons systems. DOD uses contractors during deployments because limits are placed on the number of U.S. military personnel assigned to a region, required skills may not be available in the service, or the services want to husband scarce skills to ensure that they are available for other contingencies. Contractors provide a wide range of services at deployed locations. The scope of contractor support often depends on the nature of the deployment. For example, in a relatively stable environment such as the Balkans, contractors provide base operations support services such as food, laundry, recreation, construction and maintenance, road maintenance, waste management, firefighting, power generation, and water production and distribution services. 
Contractors also provide logistics support such as parts and equipment distribution, ammunition accountability and control, and port support activities as well as support to weapons systems and tactical vehicles. In a less secure environment, as was the case shortly after U.S. forces deployed to Afghanistan, contractors principally provided support to weapons systems such as the Apache helicopter and chemical and biological detection equipment. Table 1 illustrates some types of contractor support provided at selected deployed locations. We were completing our work as the 2003 war with Iraq began and so were unable to fully ascertain the extent of contractor support to U.S. forces inside Iraq. Limits on the number of military personnel allowed in an area, called “force caps,” lead DOD to use contractors to provide support to its deployed forces. In some countries or regions the size of the force is limited due to law, executive direction, or agreements with host countries or other allies. For example, DOD has limited U.S. troops to 15 percent of the North Atlantic Treaty Organization force in Kosovo while the Philippine government limited the number of U.S. troops participating in a recent deployment to 660. Since contractors are not included in most force caps, as force levels have been reduced in the Balkans, the Army has substituted contractors for soldiers to meet requirements originally filled by soldiers. In Bosnia, for example, the Army replaced soldiers at the gate and base perimeter with contracted security guards. In Kosovo, the Army replaced its firefighters with contracted firefighters as the number of troops authorized to be in Kosovo decreased. By using contractors, the military maximizes its combat forces in an area. In some cases, DOD lacks the internal resources to meet all the requirements necessary to support deployed forces. The military services do not always have people with the specific skills needed to meet the mission. Army National Guard members deployed to Bosnia told us that they used contractors to maintain their Apache and Blackhawk helicopters because the Guard has no intermediate maintenance capability. In addition, recently fielded systems and systems still under development may have unique technical requirements for which the services have not had time to develop training courses and train service personnel. For example, when the Army’s 4th Infantry Division deployed in support of the recent war in Iraq, about one-third of the 183 contractor employees who deployed with the division did so to support the high-tech digital command and control systems still in development. Similarly, when the Air Force deployed the Predator unmanned aerial vehicle, it required contractor support because the vehicle is still in development and the Air Force has not trained service members to maintain the Predator’s data link system. In addition, some weapons systems, such as the Marine Corps’ new truck, were designed to be at least partially contractor supported from the beginning, or the services decided to use contractor support because, in DOD’s judgment, the limited number of assets made contractor support cost-effective. For example, the Army’s Guardrail surveillance aircraft is entirely supported by contractors because, according to Army officials, it was not cost-effective to develop an organic maintenance capability for this aircraft. 
The increasing reliance on the private sector to handle certain functions and capabilities has further reduced or eliminated the military’s ability to meet certain requirements internally. For example, at Air Force bases in the United States, contractors now integrate base telephone networks with local telephone systems. Since the Air Force eliminated this internal capability to integrate the base telephone network with the local telephone networks, it no longer has the military personnel qualified to perform this task at deployed locations. Also, the use of commercial off-the-shelf equipment results in an increased use of contractors. For example, the Air Force and the Navy use commercial communications systems at deployed locations in Southwest Asia and support this equipment with contractors. According to one Navy official with whom we spoke, the Navy uses contractors because it does not train its personnel to maintain commercial systems. In other cases, required skills are limited, and there is a need to conserve high-demand, low-density units for future operations. Air Force officials in Southwest Asia told us that they use contractors to maintain the generators that provide power to the bases there because the Air Force has a limited number of qualified maintenance personnel, and their frequent deployment was having a negative impact on retention. While most commanders believed that replacing service members with contractors in deployed locations had no negative impact on the training of military members, some believed that service members who did not deploy with their units were missing valuable training opportunities. We found opinions varied depending on the skill or military occupation that was being replaced. For example, commanders told us that food service personnel and communications personnel would not benefit from deploying to Bosnia and Kosovo at this time because these locations no longer replicate field conditions; rather, they more closely resemble bases in Germany or the United States. Other commanders told us that they believed that logistics personnel as well as vehicle maintenance personnel were missing the opportunity to work in high volume situations in a more intense environment. At some locations, contractor employees who work with military personnel are providing training although such training may not be a requirement of the contract. Contractors are training soldiers on systems they ordinarily would not be exposed to, such as specially modified high mobility multipurpose wheeled vehicles (Humvees) in Bosnia and commercial power generators in Kuwait. They also train soldiers to operate and maintain the newest technologies, such as computers and communications systems supporting intelligence operations in Southwest Asia. Training consists not only of hands-on experience but often of structured classes as well. Contractors provide DOD with a wide variety of services at deployed locations, and while DOD uses contractors as part of the total force mix and recognizes the need to continue essential contractor services during crises, it has not included them in operational and strategic planning. DOD policy requires its components to annually review all contractor services, including new and existing contracts, to determine which services will be essential during crisis situations. 
Where there is reasonable doubt that the contractor will continue to provide essential services during crisis situations, the cognizant component commander is required to prepare a contingency plan for obtaining the essential service from alternate sources. However, we found that the required contract reviews were not done, and there were few backup plans. Many commanders assumed that other contractors or military units would be available to provide the essential service if the original contractors were no longer available. However, the commanders had no way of knowing if these assets would actually be available when needed. Additionally, DOD has not integrated its contractor workforce into its human capital strategy. As early as 1988, DOD noted the lack of a central policy or an oversight mechanism for the identification and management of essential contractor services. A DOD Inspector General report, issued in November 1988, noted that DOD components could not ensure that the emergency essential services performed by contractors would continue during a crisis or hostile situation. The report also stated that there was “no central oversight of contracts for emergency essential services, no legal basis to compel contractors to perform, and no means to enforce contractual terms.” The report recommended that all commands identify (1) “war-stopper” services that should be performed exclusively by military personnel and (2) those services that could be contracted out if a contingency plan existed to ensure continued performance should the contractor not perform. DOD concurred with the report’s findings and recommendations and drafted a directive to address them. This effort led to the issuance of DOD Instruction 3020.37, in November 1990, which addresses the continuation of essential contractor services during crisis situations. In 1991, the Department of Defense Inspector General reported on this issue again. The Inspector General reported that generally “contingency plans did not exist to ensure continued performance of essential services if a contractor defaulted during a crisis situation.” The Inspector General’s report also stated that there was no central policy or oversight for the identification and management of essential services until DOD Instruction 3020.37 was issued. The Inspector General’s report noted that none of the major or subordinate commands visited could provide data concerning all contracts vital to combat or crisis operations. The report concluded that although DOD’s instruction provided the needed central policy that promotes the continuation of emergency essential services during crises and hostile situations, the instruction needed revision to provide additional assurances such as the identification of war-stopper services and an annual reporting system identifying the numbers of emergency essential contracts and their attendant personnel. DOD concurred with the report’s findings but believed that since DOD Instruction 3020.37 had just been issued, the services and agencies should be given time to implement it. 
DOD Instruction 3020.37 assigns responsibilities and prescribes procedures to implement DOD policy to assure that components (1) develop and implement plans and procedures that are intended to provide reasonable assurance of the continuation of essential services during crisis situations and (2) prepare a contingency plan for obtaining the essential service from alternate sources where there is a reasonable doubt about the continuation of that service. Responsibility for ensuring that all contractor services, including new and existing contracts, are reviewed annually to determine which services will be essential during crisis situations rests with the heads of DOD components. They must also conduct an annual assessment of the impact of the unexpected or early loss of essential contractor services on the effectiveness of support to mobilizing and deployed forces. The results of these assessments are to be included in the affected contingency or operations plans. Planning procedures for component activities using essential contractor services are specified in DOD Instruction 3020.37. The component is to identify services that are mission essential and designate them in the contract statement of work. Where a reasonable assurance of continuation of essential contractor services cannot be attained, the component activity commander is to do one of three things. The first is to obtain military, DOD civilian, or host nation personnel to perform the services concerned, and, in consultation with legal and contracting personnel, determine the proper course of action to transition from the contractor-provided services. The second is to prepare a contingency plan for obtaining the essential services from other sources if the contractor does not perform in a crisis. The third option for the commander is to accept the risk associated with a disruption of the service during a crisis situation. Figure 2 shows the essential planning process required by DOD Instruction 3020.37. DOD has also directed regional combatant commanders to identify contractors providing mission essential services and develop plans to mitigate their possible loss. In late 2002, the Joint Staff modified the logistics supplement to the Joint Strategic Capabilities Plan to require the development of a mitigation plan that details transitioning to other support should commercial deliveries and/or support become compromised. This was partly in response to problems with fuel deliveries in Afghanistan during Operation Enduring Freedom. Also, Joint Staff guidance for the development of operational plans by the regional combatant commanders requires that those plans identify mission essential services provided by contractors and identify the existence of any contingency plans to ensure these services continue. As noted earlier, DOD Instruction 3020.37 was issued in response to a 1988 DOD Inspector General report, and in 1991 DOD stated that the components should be given time to implement it. However, as of April 2003, 12 years later, we found little evidence that the DOD components are implementing the DOD Instruction. The heads of DOD components are required by the instruction to ensure that the instruction’s policies and procedures are implemented by relevant subordinate organizations. However, none of the services are conducting the annual review to identify mission essential services that are being provided by contractors. 
Service and combatant command officials we spoke with were generally unaware of the requirement to review contracts annually and identify essential services. None of the regional combatant commands, service component commanders, or installations visited during our review had an ongoing process for reviewing contracts as required by DOD Instruction 3020.37. Without identifying mission essential contracts, commanders do not know what essential services could be at risk during operations. Furthermore, the commanders cannot determine when backup plans are needed, nor can they assess the risk they would have to accept with the loss of contractor services. One Air Force official indicated that our visit had prompted a review of the command’s contracts to identify those that provided essential services and that he became aware of this requirement only when we asked about compliance with the instruction. Additionally, DOD has limited knowledge of the extent to which DOD Instruction 3020.37 is being implemented. The instruction states that an office within the Office of the Secretary of Defense will “periodically monitor implementation of this instruction.” However, we found no evidence that the required monitoring had ever taken place. In discussions with the office that has primary responsibility for the instruction (located in the Office of the Under Secretary of Defense for Personnel and Readiness), we were told that the monitoring process is informal and that, because DOD components have not advised the office of any significant problems in implementing the instruction (as the instruction requires), the office assumes that the instruction is being implemented. We found little in the way of backup plans to replace mission essential contractor services during crises if necessary. This is not surprising since a prerequisite to developing a backup plan is the identification of those contracts that provide essential services. Many of the people we talked to assumed that the personnel needed to continue essential services would be provided, either by other contractors or by organic military capability, and did not see a need for a formal backup plan. The only written backup plan that we found was for maintenance of the Air Force’s C21J executive aircraft. According to the plan, if contractors are unavailable, Air Force personnel will provide maintenance. However, according to Air Force officials, no one in the Air Force is trained to maintain this aircraft. Our review of unclassified portions of operations plans addressing logistics support revealed no backup planning. For example, in our review of the logistics portion of the operations plan for the war in Iraq, which addresses contracting, we found that there were no backup plans should contractors become unavailable to provide essential services. The plan provides guidance on certain aspects of contracting, such as the creation of a joint contracting cell, but there is no language pertaining to backup plans. In addition, our review of operations plans for the Balkans did not identify any reference to plans for the mitigation of the loss of contractor support. In response to our questions about a lack of backup plans, many DOD officials noted that contractors have always supported U.S. forces in deployed locations and said they expect that to continue. 
While most of the contractor personnel we spoke with in the Persian Gulf indicated that they would remain in the event of war with Iraq, they cannot be ordered to remain in a hostile environment or to replace other contractors that choose not to deploy. DOD can initiate legal action against a contractor for nonperformance, but the mission requirement the contractor was responsible for remains unmet. Assuming that existing contractor employees will be available to perform essential services may not always be realistic. Reasons for the loss of contractor support can extend beyond contractors refusing to deploy to or remain in the deployed location. Contractors could be killed (seven contractor employees were killed in the 1991 Gulf War) or incapacitated by hostile action, accident, or other unforeseen events. Furthermore, there is no guarantee that a contractor will be willing to deploy to replace the original contractor. Should contractors become unavailable, many of the people we talked to assumed that the personnel needed to continue essential services would be provided either by other contractors or organic military capability, or they would do without the service. However, these assumptions have not been vetted, and key questions remain. The ability to replace an existing contractor with a new one can depend on the type of support being provided. Assumptions that military resources will be available may not recognize that multiple commands may be relying on the same unit as a backup, and that the unit therefore may not be available, or that organic capability may not exist. As we noted earlier, the lack of organic capability is one reason that DOD uses contractors. The Air Force’s lack of in-house maintenance capability for its C21J aircraft mentioned earlier and the Army’s total dependence on contractor support for all its fixed-wing aircraft are examples of the lack of organic capability. For some contracts, comparably skilled contractor personnel may not be available from other companies. For example, we were told at one location that only certain contractors have access to proprietary technical and backup data from the manufacturers of specific aircraft or systems. Additionally, the contracted services required for military operations may also be needed by others. For example, shortages of qualified linguists to support Operation Enduring Freedom in Afghanistan delayed interrogations and signals exploitation. Among the reasons given for the shortage were the competing demands of other government agencies for the same skills. If the decision to do without the essential service is made, the risk associated with this decision must be examined and determined to be acceptable, particularly in light of the reliance on contractors. Without contractor support certain missions would be at risk. For example, Task Force Eagle in Bosnia relies on contracted linguistic and intelligence analyst services. We were told that losing these contracted services would be an immediate, critical loss for the military because DOD does not have service personnel with these skills. Another example is the biological detection equipment the Army deployed to Afghanistan in October 2001. The equipment is operated by Army personnel but is entirely dependent upon contractor support for maintenance in the field. The loss of this contractor support would adversely affect the Army’s ability to detect biological threats at deployed locations. 
“The total force policy is one fundamental premise upon which our military force structure is built. It was institutionalized in 1973 and … as policy matured, military retirees, DOD personnel, contractor personnel, and host-nation support personnel were brought under its umbrella to reflect the value of their contributions to our military capability.” Furthermore, DOD policy states “the DOD Components shall rely on the most effective mix of the Total Force, cost and other factors considered, including active, reserve, civilian, host-nation, and contract resources necessary to fulfill assigned peacetime and wartime missions.” While DOD policy may consider contractors as part of the total force, its human capital strategy does not. As we recently reported, DOD has not integrated the contractor workforce into its overall human capital strategic plans. The civilian plan notes that contractors are part of the unique mix of DOD resources, but the plan does not discuss how DOD will shape its future workforce in a total force context that includes contractors. This situation is in contrast to what studies on human capital planning at DOD have noted. For example, the Defense Science Board’s 2000 report on Human Resources Strategy states that DOD needs to undertake deliberate and integrated force shaping of the civilian and military forces, address human capital challenges from a total force perspective, and base decisions to convert functions from military to civilians or contractors on an integrated human resources plan. In addition, the National Academy of Public Administration noted that as more work is privatized and more traditionally military tasks require support of civilian or contractor personnel, a more unified approach to force planning and management will be necessary; serious shortfalls in any one of the force elements (military, civilian, or contractor) will damage mission accomplishment. DOD disagreed with our March 2003 recommendation that it develop a departmentwide human capital strategic plan that integrates both military and civilian workforces and takes into account contractor roles. In disagreeing, DOD said that it presently has both a military and civilian plan; the use of contractors is just another tool to accomplish the mission, not a separate workforce, with separate needs, to manage. The intent of our recommendation is that strategic planning for the civilian workforce be undertaken in the context of the total force—civilian, military, and contractors—because the three workforces are expected to perform their responsibilities in a seamless manner to accomplish DOD’s mission. We continue to believe that strategic planning in a total force context is especially important because the trend toward greater reliance on contractors requires a critical mass of civilian and military personnel with the expertise necessary to protect the government’s interest and ensure effective oversight of contractors’ work. Integrated planning could also facilitate achieving a goal in the Quadrennial Defense Review to focus DOD’s resources (personnel) in those areas that directly contribute to war fighting and to rely on the private sector for non-core functions. Guidance at the DOD, combatant-command, and service levels regarding the use of contractors to support deployed forces varies widely as do the mechanisms for managing these contractors, creating challenges that may hinder a commander’s ability to oversee and manage contractors efficiently. 
There is no DOD-wide guidance that establishes baseline policies to help ensure the efficient use of contractors that support deployed forces. The Joint Staff has developed general guidance for regional combatant commanders. At the service level, only the Army has developed comprehensive guidance to help commanders manage deployed contractors effectively. Furthermore, there is little or no visibility of contractors or contracts at the regional combatant or service component command level. As a result, contractors have arrived at deployed locations unbeknownst to the ground commander and without the government support they needed to do their jobs. Moreover, ground commanders have little visibility over the totality of contractors that provide services at their installations, causing concerns regarding safety and security. Guidance for issues that impact all the components originates at the DOD level. Typically, DOD will issue a directive—a broad policy document containing what is required to initiate, govern, or regulate actions or conduct by DOD components. This directive establishes a baseline policy that applies across the combatant commands, services, and DOD agencies. DOD may also issue an instruction, which implements the policy or prescribes the manner or a specific plan or action for carrying out the policy, operating a program or activity, and assigning responsibilities. For example: DOD Directive 2000.12 establishes DOD’s antiterrorism and force protection policy. DOD Instruction 2000.16 establishes specific force protection standards pursuant to the policy established by DOD Directive 2000.12. In the case of contractor support for deployed forces, we found no DOD-wide guidance that establishes any baseline policy regarding the use of contractors to support deployed forces or the government’s obligations to these contractors. However, there are varying degrees of guidance at the joint and service level to instruct commanders on the use of contractors. The Joint Staff has developed guidance for regional combatant commanders. Joint Publication 4-0, Doctrine for Logistic Support of Joint Operations, “Chapter V, Contractors in the Theater” sets forth doctrine on the use of contractors and provides a framework for addressing contractor support issues. The Joint Publication describes the regional combatant commander’s general responsibilities, including integration of contractors as part of the force as reflected in the Time-Phased Force and Deployment Data, logistics plans, and operation plans; compliance with international, U.S., and host nation laws and determination of restrictions imposed by international agreements on the status of contractors; establishment of theater-specific requirements and policies for contractors and communication of those requirements to the contractors; and establishment of procedures to integrate and monitor contracting activities. No single document informs the combatant commander of his responsibilities with regard to contractors. Rather, there is a variety of guidance that applies to contractors and appears in joint or DOD publications. For example, in addition to Joint Publication 4-0, the following DOD documents address contractors at deployed locations: DOD Directive 2000.12 and DOD Instruction 2000.16 define the antiterrorism and force protection responsibilities of the military. These include force protection responsibilities to contractors as well as requirements placed on contractors who deploy. 
Joint Publication 3-11 includes a requirement that mission-essential contractors be provided with chemical and biological survival equipment and training. DOD Directive 4500.54 requires all non-DOD personnel traveling under DOD sponsorship to obtain country clearance. While the directive does not specify contractors, it does apply to them, further complicating the ability of a commander to become aware of this responsibility. Joint Publication 4-0 only applies to combatant commanders involved in joint operations. However, at the regional combatant commands we visited, contracting, logistics, and planning officials were not implementing the Joint Publication. At the service level, only the Army has developed comprehensive guidance to help commanders manage contractors effectively. As the primary user of contractors while deployed, the Army has taken the lead in formulating policies and doctrine addressing the use of contractors in deployed locations. Army regulations, field manuals, and pamphlets provide a wide array of guidance on the use of contractors. The following are examples: Army Regulation 715-9—Contractors Accompanying the Force—provides policies, procedures, and responsibilities for managing and using contracted U.S. citizens who are deployed to support Army requirements. Army Field Manual 3-100.21—Contractors on the Battlefield—addresses the use of contractors as an added resource for the commander to consider when planning support for an operation. Its purpose is to define the role of contractors, describe their relationships to the combatant commanders and the Army service component commanders, and explain their mission of augmenting operations and weapons systems support. It is also a guide for Army contracting personnel and contractors in implementing planning decisions and understanding how contractors will be managed and supported by the military forces they augment. Army Pamphlet 715-16—Contractor Deployment Guide—informs contractor employees, contracting officers, and field commanders of the current policies and procedures that may affect the deployment of contractors. The guide focuses on the issues surrounding a U.S. citizen contractor employee who is deploying from the United States to a theater of operation overseas. These documents provide comprehensive and detailed direction to commanders, contracting personnel, and contractors on what their roles and responsibilities are and how they should meet them. Officials we spoke with at various levels of the Army were generally aware of the Army’s guidance. For example, in Kosovo we received a briefing from the commander of the Area Support Group that included the applicable Army guidance on the use of contractors in deployed locations. Additionally, the Army Materiel Command has established a Web site that contains links to primary and secondary documents that provide guidance on the use of contractors on the battlefield. The other services make less use of contractors to support deployed forces. Nevertheless, their contractors provide many of the same services as the Army’s contractors, often under similarly austere conditions at the same locations, and therefore have force protection and support requirements similar to those of Army contractors. For example, both Air Force and Army contractors work at bases in Kuwait and do not have significant differences in their living and working conditions or the types of threats they face. 
Also, it is not uncommon to find Air Force contractors deployed in support of the other services, as is the case in Bosnia, where Air Force contractors maintain the Army’s Apache and Blackhawk helicopters. However, the other services have not developed the same level of guidance as the Army to guide commanders and contracting personnel on how to meet those requirements. Like the Army, the Air Force uses contractors for base operations support (including security, trash removal, and construction services) in deployed locations. Contractors also provide many essential services to Air Force units deployed to Bosnia and Southwest Asia. In Southwest Asia, contractors provide support for base communications systems, systems that generate the tactical air picture for the Combined Air Operations Center, and maintenance support for both the Predator unmanned aerial vehicle and the data links it uses to transmit information. In 2001, the Air Force issued a policy memorandum addressing the use of contractors in deployed locations. The purpose of the memorandum is to provide consistent and uniform guidance on the use of U.S. contractor personnel to augment the support of Air Force operations in wartime and contingency operations. For example, the memorandum states as follows: Any determination regarding commercial support must consider the essential services that must be maintained and the risks associated with contractor non-performance. Contractors may be provided force protection and support services such as housing and medical support commensurate with those provided to DOD civilians, if the contract requires it. Contractors should not be provided uniforms or weapons. However, the Air Force has not developed the guidance to instruct its personnel on how to implement this policy. For example, the Air Force does not have a document comparable to the Army’s Contractor Deployment Guide to instruct contracting personnel or contractor employees on deployment requirements such as training, medical screening, and logistical support. The Navy and the Marine Corps have also not developed much guidance on dealing with contractors in deployed locations. The Marine Corps has issued an order addressing the use of contractors, which is limited to a statement that contractor personnel should not normally be deployed forward of the port of debarkation and that contractor logistics support requirements be identified and included in all planning scenarios. This guidance only addresses contractor support for ground equipment, ground weapons systems, munitions, and information systems. As with the Air Force memorandum, the Marine Corps does not have the guidance in place to instruct personnel on how to implement this order. The Navy does not have any guidance related to contractor support of deployed forces. Navy officials stressed that because most Navy contractors are deployed to ships, many of the issues related to force protection and levels of support do not exist. Nevertheless, some contractors do support the Navy ashore and therefore may operate in an environment similar to contractors supporting the Army. In fact, of the seven contractors killed in the 1991 Persian Gulf War, three were working for the Navy. Furthermore, we learned that there have been issues with the support of contractors deployed on ships. For example, officials at the Navy’s Space and Naval Warfare Systems Command told us they were not sure whether the Navy was authorized to provide medical treatment to its contractors deployed on ships. 
The differences in the DOD and service guidance can lead to sometimes contradictory requirements, complicating the ability of commanders to implement that guidance. For example, guidance related to providing force protection to contractor personnel varies significantly. Joint guidance states that force protection is the responsibility of the contractor; Army guidance places that responsibility with the commander; and Air Force guidance treats force protection as a contractual matter, specifically, as follows: Joint Publication 4-0, “Chapter V,” states “Force protection responsibility for DOD contractor employees is a contractor responsibility, unless valid contract terms place that responsibility with another party.” Army Field Manual 3-100.21 states, “Protecting contractors and their employees on the battlefield is the commander’s responsibility. When contractors perform in potentially hostile or hazardous areas, the supported military forces must assure the protection of their operations and employees. The responsibility for assuring that contractors receive adequate force protection starts with the combatant commander, extends downward, and includes the contractor.” The Air Force policy memorandum states, “The Air Force may provide or make available, under terms and conditions as specified in the contract, force protection … commensurate with those provided to DOD civilian personnel to the extent authorized by U.S. and host nation law.” As a result, the combatant commander does not have a uniform set of requirements he can incorporate into his planning process but instead has to work with requirements that vary according to the services and the individual contracts. In fact, an official on the Joint Staff told us that the combatant commanders have requested DOD-wide guidance on the use of contractors to support deployed forces to establish a baseline that applies to all the services. Many of the issues discussed in the balance of this report, such as the lack of standard contract language related to deploying contractors, the lack of visibility over contractors, and inadequate support to deployed contractors, stem in part from the varying guidance at the DOD and service levels. According to DOD officials, DOD is in the initial phase of developing a directive that will establish DOD policy with regard to managing contractors in deployed locations as well as a handbook providing greater detail. The officials expect this guidance to be issued by the end of 2003. DOD officials involved stated this guidance would bring together all DOD policies that apply to contractors who support deployed forces and clarify DOD policy on issues such as force protection and training. These officials indicated that the DOD directive and handbook would be based on the Army guidance on the use of contractors to support deployed forces. There is no standard contract language applicable DOD-wide (such as in the Defense Federal Acquisition Regulation Supplement) related to the deployment and support of contractors that support deployed forces. Contracting officers therefore may not address potential requirements related to deployments or may use whatever deployment language they believe to be appropriate, which may not address the necessary deployment requirements. The Defense Acquisition Deskbook Supplement entitled Contractor Support in the Theater of Operations includes suggested clauses for contracts in support of deployed forces. 
However, these clauses are not mandatory and did not appear to be widely known by contracting officers. As a result, there is no common baseline of contract language specifically addressing deployment that is required for contracts that may support deployed forces and no assurance that all of these contracts will properly address deployment requirements. The degree to which individual contracts adequately address deployment requirements varies widely. System support contracts are often written before the need to deploy is identified, and the contracting officer may not have considered the possibility of deployment. Also, some weapons systems are being deployed before they are fully developed, and deployment language was not included in the development contracts. Some of the system support contracts we looked at did not include language clearly specifying that contractors may need to deploy to hostile and austere locations to provide support to deployed forces, as in the following examples: The contract for an Army communications system needed to be modified when the system was relocated from Saudi Arabia to Kuwait (and would need to be modified again if the system were brought into Iraq) because the contract did not contain provisions for deployment to other locations. The Air Force Predator unmanned aerial vehicle contract did not envision deployment since the Predator was developed as an advanced concept technology demonstration project. An engineering support contract for the Navy did not contain a specific deployment clause but only stated that the contractor must support the Navy ashore or afloat. The Army’s Combined Arms Support Command found a similar situation when it reviewed system support contracts for the 4th Infantry Division. The 4th Infantry Division is the Army’s first digitized division and serves as the test bed for the latest command and control systems, many of which are still under development. The Combined Arms Support Command study reviewed 89 contracts that supported the division. The command determined that 44 of the 89 contracts would likely require that contractor personnel be deployed and found that 21 of the 44 had either no deployment language or vague deployment language. However, this did not impede the division’s deployment for Operation Iraqi Freedom. According to Army officials, 183 contractor employees prepared to deploy in support of the 4th Infantry Division’s deployment, including some whose contracts were noted in the 4th Infantry Division study as having had either no deployment language or vague deployment language. To ensure that problems do not arise when units deploy, the Army has taken steps to address some of the issues identified in the study. Specifically, in 2002, the Assistant Secretary of the Army for Acquisition, Logistics, and Technology issued the following memorandums: A January 2002 memorandum stating that development contracts that provide contractor support personnel shall contain appropriate deployment guidance if those personnel have any likelihood of being deployed outside the United States. A June 2002 memorandum stating that Program Executive Officers and Program Managers should strive to develop systems that do not require contractor support in forward deployed locations. Military officials we spoke with told us that the lack of specific deployment language in contracts could increase the time it would take to get contractor support to deployed forces as well as the cost of that support. 
For example, the contract for support of the Army’s prepositioned equipment in Qatar did not include language that provided for a potential deployment to Kuwait. As a result, when the need arose to move the equipment to Kuwait, the contract needed to be modified. (The cost of the modification was $53 million, although it is not clear what amount, if any, the government could have saved had deployment language already been included in the contract.) Contracts may also lack language to enforce policies pertaining to contractors in deployed locations. For example, Army policy requires that contractors follow all general orders and force protection policies of the local commander. However, these requirements were not always written into the contract documents and thus may not be enforceable. In such situations, commanders may not have the ability to control contractor activities in accordance with general orders. For example, judge advocate officials in Bosnia expressed their concern that the base commander was not authorized to prevent contractor personnel from entering a local mosque in a high-threat environment. These officials suggested that commanders should always be able to control contractor activities where matters of force protection are concerned. Several officials indicated that many of these issues could be addressed if DOD implemented a policy that required all contracts that support deployed forces to include language that applies the general orders and force protection policies of the local commanders to contractor employees. DOD has established specific policies on how contracts, including those that support deployed forces, should be administered and managed. Oversight of contracts ultimately rests with the contracting officer, who is responsible for ensuring that contractors meet the requirements set forth in the contract. However, most contracting officers are not located at the deployed locations. As a result, contracting officers appoint monitors who represent the contracting officer at the deployed location and are responsible for monitoring contractor performance. How contracts and contractors are monitored at a deployed location is largely a function of the size and scope of the contract. Contracting officers for large-scale and high-value contracts, such as the Air Force Contract Augmentation Program, the Army’s Logistics Civil Augmentation Program, and the Balkan Support Contract, have opted to have personnel from the Defense Contract Management Agency oversee contractor performance. These onsite teams include administrative contracting officers who direct the contractor to perform work and quality assurance specialists who ensure that the contractors perform work to the standards written in the contracts. For smaller contracts, contracting officers usually appoint contracting officer’s representatives or contracting officer’s technical representatives to monitor contractor performance at deployed locations. These individuals are not normally contracting specialists and serve as contracting officer’s representatives as an additional duty. They cannot direct the contractor by making commitments or changes that affect price, quality, quantity, delivery, or other terms and conditions of the contract. Instead, they act as the eyes and ears of the contracting officer and serve as the liaison between the contractor and the contracting officer. 
At the locations we visited, we found that oversight personnel were generally in place and procedures had been established to monitor contractor performance, but some issues were identified. The officials we spoke with expressed their satisfaction with contractor performance and with the level of oversight provided for the contracts under their purview. However, officials mentioned several areas where improvements to the oversight process could be made. One area involved training of contracting officer’s representatives. While the contracting officer’s representatives we spoke with appeared to be providing appropriate contract oversight, some stated that training before they assumed these positions would have better prepared them to effectively oversee contractor performance. The Defense Federal Acquisition Regulation Supplement requires that they be qualified by training and experience commensurate with the responsibilities to be delegated to them. However, not all contracting officer’s representatives were receiving this training. For example, most of the contracting officer’s representatives we met with in Southwest Asia had not received prior training. As a result, they had to learn on the job, taking several weeks before they could efficiently execute their responsibilities, which could lead to gaps in contractor oversight. Another area for improvement involved familiarizing commanders with the use of contractors. Several of the contracting officials we met with in the Balkans and Southwest Asia stated there was a lack of training or education for commanders and senior personnel on the use of contractors, particularly with regard to directing contractor activities and the roles of contract monitors such as the Defense Contract Management Agency and contracting officer’s representatives, as illustrated in the following examples: An Air Force commander sent a contractor from Kuwait to Afghanistan without going through the appropriate contracting officer. The contractor was ultimately recalled to Kuwait because the contract contained no provision for support in Afghanistan. A Special Operations Command official told us commanders were unfamiliar with the Defense Contract Management Agency and believed that the agency represented the contractor and not the military. An Army official told us that commanders sometimes do not know that they are responsible for requesting and nominating a contracting officer’s representative for contracts supporting their command. Some efforts are being made to address this issue. For example, U.S. Army, Europe includes contract familiarization during mission rehearsal exercises for Balkan deployments. We also found that the frequent rotation of personnel into and out of a theater of operation (particularly in Southwest Asia) resulted in a loss of continuity in the oversight process, as incoming oversight personnel had to familiarize themselves with their new responsibilities. We previously reported on the impact of frequent rotations in and out of the theater. In response to a recommendation made in our 2000 report, the Defense Contract Management Agency changed its rotation policy. According to officials we met with in the Balkans and Southwest Asia, the Defense Contract Management Agency now staggers the rotation of its contract administration officials at deployed locations such as the Balkans and Southwest Asia to improve continuity and oversight. 
However, the issue of personnel rotation and its impact on contractor oversight remains for other oversight officials. For example, the program manager of a major Army contract in Qatar indicated that it would be beneficial if Army personnel overseeing the contract were deployed for a longer period of time in order to develop a more durable relationship. In addition, Air Force officials in Qatar indicated they were planning to increase the number of longer-term deployments for key leadership positions, including contracting positions, to help alleviate some of their continuity issues. Some commands have established policies and procedures to provide additional tools to help manage contractors more efficiently, as in the following examples: U.S. Army, Europe established a joint acquisition review board during contingency operations. This board validates requirements for all proposed expenditures over $2,500. The board also determines if the requirement is best met using contractor support, host nation support, or troop labor. The policy stipulates that U.S. Army, Europe headquarters must review expenditures over $50,000. U.S. Army, Europe has established standards for facilities and support to soldiers in contingency operations. These standards specify the level of quality of life support (i.e., type of housing, size of chapels, provision of recreational facilities, and other amenities) based on the number of U.S. troops at the deployed location. Variations from these standards have to be approved by the U.S. Army, Europe deputy commanding general. Officials told us these standards helped to limit the growth of contractor services. Limited awareness among service and combatant command officials of all contractor activity supporting their operations can hamper their ability to carry out oversight and management responsibilities with regard to contractors supporting deployed forces. This limited awareness exists because the decision to use contractors to provide support at a deployed location can be made by any number of requiring activities both within and outside the area of operations. As discussed earlier, contracts to support deployed forces can be awarded by many organizations within DOD or by other federal agencies. Figure 3 illustrates the broad array of contractor services being provided in Bosnia and the government agency that awarded each contract. Bosnia is one of the few places we visited where contract information is collected centrally, giving the commander visibility over much of the contracting activity. Commanders at other locations we visited did not have this information readily available to them. Because the decision to use contractors is not coordinated at the regional combatant commands or the component commands other than in Bosnia, no one knows the totality of contractor support being provided to deployed forces in an area of operation. Despite the lack of visibility and involvement in decisions to use contractors, commanders are responsible for all the people in their area of responsibility, including contractor personnel. This lack of visibility over contractor personnel inhibits their ability to resolve issues associated with contractor support. Contractor visibility is needed to ensure that the overall contractor presence in a theater is synchronized with the combat forces being supported and that adjustments can be made to contractor support when necessary. 
Additionally, in order to provide operational support and force protection to participating contractors, DOD needs to maintain visibility of all contracts and contractor employees. When commanders lack visibility, problems can arise. For example, one contractor told us that when his employees arrived in Afghanistan, shortly after the beginning of Operation Enduring Freedom, the base commander had not been informed that they were arriving and could not provide the facilities they needed to maintain the biological identification equipment that they were contracted to maintain. Also, the lack of visibility may inhibit a commander’s understanding of the impact of certain force protection decisions. For example, if there is an increased threat at a base and security is increased, third country nationals may be barred from entering the base. Third country nationals often provide services important to the quality of life of deployed soldiers, such as preparing and serving food and providing sanitation services. Without visibility over the totality of contractor support to his command, the commander may not know which support services rely heavily on third country nationals and is therefore less able to identify and mitigate the effects of losing that support. Limited visibility of all contractor activity can create a variety of problems for ground commanders. Commanders may not be aware of the total number of contractor personnel on their installations at any point in time or what they are doing there. In Southwest Asia, this situation is further complicated by the fact that many of the contractor employees are third country nationals, which can increase security concerns. While many officials at sites we visited indicated that they maintain accountability for their contractors by tightly controlling the process by which contractors receive their identification badges, we found that problems remained, as illustrated in the following examples: In Kosovo, we found that badges were issued at multiple locations and provided access to multiple bases. This situation means a contractor employee could receive a badge at one site and come onto a different base without the base commander knowing who the employee was or why the employee was there. Temporary badges (for visits of 30 days or less) at Eagle Base in Bosnia have no pictures. The lack of photos means that anyone could use the badge to gain access to the base. The contracting officer’s representative for a forward base in Kuwait told us that contractor personnel have simply shown up without any advance notification and that he had to track down other officials to determine why the contractors were there. Commanders may also be responsible for providing contractor employees with certain benefits and entitlements included in their contracts. The commanders’ ability to meet these requirements (including providing chemical and biological protective gear, military escorts, billeting, and medical support) is hindered by their lack of visibility over the totality of contractor presence on their base. In addition, commanders may not be able to account for all their contractor personnel in the event of an attack on a base. Similarly, should issues such as those concerning “Gulf War Syndrome” arise, DOD may be unable to determine whether contractor personnel were in a location where they might have been exposed to potentially harmful substances. As a result, DOD may have no way to verify contractor personnel’s claims of health effects resulting from such exposure. 
We also found that, at some bases, commanders do not have copies of all the contracts in effect on their base, as the following examples illustrate: U.S. Army Pacific Command officials told us it took several weeks for them to obtain the applicable contract terms to resolve questions regarding medical care for contractor employees in the Philippines because no one in the command had a copy of the contract. In the Balkans, some contractors and federal agencies refused to provide copies of their contracts to the task force officials. We first reported this problem in May 2002. At that time we recommended that the Secretary of Defense direct all components to forward to the executive agent for operations in a geographical area, such as the Balkans, a copy of all existing and future contracts and contract modifications. DOD concurred with this recommendation and agreed to modify its Financial Management Regulation to require that a biannual report outlining the contracts be provided to the area executive agent. The biannual report was limited, however, to contracts that used contingency appropriations for funding and did not include contracts that use a service’s base program funds. However, Balkans operations are no longer being funded using contingency funds and would therefore not be included under the new financial management regulation. As of April 15, 2003, the change to the Financial Management Regulation had not been implemented. In addition, as we reported in May 2002, lack of visibility over contracts hinders DOD’s ability to compare contracts and identify potential duplication of services or ensure that contractors are only receiving those services to which they are entitled. Risk is inherent when relying on contractors to support deployed forces. DOD recognized this risk when it issued DOD Instruction 3020.37, which requires the services to determine which contracts provide essential services and either develop plans for continued provision of those services during crises or assume the risk of not having the essential service. However, neither DOD nor the services have taken steps to ensure compliance with this instruction. While most contractors would likely deploy or remain in a deployed location if needed, there are many other reasons contractors may not be available to provide essential services. Without a clear understanding of the potential consequences of not having the essential service available, the risks associated with the mission increase. There are no DOD-wide policies on the use of contractors to support deployed forces. As a result there is little common understanding among the services as to the government’s responsibility to contractors and contractor personnel in the event of hostilities. This lack of understanding can cause confusion at the deployed location and makes managing contractors more difficult because commanders often have contractors from several services at their location with different requirements, understandings, and obligations. No standard contract language exists for inclusion in contracts that may involve contractors deploying to support the force. Therefore, we found that contracts have varying and sometimes inconsistent language addressing deployment requirements. For example, some contracts do not contain any language related to the potential requirement to deploy while others include only vague references to deployment. The lack of specific language can require adjustments to the contract when deployment requirements are identified. 
The need to negotiate contract adjustments in the face of an immediate deployment can result in increased costs to the government and may delay contractor support. The lack of contract training for commanders, senior personnel, and some contracting officer’s representatives can adversely affect the effectiveness of the use of contractors in deployed locations. Without training, many commanders, senior military personnel, and contracting officer’s representatives are not aware of their roles and responsibilities in dealing with contractors. Most commanders at the locations we visited had only limited visibility and limited understanding of the extent and types of services being provided by contractors. The lack of visibility over the types and numbers of contractors limits the contract oversight that can be provided and hampers the commander’s ability to maintain accountability of contractors. Without this visibility there is no assurance that commanders understand the full extent of their operational support, life support, and force protection responsibilities to contractors, and there is no way to assure that contractors do not receive services they are not entitled to receive. Additionally, without this visibility commanders cannot develop a complete picture of the extent to which they are reliant on contractors to perform their missions and build this reliance into their risk assessments. Moreover, while DOD agreed to provide executive agents with a biannual report outlining the contracts in use in a geographical location, it is not clear that these reports, which are required for contracts funded with contingency funds only, will provide sufficient information regarding the services that contractors are providing to deployed forces and the support and force protection obligations of the government to those contractors to improve commanders’ visibility and understanding of contractor services at their locations. To promote better planning, guidance, and oversight regarding the use of contractors to support deployed forces, we recommend that the Secretary of Defense take the following actions: Direct the heads of DOD components to comply with DOD instruction 3020.37 by completing the first review of contracts to identify those providing mission essential services. This review should be completed by the end of calendar year 2004. Direct the Undersecretary of Defense for Personnel and Readiness to develop procedures to monitor the implementation of DOD Instruction 3020.37. Develop DOD-wide guidance and doctrine on how to manage contractors that support deployed forces. The guidance should (a) establish baseline policies for the use of contractors to support deployed forces, (b) delineate the roles and responsibilities of commanders regarding the management and oversight of contractors that support deployed forces, and (c) integrate other guidance and doctrine that may affect DOD responsibilities to contractors in deployed locations into a single document to assure that commanders are aware of all applicable policies. Additionally, we recommend that the Secretary of Defense direct the service secretaries to develop procedures to assure implementation of the DOD guidance. Develop and require the use of standardized deployment language in contracts that support or may support deployed forces. 
The Defense Federal Acquisition Regulation Supplement should be amended to require standard clauses in such contracts that are awarded by DOD and to address deployment in orders placed by DOD under other agencies' contracts. This language should address the need to deploy into and around the theater, required training, entitlements, force protection, and other deployment-related issues. Develop training courses for commanding officers and other senior leaders who are deploying to locations with contractor support. Such training could provide information on the roles and responsibilities of the Defense Contract Management Agency and the contracting officer's representative, the role of the commander in the contracting process, and the limits of the commander's authority. Also, contracting officers should ensure that those individuals selected as contracting officer's representatives complete one of the established contracting officer's representative training courses before they assume their duties. To improve the commander's visibility over, and understanding of, the extent and types of services being provided by contractors, the Secretary of Defense should direct the Under Secretary of Defense (Comptroller) to implement the changes to the department's Financial Management Regulations previously agreed to, with these modifications: (a) the Financial Management Regulations should specify that the biannual report include a synopsis of the services being provided and a list of contractor entitlements; (b) the report should include all contracts that directly support U.S. contingency operations, including those funded by the services' base program accounts; and (c) the changes should be finalized by January 1, 2004. In written comments on a draft of this report, DOD agreed fully with three of our recommendations and agreed in part with three others. The department's comments are reprinted in appendix II. DOD agreed with our recommendations that it develop (1) procedures to monitor the implementation of DOD Instruction 3020.37, (2) DOD-wide guidance and doctrine on how to manage contractors that support deployed forces, and (3) standardized deployment language for contracts that support or may support deployed forces. Although DOD agreed with our recommendation regarding the need for the heads of DOD components to complete the first review of contracts to identify those providing mission essential services, it expressed concerns that the components might not be able to complete this review by the end of calendar year 2003. We amended our recommendation to incorporate this concern by extending the recommended completion date to the end of calendar year 2004. We believe a completion date is important to provide some sense of urgency. DOD also stated that the effort needed to obtain information on contracts currently in place may outweigh possible benefits and suggested alternative methods for conducting this review, including the possibility of only reviewing new contracts. However, DOD Instruction 3020.37 requires a review of all contracts, and we continue to believe that a review that fails to include all contracts would not adequately address the issues that the instruction was designed to resolve: identifying essential services provided by contractors to deployed forces and ensuring the continuation of those services should contractors not be available.
DOD also agreed with our recommendation that appropriate training should be developed for commanding officers and other senior leaders who are deploying to locations with contractor support. However, DOD stated that while Web-based training may be the appropriate medium for such training, in some cases, alternative methods could be more beneficial. We accepted DOD’s suggestion and amended the recommendation accordingly. DOD agreed with our recommendation concerning changes to the department’s Financial Management Regulations. However, DOD questioned the utility of a part of this recommendation that called for the biannual report to include a list of contractor entitlements as well as all contracts that directly support U.S. contingency operations, including those funded by the services’ base program accounts. DOD stated that the costs of making these changes to the system and collecting additional information could outweigh the perceived benefits. Further, DOD stated that the lack of collecting this information has not jeopardized the operation of any DOD mission in recent memory. DOD stated that other, less burdensome ways to ensure combatant commanders have all the necessary information for contractors that are supporting them need to be fully explored before pursuing more burdensome means, such as a costly centralized database. DOD said it would review this issue with the military departments to determine if obtaining the recommended information would be cost effective. We do not believe this recommendation would be costly or burdensome to implement. As noted in the report, the Under Secretary of Defense (Comptroller) has already agreed to amend DOD’s Financial Management Regulations to require that the components provide a biannual report outlining the existing and future contracts and contract modifications to the executive agent for operations in a geographic area, including a synopsis of services being provided. We believe that since the components will already be asked to provide the biannual reports, asking them to provide additional information summarizing contractor entitlements specified under those contracts would not substantially increase the effort required to generate these reports. This additional information would facilitate DOD’s efforts to ensure that contractors receive only the services from the government to which they are contractually entitled. While DOD expressed concern about developing a costly centralized database to generate these reports, our recommendation contained no guidance on how the reports should be generated and makes no mention of a centralized database. We agree that DOD should look for the most cost- effective way to implement the recommendation. We also continue to believe that the biannual report should include information from contracts that directly support U.S. contingency operations but are funded from the services’ base program accounts. As noted in the report, this would include contracts supporting operations in the Balkans. We do not believe that these contracts should be excluded from the report. While we did not find evidence that any DOD missions were jeopardized by not having information summarizing contractor services and entitlements, our recommendation was based on concerns raised by field commanders about oversight of contractors and the appropriate provisioning of support to contractors. 
As noted in the report, several commanders in the field told us their limited visibility of the extent and types of services being provided by contractors created challenges for them. We continue to believe that without a more thorough understanding of contractor support, commanders will continue to face difficulties in identifying potential duplication of services or ensuring that contractors are only receiving those services to which they are entitled. Therefore, we still believe the recommendation in its entirety has merit. We are sending copies of this report to the Chairman and the Ranking Minority Member, Subcommittee on Readiness, House Committee on Armed Services; other interested congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me on (757) 552-8111 or by E-mail at curtinn@gao.gov. Major contributors to this report were Steven Sternlieb, Carole Coffey, James Reid, James Reynolds, and Adam Vodraska. To identify the types of services contractors provide to deployed U.S. forces we met with officials at the Department of Defense (DOD) who have responsibility for identifying contractor needs, issuing contracts, managing contracts once they are executed, and utilizing contractors to fulfill their missions. Because there was no consolidated list of contractors supporting deployed forces we asked DOD officials at the commands and installations we visited to identify their contractor support. These commands included the Central, European, and Pacific Commands and most of their service components and major installations in Bosnia, Kosovo, Kuwait, Qatar, and Bahrain. We focused our efforts in the Balkans and Southwest Asia because they provide a broad range of contractor support activities. We were completing our work as the 2003 war with Iraq began and so were unable to fully ascertain the extent of contractor support to U.S. forces inside Iraq. The scope of our review included system and theater support contracts. We also met with officials of selected contracting commands in the Air Force, Army, and Navy and at defense agencies including the Defense Logistics Agency. These officials included contracting officers and, where applicable, their representatives at deployed locations. We examined a wide range of contracts in order to assess the diversity of contractor support. While visiting deployed locations we met with representatives of the different DOD components and contractors stationed there to determine what contractor services are used to accomplish their missions. To assess why DOD uses contractors to support deployed forces, we reviewed DOD studies and publications and interviewed DOD and contractor officials. We met with unit commanders during our visits to deployed locations to discuss the effects using contractors had on military training. We did not, however, compare the cost of contractors versus military personnel; make policy judgments as to whether the use of contractors is desirable; or look at issues related to government liability to contractors. 
To assess DOD’s efforts to identify those contractors that provide mission essential services and to maintain essential services if contractors are unable to do so, we reviewed applicable DOD Inspector General reports as well as DOD and its components’ policies, regulations, and instructions for ensuring the continuation of essential services. In particular, we reviewed DOD Instruction 3020.37, which sets forth the policies and procedures for identifying mission essential services and the steps necessary to assure the continuation of such services. We held discussions with command, service, and installation officials on the extent to which the required review of contracts to identify mission essential services had been conducted and on their backup planning should contractors not be able to perform such services for any reason. We also met with officials of the office responsible for monitoring implementation to ascertain what efforts they have undertaken. We reviewed the pertinent unclassified sections, related to contractor support, of operations plans for Iraq and the Balkans. We also discussed with deployed contractor employees their opinions of the extent of their responsibilities to continue to support military forces in crisis situations. To assess the adequacy of guidance and oversight mechanisms in place to effectively manage contractors who support deployed forces we reviewed DOD’s and its components’ policies, regulations, and instructions that relate to the use of contractors that support deployed forces. We met with officials at all levels of command to gain an understanding of contracting and the contract management and oversight processes. At the locations we visited, we asked officials their opinions of the effectiveness of existing policy in helping them manage their contractor force and asked them for suggested areas of improvement. We also reviewed and discussed with them local policies and procedures for managing their contractors. We met with DOD’s contract management officials as well as other military members to obtain their opinions of the quality of contractor-provided services and the quality of contract oversight. We also met with contractor representatives to discuss contract oversight and contract management from their perspective. Finally, we reviewed contracts that support deployed forces to assess the existence and adequacy of deployment language. The DOD organizations we visited or contacted in the United States were Office of the Secretary of Defense Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Washington, D.C. Civilian Personnel Management Service, Arlington, Va. Chairman, Joint Chiefs of Staff J-4 Logistics, Washington, D.C. Headquarters, Washington, D.C. Assistant Secretary of the Army (Acquisition, Logistics, and Technology), Falls Church, Va. Office of the Judge Advocate General, Rosslyn, Va. Army Contracting Agency, Falls Church, Va. U.S. Army Forces Command, Headquarters, Ft McPherson, Ga. 3rd Army Headquarters, Ft McPherson, Ga. 4th Infantry Division, Ft. Hood Tex. Corps of Engineers, Headquarters, Washington, D.C. Corps of Engineers, Transatlantic Programs Center, Winchester, Va. Combined Arms Support Command, Ft. Lee, Va. Communications-Electronics Command, Ft. Monmouth, N.J. Training and Doctrine Command, Ft, Monroe, Va. Operations Support Command, Rock Island, Ill. Logistics Civil Augmentation Program, Program Office, Rock Island, Ill. Army Materiel Command, Alexandria, Va. 
Network Enterprise Technology Command, Ft. Huachuca, Ariz.
Headquarters, Washington, D.C.
Naval Air Systems Command, Patuxent River, Md.
Naval Air Technical Data and Engineering Service Command, San Diego, Calif.
Naval Sea Systems Command, Washington, D.C.
Space and Naval Warfare Systems Command, San Diego, Calif.
Department of the Air Force
Office of the Assistant Secretary of the Air Force for Acquisition, Rosslyn, Va.
Air Force Materiel Command, Dayton, Ohio
F-117 Special Projects Office, Dayton, Ohio
Air Force Civil Engineer Support Agency, Tyndall Air Force Base, Fla.
Defense Logistics Agency, Ft. Belvoir, Va.
Defense Energy Support Center, Ft. Belvoir, Va.
Defense Contract Management Agency, Alexandria, Va.
Defense Contract Audit Agency, Ft. Belvoir, Va.
The geographic combatant commands and component commands we visited or contacted were:
U.S. Army Forces Central Command
U.S. Naval Forces Central Command
U.S. Central Command Air Forces
U.S. Marine Forces Central Command
U.S. Army, Europe
U.S. Air Forces in Europe
U.S. Pacific Command
U.S. Army Pacific
Pacific Air Forces
Special Operations Command Pacific
U.S. Marine Forces Pacific
U.S. Pacific Fleet
Naval Surface Forces, U.S. Pacific Fleet
Naval Air Forces, U.S. Pacific Fleet
Submarine Force, U.S. Pacific Fleet
The overseas activities and contractors we visited, by country, were:
Naval Support Activity
Naval Regional Contracting Center
USS Cardinal, MHC 60
Eagle Base, U.S. Army
Task Force Eagle, Area Support Group Eagle
Defense Contract Management Agency
Defense Contract Audit Agency, Wiesbaden
Defense Contract Management Agency, Stuttgart
Defense Energy Support Center, Wiesbaden
Defense Logistics Agency, Wiesbaden
Army Materiel Command Europe, Heidelberg
Serbia and Montenegro Province of Kosovo
Camp Bondsteel, U.S. Army
Task Force Falcon, Area Support Group Falcon
Defense Contract Management Agency
Army Materiel Command
TRW
Kellogg, Brown & Root Services
Premiere Technology Group
Engineering and Professional Services, Incorporated
Camp Monteith, U.S. Army
Camp Doha, U.S. Army
U.S. Army Kuwait
Army Corps of Engineers
Army Materiel Command
Defense Contract Management Agency
Coalition Forces Land Component Command
KGL
Raytheon Aerospace
British Link Kuwait
CSA
Ahmed Al Jaber Air Base, U.S. Air Force
Ahmed Al Jaber Air Base, Contractors: RMS, Dyncorp, Vinnell, ITT Mutual Telecommunications Services
Ali Al Salem Air Base, U.S. Air Force
Ali Al Salem Air Base, Contractors: Dyncorp, L3 Communications, TRW, General Atomics, Litton Integrated Systems, Anteon, RMS
U.S. Embassy, Doha, Qatar
Camp As Sayliyah, U.S. Army
U.S. Army Forces Central Command-Qatar
U.S. Army Materiel Command
Defense Contracting Audit Agency
Camp As Sayliyah, Contractors
Al Udeid Air Base, U.S. Air Force
379th Air Expeditionary Wing
Air Force Civil Augmentation Program, Program Office
Al Udeid Air Base, Contractors
We conducted our review between August 2002 and April 2003 in accordance with generally accepted government auditing standards. | The Department of Defense (DOD) uses contractors to provide a wide variety of services for U.S. military forces deployed overseas.
We were asked to examine three related issues: (1) the extent of contractor support for deployed forces and why DOD uses contractors; (2) the extent to which such contractors are considered in DOD planning, including whether DOD has backup plans to maintain essential services to deployed forces in case contractors can no longer provide the services; and (3) the adequacy of DOD's guidance and oversight mechanisms for managing overseas contractors efficiently. While DOD and the military services cannot quantify the totality of support that contractors provide to deployed forces around the world, DOD relies on contractors to supply a wide variety of services. These services range from maintaining advanced weapon systems and setting up and operating communications networks to providing gate and perimeter security, interpreting foreign languages, and preparing meals and doing laundry for the troops. DOD uses contractor services for a number of reasons. In some areas, such as Bosnia and Kosovo, there are limits on the number of U.S. military personnel who can be deployed in the region; contract workers pick up the slack in the tasks that remain to be done. Elsewhere, the military does not have sufficient personnel in place with the highly technical or specialized skills needed (e.g., technicians to repair sophisticated equipment or weapons). Finally, DOD uses contractors to conserve scarce skills, to ensure that they will be available for future deployments. Despite requirements established in DOD guidance (Instruction 3020.37), DOD and the services have not identified those contractors that provide mission essential services and, where appropriate, developed backup plans to ensure that essential contractor-provided services will continue if the contractor for any reason becomes unavailable. Service officials told us that, in the past, contractors have usually been able to fulfill their contractual obligations and, if they were unable to do so, officials could replace them with other contractor staff or military personnel. However, we found that this may not always be the case. DOD's agencywide and servicewide guidance and policies for using and overseeing contractors that support deployed U.S. forces overseas are inconsistent and sometimes incomplete. Of the four services, only the Army has developed substantial guidance for dealing with contractors. DOD's acquisition regulations do not require any specific contract clauses addressing the deployment of contract workers. Of 183 contractor employees planning to deploy with an Army division to Iraq, for example, some did not have deployment clauses in their contracts. This omission can lead to increased contract costs as well as delays in getting contractors into the field. At the sites that we visited in Bosnia, Kosovo, and the Persian Gulf, we found that general oversight of contractors appeared to be sufficient but that broader oversight issues existed. These include inadequate training for staff responsible for overseeing contractors and limited awareness by many field commanders of all the contractor activities taking place in their area of operations. |
In the early 1980s, Congress had concerns about a lack of adequate oversight and accountability for federal assistance provided to state and local governments. Before passage of the Single Audit Act in 1984 (the act), the federal government relied on audits of individual grants to help gain assurance that state and local governments were properly spending federal assistance. Those audits focused on whether the transactions of specific grants complied with program requirements. The audits usually did not address financial controls and were, therefore, unlikely to find systemic problems with an entity's fund management. Further, individual grant audits were conducted on a haphazard schedule, which resulted in large portions of federal funds being unaudited each year. In addition, the auditors conducting the individual grant audits did not coordinate their work with the auditors of other programs. As a result, some entities were subject to numerous grant audits each year, while others were not audited for long periods. In response to concerns that large amounts of federal financial assistance were not subject to audit and that agencies sometimes overlapped on oversight activities, Congress passed the Single Audit Act of 1984. The act stipulated that state and local governments that received at least $100,000 in federal financial assistance in a fiscal year have a single audit conducted for that year. The concept of a single audit was created to replace multiple grant audits with one audit of an entity as a whole. State and local governments that received between $25,000 and $100,000 in federal financial assistance had the option of complying with the audit requirements of the act or the audit requirements of the federal program(s) that provided the assistance. The objectives of the Single Audit Act, as amended, are to promote sound financial management, including effective internal control, with respect to federal awards administered by nonfederal entities; establish uniform requirements for audits of federal awards administered by nonfederal entities; promote the efficient and effective use of audit resources; reduce burdens on state and local governments, Indian tribes, and nonprofit organizations; and ensure that federal departments and agencies, to the maximum extent practicable, rely upon and use audit work done pursuant to the act. The Single Audit Act adopted the single audit concept to help meet the needs of federal agencies for grantee oversight as well as grantees' needs for single, uniformly structured audits. Rather than being a detailed review of individual grants or programs, the single audit is an organizationwide financial statement audit that includes the audit of the Schedule of Expenditures of Federal Awards (SEFA) and also focuses on internal control and the recipient's compliance with laws and regulations governing the federal financial assistance received. The act also required that grantees address material noncompliance and internal control weaknesses in a corrective action plan, which is to be submitted to appropriate federal officials. The act further required that single audits be performed in accordance with generally accepted government auditing standards (GAGAS) issued by GAO. These standards provide a framework for conducting high-quality financial audits with competence, integrity, objectivity, and independence. The Single Audit Act Amendments of 1996 refined the Single Audit Act of 1984 and established uniform requirements for all federal grant recipients.
The refinements cover a range of fundamental areas affecting the single audit process and single audit reporting, including provisions to extend the law to cover all recipients of federal financial assistance, including, in particular, nonprofit organizations, hospitals, and universities; ensure a more cost-beneficial threshold for requiring single audits; more broadly focus audit work on the programs that present the greatest financial risk to the federal government; provide for timely reporting of audit results; provide for summary reporting of audit results; promote better analyses of audit results through establishment of a federal clearinghouse and an automated database; and authorize pilot projects to further streamline the audit process and make it more useful. The 1996 amendments required the Director of OMB to designate a Federal Audit Clearinghouse (FAC) as the single audit repository, required the recipient entity to submit financial reports and related audit reports to the clearinghouse no later than 9 months after the recipient’s year-end, and increased the audit threshold to $300,000. The criteria for determining which entities are required to have a single audit are based on the total amount of federal awards expended by the entity. The initial dollar thresholds were designed to provide adequate audit coverage of federal funds without placing an undue administrative burden on entities receiving smaller amounts of federal assistance. When the act was passed, the dollar threshold criteria for the audit requirement were targeted toward achieving audit coverage for 95 percent of direct federal assistance to local governments. As part of OMB’s biennial threshold review required by the 1996 amendments, OMB increased the dollar threshold for requirement of a single audit to $500,000 in 2003 for fiscal years ending after December 31, 2003. Federal oversight responsibility for implementation of the Single Audit Act is currently shared among various entities—OMB, federal agencies, and their respective Offices of Inspector General (OIG). The Single Audit Act assigned OMB the responsibility of prescribing policies, procedures, and guidelines to implement the uniform audit requirements and required each federal agency to amend its regulations to conform to the requirements of the act and OMB’s policies, procedures, and guidelines. OMB issued Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, which sets implementing guidelines for the audit requirements and defines roles and responsibilities related to the implementation of the Single Audit Act. The federal agency that awards a grant to a recipient is responsible for ensuring recipient compliance with federal laws, regulations, and the provisions of the grant agreements. The awarding agency is also responsible for overseeing whether the single audits are completed in a timely manner in accordance with OMB Circular No. A-133 and for providing annual updates of the Compliance Supplement to OMB. Some federal agencies rely on the OIG to perform quality control reviews (QCR) to assess whether single audit work performed complies with OMB Circular No. A-133 and auditing standards. The grant recipient (auditee) is responsible for ensuring that a single audit is performed and submitted when due, and for following up and taking corrective action on any audit findings. The auditor of the grant recipient is required to perform the audit in accordance with GAGAS. 
A single audit consists of (1) an audit and opinions on the fair presentation of the financial statements and the SEFA; (2) gaining an understanding of internal control over federal programs and testing internal control over major programs; and (3) an audit and an opinion on compliance with legal, regulatory, and contractual requirements for major programs. The audit also includes the auditor’s schedule of findings and questioned costs, and the auditee’s corrective action plans and a summary of prior audit findings that includes planned and completed corrective actions. Under GAGAS, auditors are required to report on significant deficiencies in internal control and on compliance associated with the audit of the financial statements. Recipients expending more than $50 million in federal funding ($25 million prior to December 31, 2003) are required to have a cognizant federal agency for audit in accordance with OMB Circular No. A-133. The cognizant agency for audit is the federal awarding agency that provides the predominant amount of direct funding to a recipient unless OMB otherwise makes a specific cognizant agency assignment. The cognizant agency for audit provides technical audit advice, considers requests for extensions to the submission due date for the recipient’s reports, obtains or conducts QCRs, coordinates management decisions for audit findings, and conducts other activities required by OMB Circular No. A-133. According to OMB officials, the FAC single audit database generates a listing of those agencies that should be designated cognizant agencies for audit based on information on recipients expending more than $50 million. The officials also stated that OMB is responsible for notifying both the recipient and cognizant agency for audit of the assignment. Federal award recipients that do not have a cognizant agency for audit are assigned an oversight agency for audit, which provides technical advice and may assume some or all of the responsibilities normally performed by a cognizant agency for audit. Federal grant awards to state and local governments have increased significantly since the Single Audit Act was passed in 1984. Because single audits represent the federal government’s primary accountability tool over billions of dollars each year in federal funds provided to state and local governments and nonprofit organizations, it is important that these audits are carried out efficiently and effectively. As shown in figure 1, the federal government’s use of grants to state and local governments has risen substantially, from $7 billion in 1960 to almost $450 billion budgeted in 2007. GAO supported the passage of the Single Audit Act, and we continue to support the single audit concept and principles behind the act as a key accountability mechanism over federal grant awards. However, the quality of single audits conducted under this legislation has been a longstanding area of concern since the passage of the Single Audit Act in 1984. During the 1980s, GAO issued reports that identified concerns with single audit quality, including issues with insufficient evidence related to audit planning, internal control and compliance testing, and the auditors’ adherence to GAGAS. The federal Inspectors General as well have found similar problems with single audit quality. The deficiencies we cited during the 1980s were similar in nature to those identified in the recent PCIE report. 
In June 2002, GAO and OMB testified at a House of Representatives hearing about the importance of single audits and their quality. In its testimony, OMB identified reviews of single audit quality performed by several federal agencies that disclosed deficiencies. However, OMB emphasized that an accurate statistically based measure of audit quality was needed, and should include both a baseline of the current status and the means to monitor quality in the future. We also recognized in our testimony the need for a solution or approach to evaluate the overall quality of single audits. To gain a better understanding of the extent of single audit quality deficiencies, OMB and several federal OIGs decided to work together to develop a statistically based measure of audit quality, known as the National Single Audit Sampling Project. The work was conducted by a committee of representatives from the PCIE, the Executive Council on Integrity and Efficiency (ECIE), and three State Auditors, with the work effort coordinated by the U.S. Department of Education OIG. The Project had two primary objectives: to determine the quality of single audits by performing QCRs of a statistical sample of single audits, and to make recommendations to address any audit quality issues noted. The project conducted QCRs of a statistical sample of 208 audits randomly selected from a universe of over 38,000 audits submitted and accepted for the period April 1, 2003, through March 31, 2004. The sample was split into two strata: Stratum 1: entities with $50 million or more in federal award expenditures, Stratum 2: entities with less than $50 million in federal award expenditures (with at least $500,000). The above split in the sample strata corresponds with the current threshold for designating a cognizant agency, which is for entities that expend more than $50 million in a year in federal awards. Table 1 shows the universe and strata used in the analysis and the reviews completed in the National Single Audit Sampling Project. The project covered portions of the single audit relating to the planning, conducting, and reporting of audit work related to (1) the review and testing of internal control and (2) compliance testing pertaining to compliance requirements for selected major federal programs. The scope of the project included review of audit work related to the SEFA and the content of all of the auditors’ reports on the federal programs. The project did not review the audit work and reporting related to the general purpose financial statements. The PCIE project team categorized the audits based on the results of the QCRs into the following three groups: Acceptable—No deficiencies were noted or one or two insignificant deficiencies were noted. This group also includes the subgroup, Accepted with Deficiencies, which is defined as one or more deficiencies with applicable auditing criteria noted that do not require corrective action for the engagement, but should be corrected on future engagements. Audits categorized into this subgroup have limited effect on reported results and do not call into question the auditor’s report. Examples of deficiencies that fall into this subgroup are (1) not including all required information in the audit findings; (2) not documenting the auditor’s understanding of internal control, but testing was documented for most applicable compliance requirements; and (3) not documenting internal control or compliance testing for a few applicable compliance requirements. 
Limited Reliability—Contains significant deficiencies related to applicable auditing criteria and requires corrective action to afford reliance upon the audit. Deficiencies for audits categorized into this group have a substantial effect on some of the reported results and raise questions about whether the auditors' reports are correct. Examples of deficiencies that fall into this category are (1) documentation did not contain adequate evidence of the auditors' understanding of internal control or testing of internal control for many or all compliance requirements; however, there was evidence that most compliance testing was performed; (2) lack of evidence that work related to the SEFA was adequately performed; and (3) lack of evidence that audit programs were used for auditing internal control, compliance, and/or the SEFA. Unacceptable—Substandard audits with deficiencies so serious that the auditors' opinion on at least one major program cannot be relied upon. Examples of deficiencies that fall into this group are (1) no evidence of internal control testing and compliance testing for all or most compliance requirements for one or more major programs, (2) unreported audit findings, and (3) at least one incorrectly identified major program. As shown in table 2, the PCIE study estimated that, overall, approximately 49 percent of the universe of single audits fell into the acceptable group. This percentage also includes "accepted with deficiencies." The remaining 51 percent had deficiencies that were severe enough to cause the audits to be classified as having limited reliability or being unacceptable. Specifically, for the 208 audits drawn from the universe, the statistical sample showed the following about the single audits reviewed in the PCIE study: 115 were acceptable and thus could be relied upon. This includes the category of "accepted with deficiencies." Based on this result, the PCIE study estimated that 48.6 percent of the entire universe of single audits was acceptable. 30 had significant deficiencies and thus were of limited reliability. Based on this result, the PCIE study estimated that 16.0 percent of the entire universe of single audits was of limited reliability. 63 were unacceptable and could not be relied upon. Based on this result, the PCIE study estimated that 35.5 percent of the entire universe of single audits was unacceptable. It is important to note the significant difference in results in the two strata. Specifically, 63.5 percent of the audits of entities in stratum 1 (those expending $50 million or more in federal awards) were deemed acceptable, while 48.2 percent of audits in stratum 2 (those expending at least $500,000 but less than $50 million) were deemed acceptable. Because of these differences, it is also important to analyze the results in terms of federal dollars. For the 208 audits drawn from the entire universe, the statistical sample showed the following about the single audits reviewed in the PCIE study: The 115 acceptable audits represented 92.9 percent of the value of federal award amounts reported in all 208 audits the PCIE study reviewed. The 30 audits of limited reliability represented 2.3 percent of the value of federal award amounts reported in all 208 audits the PCIE study reviewed. The 63 unacceptable audits represented 4.8 percent of the value of federal award amounts reported in all 208 audits the PCIE study reviewed. The dollar distributions for the 208 audits reviewed in the study are shown in table 3.
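Because the sample was stratified, the universe-wide percentages reported above are not simple sample proportions (115 of 208 audits is roughly 55 percent). As a rough illustration of our own, not taken from the PCIE report, the estimated acceptability rate can be approximated by weighting each stratum's observed rate by that stratum's share of the 38,523-audit universe; the PCIE's exact figures may reflect additional sampling adjustments.

\[
\hat{p}_{\text{acceptable}} \;=\; \frac{N_1}{N}\,\hat{p}_1 + \frac{N_2}{N}\,\hat{p}_2
\;\approx\; \frac{852}{38{,}523}(0.635) + \frac{37{,}671}{38{,}523}(0.482)
\;\approx\; 0.014 + 0.471 \;\approx\; 0.485,
\]

where \(N_1\) and \(N_2\) are the number of audits in strata 1 and 2, and \(\hat{p}_1\) and \(\hat{p}_2\) are the acceptability rates observed in each stratum. This is consistent with the 48.6 percent estimate reported for the entire universe, given rounding in the stratum rates.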
The most prevalent deficiencies related to the auditors’ lack of documenting an understanding of internal control over compliance requirements, testing of internal control of at least some compliance requirements, and compliance testing of at least some compliance requirements. The PCIE report states that for those audits not in the acceptable group, the project team believes that lack of due professional care was a factor for most deficiencies to some degree. The term due professional care refers to the responsibility of independent auditors to observe professional standards of auditing. GAGAS further elaborate on this concept in the standard on Professional Judgment. Under this standard, auditors must use professional judgment in planning and performing audits and in reporting the results, which includes exercising reasonable care and professional skepticism. Reasonable care concerns acting diligently in accordance with applicable professional standards and ethical principles. Using professional judgment in all aspects of carrying out their professional responsibilities—including following the independence standards, maintaining objectivity and credibility, assigning competent audit staff to the assignment, defining the scope of work, evaluating and reporting the results of the work, and maintaining appropriate quality control over the assignment process—is essential to performing a high quality audit. We previously noted similar audit quality problems in prior reports. In December 1985, we reported that problems found by OIGs in the course of QCRs mostly related to lack of documentation showing whether and to what extent auditors performed testing of compliance with laws and regulations. In March 1986, we reported that our own review of single audits showed that auditors performing single audits frequently did not satisfactorily comply with professional auditing standards. The predominant issues that we found in our previous reviews were insufficient audit work in testing compliance with governmental laws and regulations and evaluating internal controls. We also observed, through discussions with the auditors and reviews of their work, that many did not understand the nature and importance of testing and reporting on compliance with laws and regulations, or the importance of reporting on internal control and the relationship between reporting and the extent to which auditors evaluated controls. As a result, in 1986, we reported that the public accounting profession needed to (1) improve its education efforts to ensure that auditors performing single audits better understand the auditing procedures required, and (2) strengthen its enforcement efforts in the area of governmental auditing to help ensure that auditors perform those audits in a quality manner. Similar to our prior work, the PCIE report presents compelling evidence that a serious problem with single audit quality continues to exist. The PCIE study also reveals that the rate of acceptable audits for organizations with $50 million or more in federal expenditures was significantly higher than for audits for organizations with smaller amounts of federal expenditures. The results also showed that overall, a significant number of audits fell into the groups of limited reliability with significant deficiencies and unacceptable. In our view, the current status of single audit quality is unacceptable. We are concerned that audits are not being conducted in accordance with professional standards and requirements. 
These audits may provide a false sense of assurance and could mislead users of audit reports regarding issues of compliance and internal control over federal programs. The PCIE report recommended a three-pronged approach to reduce the types of deficiencies noted and improve the quality of single audits: 1. revise and improve single audit standards, criteria, and guidance; 2. establish minimum continuing professional education (CPE) as a prerequisite for auditors to be eligible to conduct and continue to perform single audits; and 3. review and enhance the disciplinary processes to address unacceptable audits and auditors who do not meet training and CPE requirements. More specifically, to improve standards, criteria, and guidance, the PCIE report recommended revisions to (1) OMB Circular No. A-133, (2) the AICPA Statement on Auditing Standards (SAS) No. 74, Compliance Auditing Considerations in Audits of Governmental Entities and Recipients of Governmental Financial Assistance, and (3) the current AICPA Audit Guide, collectively to emphasize correctly identifying the major programs for which opinions are rendered; make it clear when audit findings should be reported; include more detailed requirements and guidance for compliance testing; emphasize the minimal amount of documentation needed to document the auditor's understanding of, and testing of, internal control related to compliance; provide specific examples of the kind of documentation needed for risk assessment of individual federal programs; present illustrative examples of properly presented findings; specify the content and examples of the SEFA and any effect on the financial statements; emphasize requirements for management representations related to federal awards, similar to those for financial statement audits; provide additional guidance about documenting materiality; and require compliance testing to be performed using sampling in a manner prescribed by the AICPA SAS No. 39, Audit Sampling, as amended, to provide for some consistency in sample sizes. With respect to training, the PCIE report called on OMB to amend its Circular No. A-133 to require that (1) as a prerequisite to performing a single audit, staff performing and supervising the single audit must have completed a comprehensive training program of a minimum specified duration (e.g., at least 16–24 hours); (2) every 2 years after completing the comprehensive training, auditors performing single audits complete a minimum specified amount of CPE; and (3) single audits may only be procured from auditors who meet the above training requirements. The PCIE report also recommends that OMB develop, or arrange for the development of, minimum content requirements for the required training, in consultation with the National State Auditors Association (NSAA), the AICPA and its Governmental Audit Quality Center (GAQC), and the cognizant and oversight agencies for audit. The report states that the minimum content should cover the essential components of single audits and emphasize aspects of single audits for which deficiencies were noted in this project. In addition, the report recommends that OMB develop, or arrange for the development of, minimum content requirements for the ongoing CPE and develop a process for modifying future content. The report further recommends that OMB encourage professional organizations, including the AICPA, the NSAA, and qualified training providers, to offer training that covers the required content.
It also recommends that OMB encourage these groups to deliver the training in ways that enable auditors throughout the United States to take the training at locations near or at their places of business, including via technologies such as Webcasts, and that the training should be available at an affordable cost. The PCIE project report emphasizes that the training should be “hands on” and should cover areas where the project team specifically found weaknesses in the work or documentation in its statistical study of single audits. The report specifically stated that the training should cover requirements for properly documenting audit work in accordance with GAGAS and other topics related to the many deficiencies disclosed by the project, including critical and unique parts of a single audit, such as the auditors’ determination of major programs for testing, review and testing of internal controls over compliance, compliance testing, auditing procedures applicable to the SEFA, how to use the OMB Compliance Supplement, and how to audit major programs not included in the Compliance Supplement. The PCIE report concludes that such training would require a minimum of 16 to 24 hours, and that a few hours or an “overview” session will not suffice. We believe that the proposed training requirements would likely satisfy the criteria for meeting a portion of the CPE hours already required by GAGAS. This recommendation focuses on developing processes to address unacceptable audits and auditors not meeting the required training requirements. OMB Circular No. A-133 currently has sanctions that apply to an auditee (i.e., the entity being audited) for not having a properly conducted audit and requires cognizant agencies to refer auditors to licensing agencies and professional bodies in the case of major inadequacies and repetitive substandard work. The report noted that other federal laws and regulations do currently provide for suspension and debarment processes that can be applied to auditors of single audits. Some cognizant and oversight agency participants in the project team indicated that these processes are rarely initiated due to the perception that it is a large and costly effort. As a result, the report specifically recommends that OMB, with federal cognizant and oversight agencies, should (1) review the process of suspension and debarment to identify whether (and if so, how) it can be more efficiently and effectively applied to address unacceptable audits, and based on that review, pursue appropriate changes to the process; and (2) enter into a dialogue with the AICPA and State Boards of Accountancy to identify ways the AICPA and State Boards can further the quality of single audits and address the due professional care issues noted in the PCIE report. The report further recommends that OMB, with federal cognizant agencies, should also identify, review, and evaluate the potential effectiveness of other ways (both existing and new) to address unacceptable audits, including (but not limited to) (1) revising Circular No. A-133 to include sanctions to be applied to auditors for unacceptable work or for not meeting training and CPE requirements, and (2) considering potential legislation that would provide to federal cognizant and oversight agencies the authority to issue a fine as an option to address unacceptable audit work. While we support the recommendations made in the PCIE report, it will be important to resolve a number of issues regarding the proposed training requirement. 
Some of the unresolved questions involve the following: What are the efficiency and cost-benefit considerations for providing the required training to the universe of auditors performing the approximately 38,500 single audits? How can current mechanisms already in place, such as the AICPA’s Government Audit Quality Center (GAQC), be leveraged for efficiency and effectiveness purposes in implementing new training? Which levels of staff from each firm would be required to take training? What mechanisms will be put in place to ensure compliance with the training requirement? How will the training requirement impact the availability of sufficient, qualified audit firms to perform single audits? The effective implementation of the third prong, developing processes to address unacceptable audits and for auditors who do not meet professional requirements, is essential as the quality issues have been long-standing. We support the PCIE recommended actions to make the process more effective and efficient and to help ensure a consistent approach among federal agencies and their respective OIGs overseeing the single audit process. In addition to the findings and recommendations of the PCIE report, we believe there are two other critical factors that need to be considered in determining actions that should be taken to improving audit quality: (1) the distribution of unacceptable audits and audits of limited reliability across the different dollar amounts of federal expenditures by grantee, as found in the PCIE study; and (2) the distribution of single audits by size in the universe of single audits. These factors are critical in effectively evaluating the potential dollar implications and efficiency and effectiveness of proposed actions. The PCIE study found that rates of unacceptable audits and audits of limited reliability were much higher for audits of entities in stratum 2 (those expending less than $50 million in federal awards) than those in stratum 1 (those expending $50 million or more). Table 1 presented earlier in this testimony shows the data from the sample universe of single audits used by the PCIE. Analysis of the data shows that 97.8 percent of the total number of audits (37,671 of the 38,523 total) covered approximately 16 percent ($143.1 billion of the $880.2 billion) of the total reported value of federal award expenditures, indicating significant differences in distributions of audits by dollar amount of federal expenditures. At the same time, the rates of unacceptable audits and audits of limited reliability were relatively higher in these smaller audits. We believe that there may be opportunities for considering size characteristics when implementing future actions to improve the effectiveness and quality of single audits. For instance, there may be merit to conducting a more refined analysis of the distribution of audits to determine whether less-complex approaches could be used for achieving accountability through the single audit process for a category of the smallest single audits. Such an approach may provide sufficient accountability for these smaller programs. An example of a less-complex approach consists of requirements for a financial audit in accordance with GAGAS, that includes the higher level reports on internal control and compliance along with an opinion on the SEFA and additional, limited or specified testing of compliance. Currently, the compliance testing in a single audit is driven by compliance requirements under OMB Circular No. 
A-133 as well as program-specific requirements detailed in the compliance supplement. A less-complicated approach could be used for a category of the smallest audits to replace the current approach to compliance testing, while still providing a level of assurance on the total amount of federal grant awards provided to the recipient. Another consideration for future actions is strengthening the oversight of the cognizant agency for audit with respect to auditees expending $50 million or more in federal awards. As shown in the data from the sample universe of single audits used by the PCIE, 852 audits (or 2.2 percent) of the total 38,523 audits covered $737.2 billion (or 84 percent) of the reported federal award expenditures. This distribution suggests that targeted and effective efforts on the part of cognizant agencies aimed at improving audit quality for those auditees that expend greater than $50 million could achieve a significant effect in terms of dollars of federal expenditures. We continue to support the single audit concept and principles behind the act as a key accountability mechanism over federal awards. It is essential that the audits are done properly in accordance with GAGAS and OMB requirements. The PCIE report presents compelling evidence that a serious shortfall in the quality of single audits continues to exist. Many of these quality issues are similar in nature to those reported by GAO and the Inspectors General since the 1980s. We believe that actions must be taken to improve audit quality and the overall accountability provided through single audits for federal awards. Without such action, we believe that substandard audits may provide a false sense of assurance and could mislead users of audit reports. While we support the recommendations made in the PCIE report, we believe that a number of issues regarding the proposed training requirements need to be resolved. The PCIE report results also showed a higher rate of acceptable audits for organizations with larger amounts of federal expenditures and showed that the vast majority of federal dollars are being covered by a small percentage of total audits. We believe that there may be opportunities for considering size characteristics when implementing future actions to improve the effectiveness and quality of single audits as an accountability mechanism. Considering the recommendations of the PCIE within this larger context will also be important to achieve the proper balance between risk and cost-effective accountability. In addition to the considerations surrounding the specific recommendations for improving audit quality, a separate effort taking into account the overall framework for single audits may be warranted. This effort could include answering questions such as the following: What types of simplified alternatives exist for meeting the accountability objectives of the Single Audit Act for the smallest audits and what would the appropriate cutoff be for a less-complex audit requirement? Is the current federal oversight structure for single audits adequate and consistent across federal agencies? What alternative federal oversight structures could improve overall accountability and oversight in the single audit process? Are federal oversight processes adequate and are sufficient resources being dedicated to oversight of single audits? What role can the auditing profession play in increasing single audit quality? Do the specific requirements in OMB Circular No. A-133 and the Single Audit Act need updating? Mr. 
Chairman, we would be pleased to work with the subcommittee as it considers additional steps to improve the single audit process and federal oversight and accountability over federal grant funds. Mr. Chairman and members of this subcommittee, this concludes my statement. I would be happy to answer any questions that you or members may have at this time. For information about this statement, please contact Jeanette Franzel, Director, Financial Management and Assurance, at (202) 512-9471 or franzelj@gao.gov. Individuals who made key contributions to this testimony include Marcia Buchanan (Assistant Director), Robert Dacey, Abe Dymond, Heather Keister, Jason Kirwan, David Merrill, and Sabrina Springfield (Assistant Director). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Federal government grants to state and local governments have risen substantially, from $7 billion in 1960 to almost $450 billion budgeted in 2007. The single audit is an important mechanism of accountability for the use of federal grants by nonprofit organizations as well as state and local governments. However, the quality of single audits conducted under the Single Audit Act, as amended, has been a longstanding area of concern since the passage of the act in 1984. The President's Council on Integrity and Efficiency (PCIE) recently issued its Report on National Single Audit Sampling Project, which raises concerns about the quality of single audits and makes recommendations aimed at improving the effectiveness and efficiency of those audits. This testimony provides (1) GAO's perspective on the history and importance of the Single Audit Act and the principles behind the act, (2) a preliminary analysis of the recommendations made by the PCIE for improving audit quality, and (3) additional considerations for improving the quality of single audits. In the early 1980s, Congress had concerns about a lack of adequate oversight and accountability for federal assistance provided to state and local governments. In response to concerns that large amounts of federal financial assistance were not subject to audit and that agencies sometimes overlapped on oversight activities, Congress passed the Single Audit Act of 1984. The act adopted the single audit concept to help meet the needs of federal agencies for grantee oversight as well as grantees' needs for single, uniformly structured audits. GAO supported the passage of the Single Audit Act, and continues to support the single audit concept and principles behind the act as a key accountability mechanism for federal grant awards. However, the quality of single audits has been a longstanding area of concern since the passage of the act in 1984. In its June 2007 Report on National Single Audit Sampling Project, the PCIE found that, overall, approximately 49 percent of single audits fell into the acceptable group, with the remaining 51 percent having deficiencies severe enough to classify the audits as limited in reliability or unacceptable. PCIE found a significant difference in results by audit size. 
Specifically, 63.5 percent of the large audits (with $50 million or more in federal award expenditures) were deemed acceptable compared with only 48.2 percent of the smaller audits (with at least $500,000 but less than $50 million in federal award expenditures). The PCIE report presents compelling evidence that a serious problem with single audit quality continues to exist. GAO is concerned that audits are not being conducted in accordance with professional standards and requirements. These audits may provide a false sense of assurance and could mislead users of the single audit reports. The PCIE report recommended a three-pronged approach to reduce the types of deficiencies found and to improve the quality of single audits: (1) revise and improve single audit standards, criteria, and guidance; (2) establish minimum continuing professional education (CPE) as a prerequisite for auditors to be eligible to conduct and continue to perform single audits; and (3) review and enhance the disciplinary processes to address unacceptable audits and auditors who do not meet training and CPE requirements. In this testimony, GAO supports PCIE's recommendations and points out issues that need to be resolved regarding the proposed training, as well as other factors that merit consideration when determining actions to improve audit quality. GAO believes that there may be opportunities for considering size when implementing future actions to improve the effectiveness and quality of single audits. In addition, a separate effort considering the overall framework for single audits could answer such questions as whether simplified alternatives can achieve cost-effective accountability in the smallest audits; whether current federal oversight processes for single audits are adequate; and what role the auditing profession can play in increasing single audit quality. |
The United States taxes domestic corporations on their worldwide income, regardless of where it is earned, and provides credits for foreign income taxes paid. A U.S. parent corporation may directly or indirectly own multiple corporations, including both domestic and foreign subsidiaries. The U.S. taxes the worldwide income of U.S. corporations, whether earned domestically or abroad. However, the active business income earned by foreign subsidiaries is generally eligible for deferral from U.S. tax until it is distributed, usually in the form of dividends, to the U.S. parent corporation or other U.S. shareholders. When income is repatriated in this way, it may have already been taxed in the foreign country where it was earned. To avoid taxing foreign source income twice, the federal tax code allows U.S. parent corporations a foreign tax credit (FTC) for taxes paid to other countries. A U.S. corporation would pay U.S. tax on foreign-source income only to the extent that the U.S. tax on that income exceeds the FTC. Figure 1 shows how deferral affects how a dividend payment from a foreign subsidiary to its U.S. parent corporation is generally taxed under the U.S. worldwide tax approach. Passive income, such as dividends, interest, rental income, and royalties received by controlled foreign corporations, and certain types of easily manipulated active income, is not subject to deferral. The income that is not eligible for deferral is defined under Subpart F of the Internal Revenue Code (IRC). Unlike the United States, most developed countries do not tax corporations on their worldwide income. Instead, these countries use a territorial tax system that taxes only the income earned within a country's physical borders, and exempts from tax dividends received from foreign subsidiaries on their foreign earnings as well as gains realized on the sale of foreign subsidiaries. There has been a trend of developed countries moving towards territorial tax systems. As of 2012, 28 of the 34 current member countries of the Organisation for Economic Co-operation and Development have adopted some form of a territorial tax system. However, most countries generally do not use a pure form of either the worldwide (also known as a full-inclusion system) or territorial tax system. Rather, countries tend to use a hybrid system that contains some features of both systems. For example, the deferral provisions in the U.S. worldwide system delay the taxation of foreign source income, whereas a purer form of the worldwide system would tax this income as it is earned. As we have previously found, countries using the territorial approach do not exempt all foreign source income from taxation, but have exceptions for certain types of passive income. The income of controlled foreign corporations that is generated through the primary business activities related to financial services is excepted from Subpart F's anti-deferral regime. Interest income, for example, which would typically fall under the Subpart F definition, and thus would be taxed currently whether or not it is repatriated to a U.S. parent corporation, is permitted to be deferred under this active financial-services income provision. This tax expenditure is an exception to Subpart F because it treats what would otherwise be considered passive income as active business income that can be deferred since it was earned through the primary business activities of financial-services companies. The effect of the exception is to include financial-services companies among the U.S.
corporations that can defer taxation on their business income earned abroad. The United States taxes all foreign and domestic corporate income using a graduated corporate income tax rate structure. Corporations with less than $10 million in net taxable corporate income are subject to different tax rates, depending on the amount of income earned. As seen in table 1, income is taxed at graduated rates of 15, 25, 34, and 39 percent for various income levels up to $10 million, and 35 and 38 percent for income up to $18,333,333. The 38- and 39-percent rates reduce the benefits provided by the lower graduated rates. Finally, for corporations with taxable income higher than $18,333,333, a flat rate of 35 percent applies to all taxable income. Treasury and JCT designate the two deferral provisions and the graduated corporate income tax rate schedule as tax expenditures because they are special tax provisions that are exceptions to the normal structure of the corporate income tax system. The deferral provisions are designated as tax expenditures because they deviate from the baseline case of a pure worldwide tax system in which U.S. corporations would be taxed on their worldwide income whether or not the income is repatriated to the United States. The graduated rate provision is designated a tax expenditure because it is an exception to the normal structure of a flat corporate income tax rate. All three tax expenditures reduce revenue received by the federal government below what it would be under the normal structure established by Treasury and JCT. Treasury and JCT each compile an annual list of tax expenditures by budget function with estimates of the corporate and individual income tax revenue losses, also known as tax expenditure estimates. They separately calculate the estimated revenue losses for each tax expenditure under the assumptions that all other tax expenditures remain in the tax code and taxpayer behavior remains constant. Tax expenditure estimates do not represent the amount of revenue that would be gained if a particular tax expenditure was repealed, since repeal would probably change taxpayer behavior in some way that would affect revenue. See Office of Management and Budget (OMB), Analytical Perspectives, Budget of the United States Government, Fiscal Year 2014 (Washington, D.C.: 2013); and JCT, Estimates of Federal Tax Expenditures for Fiscal Years 2012-2017, JCS-1-13 (Washington, D.C.: Feb. 1, 2013). Deferral has long been a part of the tax code, and views of its purpose have changed over time. Currently, it is often viewed by tax experts and in the research that we reviewed as promoting competitiveness. However, deferral's effect on competitiveness depends on how competitiveness is defined. If competitiveness refers to the ability of U.S. multinational corporations to operate successfully in foreign markets through their subsidiaries, then deferral, which increases after-tax returns by delaying tax payments, provides a benefit that may enhance competitiveness. In foreign markets, U.S. corporations face competitors that, operating under a territorial tax system in their own countries, pay tax only in the foreign country. U.S. corporations under the worldwide system must pay the foreign tax plus any U.S. tax on the same income. By delaying this U.S. tax, deferral is said to move U.S. corporations closer to having a "level playing field" with their foreign competitors.
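The foreign tax credit and deferral mechanics described above can be illustrated with a short sketch. The Python example below is our own illustration, not the report's figure 1; the tax rates and dollar amounts are hypothetical, and the calculation ignores gross-up rules, expense allocation, and other details of actual FTC computations.

```python
# Simplified illustration of the foreign tax credit (FTC) and deferral
# discussed above. Rates and amounts are hypothetical examples, not figures
# from the report, and real FTC rules (gross-ups, limitations by income
# category) are omitted.

def residual_us_tax(foreign_earnings: float, foreign_rate: float, us_rate: float) -> tuple[float, float]:
    """Return (foreign tax paid, residual U.S. tax due when the earnings are
    repatriated), crediting foreign tax up to the tentative U.S. liability."""
    foreign_tax = foreign_earnings * foreign_rate
    us_liability = foreign_earnings * us_rate      # tentative U.S. tax on the same income
    ftc = min(foreign_tax, us_liability)           # credit cannot exceed the U.S. liability
    return foreign_tax, us_liability - ftc

# $100 earned abroad, 15 percent foreign rate, 35 percent U.S. rate.
foreign_tax, residual = residual_us_tax(100.0, 0.15, 0.35)
print(foreign_tax, residual)  # 15.0 20.0 -- the $20 is owed only on repatriation,
                              # so deferral postpones it; a competitor taxed only
                              # territorially would owe just the $15.
```

In this simplified picture, if the foreign rate were at or above the U.S. rate, the residual would be zero and deferral would provide no timing benefit.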
Whether the tax benefit provided by deferral results in net positive economic effects for the United States is the subject of debate. Some research has found that investments U.S. multinationals make abroad, due in part to the incentives provided by deferral, lead to positive economic effects for employment and wages in the United States, while others have questioned the magnitude of these effects. Treasury officials noted that in some instances U.S. foreign direct investment may be associated with increased investment in the United States. In other instances, it may be associated with decreased U.S. investment, meaning that the effect on employment and wages in the United States would be uncertain. However, whether this definition of competitiveness that focuses on multinationals is appropriate has been a subject of debate among experts. Competitiveness has also been defined as the ability of U.S. corporations to operate successfully in domestic markets, and to export products into foreign markets. Deferral provides no benefit to these purely domestic or exporting U.S. corporations. Rather than leveling the playing field, deferral benefits U.S. multinationals over other types of U.S. corporations. See Nicola Sartori and Reuven S. Avi-Yonah, "Symposium on International Taxation and Competitiveness: Foreword," Tax Law Review, vol. 65 (Spring 2012); this paper summarizes a variety of viewpoints on competitiveness. Deferral and other features of U.S. international corporate taxation allow U.S. multinational corporations to pay a much smaller U.S. effective tax rate on foreign source income than on domestic income. If these multinationals earn income in countries that, on average, have lower corporate tax rates than the United States, they have an advantage over purely domestic U.S. corporations because the average effective tax rate on the multinationals' worldwide income may be lower than the rate paid by the purely domestic corporations on their U.S. income. However, there is some research that has found that multinationals and domestic-only firms face similar effective tax rates. Other tax experts argue that the appropriate definition of competitiveness should focus on broader industry or national purposes rather than corporations. For example, a U.S. industry is said to be more competitive by attracting more investment and resources than foreign industries. For others, competitiveness is a more general concept, referring to the set of institutions, policies, and human and natural endowments that make a country productive. A tax policy that promotes competitiveness under this definition would try to ensure that the tax system does not prevent a country's resources from being put to their most productive uses. Countries that meet this standard can engage most effectively in international trade that can be mutually beneficial. Tax benefits for only certain corporations or industries may not meet this criterion. Finally, some experts note that competitiveness is the wrong concept to focus on when formulating tax policy and that efficiency, which we discuss in the following section, is the appropriate concept. They argue that using the wrong concept in this way leads to bad policy outcomes. Tax differences between countries can affect decisions made by multinational corporations, including where to invest in operations, where to locate their corporate residences, when to repatriate income from foreign subsidiaries, and whether to acquire foreign or domestic corporations.
Their decisions are said to be distorted when the corporations respond to tax differences by putting resources into less productive activities because these activities are taxed less heavily than more productive uses. As we stated in our guide, when this happens, the economy is not as productive as it could be, and society does not achieve as high a standard of living as it would if the distortion did not exist. As mentioned above, the United States uses a hybrid form of the worldwide tax system where deferral delays but does not eliminate the U.S. tax on foreign source income. In this way, the U.S. worldwide system has some features less like a pure worldwide system and more like a territorial system. Moving in either direction would affect deferral. On one hand, the United States could eliminate deferral if it moved towards a purer worldwide system by adopting a full inclusion system where foreign source income is taxed by the United States as it is earned rather than when it is repatriated. On the other hand, the United States could move towards a more territorial system by exempting foreign source income from U.S. taxation, or, in effect, making deferral permanent. The effect on corporations' decision making, and ultimately on efficiency, will depend on which way of ending deferral is adopted. The following discusses some of the decisions that have been identified in the literature where deferral could increase or decrease distortions. Some tax experts argue that deferral may distort decisions about where U.S. corporations invest, compared to a full-inclusion system, if the U.S. tax rate is higher than foreign tax rates, as is often the case. This difference in tax rates, combined with the ability to defer paying the higher U.S. tax until income is repatriated, could mean that a U.S. corporation earns more after taxes from a less productive investment abroad than from a more productive investment at home. The efficiency loss is the loss of income (or product) that results when the corporation chooses the less productive foreign investment because it produces a higher after-tax return. Some research has shown that differences in tax burden do affect corporations' real investment decisions, which could lead to these efficiency losses. Under a full inclusion system, this distortion affecting the allocation of investment domestically or abroad could potentially be reduced because the tax incentive to invest abroad would be eliminated, since corporations would pay the same U.S. tax on their worldwide income, whether it comes from foreign or domestic investments. Under a territorial system, these investment location distortions may increase relative to the current system because the tax incentive to invest in low-tax countries may be enhanced when the differences in tax rates are made permanent. However, the responsiveness of investment decisions to tax rate differences indicates that U.S. corporations could be at a disadvantage under a full inclusion system, as they could be competing against foreign companies which would likely be taxed under a territorial tax system and at a lower tax rate. This disadvantage would be removed under a territorial system where U.S. corporations would face the same tax rate as foreign competitors also operating in those countries. See Organisation for Economic Co-operation and Development, Tax Effects on Foreign Direct Investment: Recent Evidence and Policy Analysis: Tax Policy Study No. 17 (2007).
Some experts and research that we reviewed argue that deferral may reduce distortions, compared to a full inclusion system, in decisions about where businesses incorporate and whether U.S. corporations choose to change their country of incorporation to a foreign country (so-called corporate inversions). By choosing not to have its corporate residence in the U.S., a corporation could permanently avoid U.S. tax on income earned abroad, and on whatever income it is able to shift out of the U.S. Deferral may somewhat reduce this distortion by allowing corporations to defer U.S. tax on income earned abroad. Some research has indicated that taxing income once it is repatriated affects decisions of where to incorporate, or whether to change incorporation from one country to another. Some research has also shown that most inversions that occur for tax reasons are intended to avoid U.S. tax on income earned in the U.S. by increasing the scope for income shifting, rather than to avoid U.S. tax on foreign-source income. However, recent research has found mixed results on trends in the number of inversions. Some have found that only a small number of U.S. corporations that conduct initial public offerings have reincorporated in low-tax countries, while others have recently highlighted an increase in inversions. In 2004, legislation was passed to limit the ability of U.S. corporations to change their country of incorporation to a foreign country. This corporate residence distortion could be increased by full inclusion, which would raise the effective U.S. tax on income earned abroad, and encourage companies to avoid this tax by moving their residence abroad. A territorial system would eliminate this incentive by removing the U.S. tax on foreign source income. It is also argued that deferral, compared to a full inclusion system, improves economic efficiency by removing distortions that affect decisions about which subsidiaries and other assets corporations own. Some corporate groups may be able to use foreign subsidiaries and assets more productively because of synergies that result from ownership within the corporate group, while another corporate group that acquired these subsidiaries and assets would not have these synergies and therefore would not be able to use them as productively. For this reason, tax differences could lead to productivity losses when a corporation without those synergies, but with more favorable tax treatment, is able to outbid a corporation with those synergies for ownership of those assets. In this case, the use of deferral or a territorial system makes inefficiency less likely and the move to full inclusion makes it more likely. However, others argue that ownership synergies do not have significant effects on productivity because there are numerous ways for corporations to use assets as productively without owning them, such as leasing, contract manufacturing, or licensing of trademarks or technology. In that case, deferral or a move to a territorial system would not produce significant efficiency gains. Some research has also shown that deferral can distort decision making by affecting the timing of repatriations, referred to as the "lockout" effect. The distortion would happen if deferral incentivizes corporations to keep income abroad rather than repatriating it to the higher tax country. This income may be more productive if repatriated and reinvested at home rather than retained (or "locked out") abroad for tax reasons.
Although estimates have varied over time, they consistently show that the lockout effect does have efficiency costs. Estimates from 2001 of the efficiency cost of U.S. multinational corporations from the lockout effect put the size of the loss at about 1 percent of foreign pretax income. However, the large repatriations under the 2004 tax holiday have suggested to some researchers that these earlier estimates may be too small. More recent estimates have shown that the efficiency loss increases with the amount of earnings accumulated abroad, and could be as high as 7 percent of foreign pretax income by 2015. Both the territorial and full inclusion systems eliminate the lockout effect. The territorial system makes foreign earnings tax free whether or not they are repatriated, and the full inclusion system makes foreign earnings taxable without repatriation. The extensive literature on deferral disagrees on its overall impact on efficiency, or whether a movement toward a full inclusion or territorial system would improve efficiency. Deferral's effect on the decisions just discussed, where to invest, where to locate headquarters, whether to make an acquisition, and when to repatriate income can depend on factors such as the location of the market (domestic or foreign) and the source of investment capital (again, domestic or foreign). In addition, there may be empirical disagreement about the size of an effect. Without agreement on the separate effects on efficiency, there is no agreement about how to add them up to get an overall effect. We were unable to find any studies that specifically estimate the distribution of the benefits from the two deferral tax expenditures. Treasury, CBO, and the Tax Policy Center have developed estimates of the distribution of the corporate tax burden as a whole. However, these studies may not indicate who ultimately benefits from deferral and, further, whether deferral is fair and equitable. The distribution of ultimate beneficiaries, referred to as the economic incidence of the tax benefit, depends on the extent that the tax provision leads to changes in the prices of goods or services. For example, the tax benefit for corporations from deferral may be passed on to consumers through lower prices, to employees through higher wages, or to investors through higher returns. Economic incidence is difficult to determine due to the complexity of the interactions that produce these price and income changes. Studies of the distribution of burdens and benefits usually base their estimates of economic incidence on empirical studies of how prices in relevant markets, including markets for goods and services or labor and capital markets, respond to changes in certain tax provisions. The studies of the corporate tax burden that we identified did not estimate the effect of deferral and their methods may require adjustments before such an estimate can be made. Without these estimates, informed judgments about deferral's fairness will be hard to draw because such judgments depend on knowing who receives the benefit of the tax expenditure. Equally, the distributional effects of the territorial and full inclusion alternatives to deferral are also unknown, and informed judgments about the fairness of the alternatives cannot be made. Although the ultimate beneficiaries are unknown, there is some evidence that certain industries benefit more from deferral than others. An IRS study found that during the one-time U.S.
repatriation tax holiday in 2004, certain industries, such as companies involved in pharmaceutical manufacturing and computer and electronic equipment manufacturing, benefited disproportionately, as they repatriated significantly more income in the form of dividends relative to the size of the tax filers. There is widespread agreement among tax experts that the U.S. system for taxing foreign source income is complex and adds burden for IRS and taxpayers. Deferral contributes to this complex system by enhancing the incentive for corporations to shift income abroad to be taxed at lower rates. Deferral further adds complexity by interacting with a number of tax provisions designed to limit income shifting. One of those provisions, Subpart F of the IRC, creates an exception to the general rule of deferral by defining certain types of passive income, such as interest and royalties, as well as certain other easily manipulated income, as ineligible for deferral. These types of income are viewed as subject to greater manipulation to reduce taxes because they can be artificially shifted between related parties. Moreover, there are also exceptions to Subpart F. As noted previously, interest income that is generated through the primary business activities of financial-services companies of controlled foreign corporations is eligible for deferral. These various provisions add complexity as taxpayers must determine which income can be deferred. Deferral also affects complexity by interacting with the foreign tax credit and transfer pricing. Our prior work has highlighted these areas as major sources of compliance risk and burden. Deferral allows corporations to time their repatriations of foreign source income for periods when they have excess foreign tax credits, which can be used to lower the amount of U.S. tax they pay. In these cases, complex rules for determining the source of income are required to ensure that the foreign tax credits are applied only against the portion of the corporation's worldwide taxable income attributable to foreign sources. Transfer pricing rules limit income shifting by requiring that related corporations charge prices for the goods and services they sell to each other that are comparable to market prices. Identifying and evaluating these transfer prices can be difficult for IRS and taxpayers when, as often is the case with intangible property, limited information exists on comparable market prices. Our prior work on corporate tax expenditures identified no related federal activities sharing the same reported purpose as the two deferral tax expenditures. Although we have highlighted export promotion programs as an area of potential duplication and overlap, these programs are focused primarily on small companies rather than U.S. multinational corporations. Deferral applies to the foreign income of U.S. multinational corporations and interacts with a number of other tax provisions, such as Subpart F. When considering reform to this system, changes to deferral would need to be coordinated with changes to other tax provisions. According to JCT estimates produced in 2011 and reported by CBO, ending deferral by moving to a full inclusion worldwide system, where foreign source income is taxed whether or not repatriated, would increase federal revenues by $4.7 billion in 2012. According to the same estimates, exempting active foreign dividends from U.S. tax, similar to that of a territorial tax system, and changing the tax treatment of overhead expenses would increase revenues by $3.3 billion in 2012.
These estimates are based on specific proposals to change the tax code, and include behavioral responses by taxpayers to the tax change. The revenue estimate for exempting active foreign dividends shows an increase in revenue chiefly because the expense allocation rules under this option would reduce the expenses that can be deducted from U.S. income relative to the current system's expense allocation rules. The effect on U.S. tax revenue of full inclusion and territoriality depends on the incentives the alternatives provide to shift income out of the United States and its taxing authority, and on the specifics of the alternatives' design. The incentive to locate income in low-tax countries may be lower under full inclusion and higher under the territorial system, which could erode the U.S. corporate tax base. However, territorial systems in practice include design features, such as a minimum tax on foreign source income, that are intended to limit these losses. JCT and Treasury also make tax expenditure estimates on a regular basis that do not account for how taxpayer behavior may change when a tax expenditure is altered. Although these estimates do not represent the amount of revenue that would be gained if deferral were eliminated, they can indicate how revenue losses may be changing over time. The tax expenditure estimates show that revenue losses from the general deferral tax expenditure have increased significantly. These estimates of increasing tax revenue losses are consistent with changes in the location of earnings of U.S. corporations. During this period, U.S. corporations were earning an increasing share of their profits from foreign sources, likely increasing the amount of income deferred abroad. As seen in figure 2, U.S. corporate profits earned abroad, compared to total U.S. corporate profits, have increased moderately since 1997. In addition, a number of legislative changes may have affected the revenue losses from deferral by making it easier to shift or keep income abroad. These include the look-through rule exception from Subpart F. This rule provides that dividends, interest, rents, and royalties received or accrued by one controlled foreign corporation from a related controlled foreign corporation are not treated as Subpart F income, and are eligible for deferral. Finally, some have suggested that in light of the U.S. tax repatriation holiday in 2004, which allowed U.S. corporations to exempt most dividends from tax on a one-time basis, U.S. multinational corporations may have accumulated foreign earnings abroad in anticipation of another repatriation holiday. No federal agency has been tasked with evaluating deferral. Since 1994, we have recommended greater scrutiny of tax expenditures, as periodic reviews could help determine how well specific tax expenditures work to achieve their goals, and how their benefits and costs compare to those of programs with similar goals. However, as we reported in June 2013, the Office of Management and Budget (OMB) has not developed a framework for reviewing tax expenditure performance. We made a number of recommendations to OMB, including that it provide guidance to agencies to identify tax expenditures that contribute to each appropriate agency goal. In July 2013, OMB released guidance that directs agencies to identify tax expenditures that contribute to their goals.
The purpose of the graduated corporate income tax rate schedule has generally been described in the academic literature and by tax experts as supporting small businesses by reducing their tax burden. The tax expenditure benefits businesses that organize under Subchapter C of the IRC, "C corporations," by taxing their income at reduced tax rates when the income falls beneath certain limits. To the extent that small corporations have income beneath these limits, they could benefit from the reduced rates. Some rationales for providing this tax benefit to smaller businesses include encouraging entrepreneurship, innovation, and small business growth and employment. It has been argued in some academic literature that the greater after-tax income may make small businesses more attractive to investors, and may alleviate a lack of access to capital that small businesses experience due to limited information on their business model or profit potential. Similar justifications have been made for providing benefits through other federal programs, such as federal small business loan programs. CRS' tax expenditure compendium details how the graduated corporate income tax rate schedule has developed and changed legislatively over time. Evidence is mixed on whether the lower corporate tax rates provided by the graduated rate schedule increase business formation. Some research shows that, although only a small number of start-up companies initially form as C corporations, when and if these businesses generate profits, they have an incentive to incorporate so that these profits are taxed at lower corporate tax rates. However, other research has also indicated that incentives can produce the opposite effect. Businesses are less likely to incorporate if corporate tax rates are high, compared to individual tax rates, and instead may choose another form of business entity that is taxed under the individual tax rates. In contrast, businesses with losses will typically prefer not to incorporate so that these losses can be deducted from other higher taxed personal income. Research has also shown that in 2007 a majority of unincorporated small businesses faced a marginal tax rate of 10 to 25 percent, making the rates they paid comparable to those of the graduated corporate income tax rates. IRS data have also shown that the number of businesses that organize themselves as C corporations has declined, while those organizing as S corporations and partnerships have been rising in the past decade. See figure 3. See Simeon Djankov, Tim Ganser, Caralee McLiesh, Rita Ramalho, and Andrei Shleifer, "The Effect of Corporate Taxes on Investment and Entrepreneurship," American Economic Journal: Macroeconomics, vol. 2 (July 2010); and Julie Berry Cullen and Roger H. Gordon, "Taxes and Entrepreneurial Risk-Taking: Theory and Evidence for the U.S.," Journal of Public Economics, vol. 91 (2007). Some academic literature has suggested that the graduated rates can cause inefficiency by providing relief to some corporations and not others depending on their taxable income. The economy is less efficient if the rates divert resources from one type of corporation to another based on tax considerations, rather than on how productively the corporations use the resources. Other sources of potential inefficiency include the incentive provided by the graduated rates for small businesses to form as C corporations to take advantage of lower corporate rates, compared to those of individual tax rates.
A Treasury study found that higher differentials between corporate and non-corporate tax rates increased the likelihood that a firm would convert from C to S corporation status after the Tax Reform Act of 1986. In this case, small businesses may choose an organizational form that they would not have selected without the tax incentive, suggesting that this may not be the most productive way for them to organize their operations. The graduated rates could be justified on efficiency grounds if, from society's point of view, too few small businesses would be formed without the incentive, given their potential for profit and innovation. It has been argued by some research that small businesses need support because they provide a disproportionate share of innovation and net job creation. However, more recent research has shown that a small number of new businesses may generate most of the innovation and net job creation. If this is the case, targeted federal support for certain small businesses may be more effective than graduated rates that apply to all corporations with less than a certain amount of taxable income. The magnitude of the efficiency effects of the graduated rates has not been estimated, but experts agree that the effect of reducing or eliminating the rates will depend on how the change is implemented. For example, reducing or eliminating the rates without making similar changes to individual tax rates may motivate companies to change organizational form—from C corporations to "pass-through" entities like S corporations and partnerships—to take advantage of the differences between corporate and individual income tax rates. As with the deferral tax expenditures, studies that specifically estimate the distribution of the benefits from the graduated corporate income tax rate schedule are unavailable. Without these estimates, conclusions about the fairness of the tax expenditure will be hard to draw because such judgments depend on who bears the burden of the tax or receives the benefit of the tax expenditure. The ultimate beneficiaries depend on the extent to which the tax provision leads people to make decisions that change the prices of goods or services. Just as in the case of the deferral tax benefit, the benefit of graduated rates may be passed on to consumers through lower prices, to employees through higher wages, or to investors through higher returns. Although we did not find any estimates that isolate the compliance and administrative costs associated with the graduated rates, both costs are likely to be relatively low. IRS officials could not highlight any administrative or compliance issues involved with administering the graduated corporate income tax rate schedule. They said that applying the graduated corporate income tax rate schedule for a particular taxpayer is primarily a computational issue, and does not present much uncertainty to taxpayers in determining their tax liabilities. However, IRS research has found some evidence that corporations' taxable income tends to cluster below rate changes introduced by the tax rate brackets. The research found that if the tax net income of corporations in their sample of Schedule M-3 filers (generally those with assets of at least $10 million) from tax years 2004 through 2008 rose 5 percent, a substantial number of corporations would face higher marginal tax rates. This clustering of filers around certain tax rates may be the result of tax planning that increases compliance costs.
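The bracket arithmetic behind this clustering discussion can be sketched as follows. The thresholds below are the pre-2018 statutory boundaries commonly cited for the schedule summarized in table 1, and the example income is hypothetical; the sketch simply checks whether a 5 percent rise in income would cross into a higher marginal-rate bracket.

```python
# Illustrative sketch of the graduated corporate rate schedule summarized in
# table 1 (pre-2018 law). Bracket boundaries are the commonly cited statutory
# thresholds; the example income is hypothetical.

BRACKETS = [              # (upper bound of taxable income, marginal rate)
    (50_000, 0.15),
    (75_000, 0.25),
    (100_000, 0.34),
    (335_000, 0.39),      # "bubble" rate that claws back the 15/25 percent benefit
    (10_000_000, 0.34),
    (15_000_000, 0.35),
    (18_333_333, 0.38),   # second bubble rate; above this a flat 35 percent applies
    (float("inf"), 0.35),
]

def marginal_rate(income: float) -> float:
    """Rate that applies to the last dollar of taxable income."""
    for upper, rate in BRACKETS:
        if income <= upper:
            return rate
    return 0.35

def tax_liability(income: float) -> float:
    """Total tax computed bracket by bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# The IRS analysis described above asks whether a 5 percent rise in income
# pushes a filer past a bracket boundary into a higher marginal rate.
income = 97_000.0                                           # hypothetical filer just below $100,000
print(marginal_rate(income), marginal_rate(income * 1.05))  # 0.34 0.39
print(tax_liability(100_000.0))                             # 22250.0 -- sanity check of the schedule
```

Under this schedule, clustering just below the $100,000, $10 million, and $15 million boundaries would be consistent with filers planning around the points where the marginal rate steps up.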
The compliance and administrative costs of the graduated rates have not been estimated separately from the costs of complying with and administering all the provisions of the corporate income tax. However, estimates of the total compliance burden of small businesses may give some context to the compliance costs associated with graduated rates. A 2007 study of businesses with assets of less than $10 million in 2002 found that small businesses initially face significant fixed compliance costs, which increase at a decreasing rate as the business grows. However, the specific administrative and compliance costs of the graduated rates may not be a large part of these costs compared with other, more complicated provisions of the tax code. Our prior work on corporate tax expenditures found no related federal spending program sharing the same reported purpose as the graduated rates of supporting small businesses that adopt the corporate form of legal organization. However, there are federal spending programs that share, at least in part, the similar purpose of supporting entrepreneurs and small businesses. In prior work, we identified 52 programs at the U.S. Departments of Agriculture, Commerce, and Housing and Urban Development, and the Small Business Administration, which all overlap with at least one other program in terms of the type of assistance they are authorized to offer, and the type of entrepreneur they are authorized to serve. Changes in the graduated rates are generally part of proposals to reduce the overall corporate tax rate, which are discussed in the context of tax reform. The question of whether the tax expenditure could be better designed to target small businesses, or whether spending or non-tax policies that support small businesses may be preferable, has not been part of that discussion. According to JCT estimates produced in 2011 and reported by CBO, moving to a single corporate rate of 35 percent would have raised $1.5 billion in 2012. This estimate is based on a specific proposal to change the tax code, and includes behavioral responses by taxpayers to the tax change. JCT and Treasury also annually calculate tax expenditure estimates that do not account for how taxpayer behavior may change when a tax expenditure is altered. Because they do not account for these behavioral changes or for interactions with other tax provisions, the tax expenditure estimates available for the graduated rates do not represent the amount of revenue that would be gained if these rates were repealed. However, as mentioned above in the case of deferral, these estimates can indicate how revenue losses may be changing over time. As shown in figure 4, estimated tax revenue losses from the graduated corporate income tax rate schedule have decreased in the past decade. From 1998 through 2012, estimated tax revenue losses fell from $7.3 billion to $4.3 billion in constant 2012 dollars. The estimated fiscal year 2012 loss was 3 percent of all estimated revenue losses from corporate tax expenditures ($147 billion). The estimate was equal to 1.8 percent of corporate tax revenue in 2012. The decrease shown in figure 4 may be due, in part, to fewer companies incorporating as C corporations, a trend we highlighted above. As in the case of the deferral tax expenditures, no agency has been tasked with evaluating the graduated corporate income tax rate schedule.
In June 2013, we made a number of recommendations to OMB, including that it should provide guidance to agencies to identify tax expenditures that contribute to each appropriate agency goal. In July 2013, OMB released guidance that directs agencies to identify tax expenditures that contribute to their goals. We provided a draft of this report to the Secretary of the Treasury and the Commissioner of Internal Revenue for comment. We also asked the Joint Committee on Taxation (JCT) and all external experts we interviewed to review a draft of this report. Treasury, IRS, JCT, and external experts provided technical comments that were incorporated, as appropriate. We sent copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you have any questions on this report, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report uses our tax expenditures evaluation guide to determine what is known about the following three tax expenditures: (1) the deferral of income for controlled foreign corporations; (2) deferred taxes for certain financial firms on income earned overseas; and (3) the graduated corporate income tax rate. Because deferred taxes for certain financial firms on income earned overseas is a special case of the treatment of foreign source income of all controlled foreign corporations, the first section of the report focuses our discussion on the more general case of all controlled foreign corporations using deferral. The second section of the report covers the graduated corporate income tax rate schedule. For both sections of the report, we cover the five questions outlined in our guide and listed below to determine what is known about each tax expenditure. We highlight which questions we are answering for each tax expenditure at the beginning of each section of the report. To evaluate the three tax expenditures above, we applied our tax expenditure evaluation guide, which was issued in November 2012. The guide outlines a series of questions and sub-questions that can be used to evaluate tax expenditures. The five primary questions and sub-questions outlined in the guide are: 1. What is the tax expenditure's purpose and is it being achieved? What is the tax expenditure's intended purpose? Have performance measures been established to monitor success in achieving the tax expenditure's intended purpose? Does the tax expenditure succeed in achieving its intended purpose? 2. Even if its purpose is achieved, is the tax expenditure good policy? Does the tax expenditure generate net economic benefits for society? Is the tax expenditure fair? Is the tax expenditure simple, transparent, and administrable? 3. How does the tax expenditure relate to other federal programs? Does the tax expenditure contribute to a designated cross-agency priority goal? Does the tax expenditure duplicate or overlap with another federal effort? Is the tax expenditure being coordinated with other federal activities? Would an alternative to the tax expenditure more effectively achieve its intended purpose? 4. What are the consequences for the federal budget of the tax expenditure? Are there budget effects not captured by Treasury's or the Joint Committee on Taxation's tax expenditure estimates?
Are there options for limiting the tax expenditure's revenue loss? 5. How should evaluation of the tax expenditure be managed? What agency or agencies should evaluate the tax expenditure? When should the tax expenditure be evaluated? What data are needed to evaluate the tax expenditure? The guide's questions cover a number of different policy objectives. Sometimes, these objectives compete. This report provides information responsive to the questions, but does not attempt to balance the different objectives or make recommendations. Rather, policymakers are better positioned to judge how competing policy objectives should be weighed. As we note in our tax expenditure evaluation guide, it is not a "one size fits all" framework for evaluating tax expenditures. We used reasonable judgment in applying the guide's questions and concepts to evaluate the three tax expenditures. In some instances, we focused our discussion on certain questions in the guide because they were more relevant to the tax expenditures we were evaluating, while devoting less discussion to others that were more technical in nature. Question 1 above covers the tax expenditure's intended purpose and whether it is being achieved. Since the purpose of the deferral tax expenditures is unclear, we did not address the sub-questions related to whether the deferral tax expenditures achieve their intended purpose and whether performance measures have been established. For question 2 above, our discussion of the deferral tax expenditures and the criteria for good policy also incorporates question 3, which covers alternatives to the tax expenditures, because alternative proposals relate naturally to the criteria outlined in question 2. To the extent that we use our tax expenditure evaluation guide in the future on other tax expenditures, the structure and focus of future reports may differ from how it is presented in this report. To determine what is known about the deferral tax expenditures and the graduated corporate income tax rate schedule by answering the five questions listed above, we reviewed the following sources: Our previous work on tax expenditures, tax reform, tax policy and administration, duplication, overlap, and fragmentation, and results-oriented government and program evaluation. Previous work by the Congressional Research Service (CRS), the Congressional Budget Office (CBO), the Joint Committee on Taxation (JCT), the Department of the Treasury (Treasury), and the Internal Revenue Service (IRS). Legislation, statutes, and regulations. Academic and scholarly research on the tax expenditures and corporate taxation. To identify academic literature, we searched for terms and certain authors in a number of academic literature databases, such as ProQuest, EconLit, and Social SciSearch. We reviewed and identified academic literature cited in CRS' tax expenditure compendium, and a comprehensive study by JCT on foreign direct investment. We reviewed articles published in the National Tax Journal and asked Treasury, IRS, and the external experts we interviewed for recommendations on articles to review. We also interviewed Treasury and IRS officials, and external experts affiliated with CRS and two universities, who specialize in the U.S. corporate income tax system. The results from these interviews are not generalizable. Treasury tax expenditure estimates for fiscal years 1998 through 2012, and estimates by JCT on the revenue effects of making changes to the tax expenditures in our scope.
To identify how the deferral and graduated corporate income tax rate schedule tax expenditures have changed in terms of their aggregate estimated revenue losses, we analyzed tax expenditure estimates developed by Treasury and reported by the Office of Management and Budget in the Federal Budget's Analytical Perspectives for fiscal years 1998 through 2012. We converted all tax expenditure estimates for each fiscal year into 2012 constant dollars to adjust for inflation. We did so by using the chain price indexes reported in the fiscal year 2014 federal budget. While sufficiently reliable as a gauge of general magnitude, summing tax expenditure estimates does not take into account any interactions between tax expenditures. In addition, tax expenditure estimates do not incorporate any behavioral responses. Thus, they do not represent the revenue amount that would be gained if a specific tax expenditure was repealed. To identify JCT estimates of the revenue effects of making changes to the tax expenditures in our scope, we reviewed CBO's latest report that outlines spending and revenue options. These options outline a number of changes to the tax expenditures in our scope with accompanying estimates from JCT. JCT's revenue-effect estimates take into account a number of behavioral changes, unlike the tax expenditure estimates that Treasury and JCT complete. These include possible behavioral changes in: (1) corporate dividends and retained earnings; (2) the corporate capital structure; (3) corporate equity valuations; (4) repatriations of deferred foreign income; and (5) business entity choice. Data estimates from the IRS Statistics of Income (SOI) Corporate Tax File, 2010, the most recent year available at the time of our work, and Bureau of Economic Analysis (BEA) data on corporate profits from 1987 through 2012. We requested estimates from the IRS SOI 2010 Corporate Tax File on the number of C corporations by their corporate income tax bracket, and a measurement of their size—in this case, business receipts. Data compiled by IRS SOI are based on a stratified random sample of 63,630 corporate income tax returns for 2010 from corporations whose accounting years end from July 1, 2010, through June 30, 2011. These estimates are subject to sampling errors. The margin of error is based on a 95-percent confidence interval. For our report, IRS SOI provided data on C corporations, which include active corporations filing tax forms 1120, 1120-F, 1120-L, and 1120-PC. Data are not included for "pass-through" entities, which file on forms 1120S, 1120-REIT, and 1120-RIC. We used business receipts as our measurement of size and used IRS' breakout for the different sizes of business receipts. We also obtained data from IRS on the number of different types of business form entities (C corporations, S corporations, and partnerships) from 1986 to 2008. We also analyzed data from BEA on corporate profits by industry from 1987 to 2012. To determine how the composition of U.S. corporate profits has changed over time, we took the ratio of the amount of profit earned by U.S. corporations abroad to the total amount of profits earned by U.S. corporations. This analysis was based on a similar analysis used in academic literature. To assess the reliability of the data and estimates, we reviewed agency documentation, interviewed agency officials, and reviewed our prior reports that have used the data and estimates.
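As an illustration of the two calculations just described, the sketch below deflates nominal estimates into 2012 constant dollars with a chain price index and computes the foreign share of corporate profits. All index values and dollar amounts are placeholders, not actual budget, Treasury, or BEA figures.

```python
# Minimal sketch of the methodology described above: (1) converting nominal
# tax expenditure estimates into 2012 constant dollars using a chain price
# index, and (2) taking the foreign share of total U.S. corporate profits.
# Every number below is a placeholder for illustration only.

chain_price_index = {1998: 0.750, 2005: 0.880, 2012: 1.000}   # hypothetical index (2012 = 1.000)
nominal_estimates = {1998: 5.0, 2005: 6.0, 2012: 4.3}          # hypothetical estimates, billions of dollars

def to_2012_dollars(nominal: float, year: int) -> float:
    """Rescale a nominal-dollar estimate into constant 2012 dollars."""
    return nominal * (chain_price_index[2012] / chain_price_index[year])

constant_2012 = {year: round(to_2012_dollars(value, year), 2) for year, value in nominal_estimates.items()}
print(constant_2012)   # the 1998 figure is scaled up by 1.000 / 0.750

def foreign_profit_share(foreign_profits: float, total_profits: float) -> float:
    """Share of total U.S. corporate profits earned abroad, as in figure 2."""
    return foreign_profits / total_profits

print(foreign_profit_share(400.0, 1_600.0))   # 0.25 with these made-up amounts
```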
We determined that Treasury, JCT, BEA, and IRS data and estimates were sufficiently reliable for our purposes. However, the IRS SOI corporate sample may not provide a precise estimate of the number of taxpayers claiming a tax expenditure when the number of taxpayers is very small. We conducted our work from April to September 2013 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives, and to discuss any limitations in our work. We believe that the information and data obtained and the analysis conducted provide a reasonable basis for any findings and conclusions in this product. Many proposals for changing the way the current U.S. system taxes foreign source income are detailed and complex. In general, however, they involve redesigning the current system to more closely resemble a pure worldwide or a pure territorial system. The basic designs of the three systems being considered as alternatives are the following: The current worldwide system with deferral. Foreign active business income is taxed when repatriated as dividends to the U.S. This system has a foreign tax credit limited to the U.S. tax liability on foreign source income, and certain anti-deferral provisions like Subpart F. A territorial system that uses a dividend exemption. The dividends derived from foreign active business income can be repatriated without U.S. tax. This system would continue to tax Subpart F income, as do most countries with territorial systems. A worldwide system with full inclusion. The current worldwide system is retained, but deferral of foreign active business income is eliminated. The current system serves as a benchmark against which to compare the alternatives. In applying the criteria of a good tax system, we examine the territorial system and the worldwide system with full inclusion for their effects on efficiency, equity, and complexity relative to the effects of the current system. Figure 5 illustrates how the basic design of the full inclusion worldwide system and the territorial system affects the taxes that corporations pay. As shown in figure 5, Country A has a worldwide tax system that taxes income of its domestic corporations, and that of foreign corporations earned within its borders, at the same 35-percent rate. The domestic corporation and the subsidiary of the foreign corporation each pay $35 in taxes to Country A. Additionally, Country A taxes the income of the foreign subsidiaries of its corporations at the same 35-percent rate. However, in this case, it provides a credit for taxes paid to the country in which the subsidiary operates. The subsidiary gets a $15 credit for the tax it pays to Country B, and subtracts this amount from the $35 tax liability that it owes its home Country A. The total tax paid by the subsidiary is $15 to Country B plus the $20 net tax that it pays at home, for an overall tax of $35. For the worldwide system, taxes paid are the same for corporations operating within Country A and for its corporations operating abroad. In a territorial system, income is taxed only by the country in which it is earned. In figure 5, Country B has a territorial system that imposes a 15-percent tax on corporations that operate within its borders. The domestic corporation and the subsidiary of the foreign corporation remit the same tax payment of $15 to Country B on $100 of income earned there. 
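The arithmetic behind the figure 5 example can be restated in a short, illustrative calculation. The sketch below is not part of the original analysis; the $100 of income and the 35-percent and 15-percent rates come from the figure, while the two functions are our simplification (a flat foreign tax credit capped at the home-country liability, with no deferral, expense allocation, or other limits).

```python
# Minimal sketch of the figure 5 example: $100 earned in Country B (15 percent
# rate) by a subsidiary of a parent in Country A (35 percent rate).

def worldwide_with_credit(income, home_rate, foreign_rate):
    """Home country taxes the foreign income but credits foreign tax paid,
    limited to the home-country liability on that income."""
    foreign_tax = income * foreign_rate
    home_liability = income * home_rate
    credit = min(foreign_tax, home_liability)
    residual_home_tax = home_liability - credit
    return foreign_tax, residual_home_tax, foreign_tax + residual_home_tax

def territorial(income, foreign_rate):
    """Only the country where the income is earned taxes it."""
    foreign_tax = income * foreign_rate
    return foreign_tax, 0.0, foreign_tax

print(worldwide_with_credit(100, 0.35, 0.15))  # (15.0, 20.0, 35.0): $15 to B, $20 to A, $35 total
print(territorial(100, 0.15))                  # (15.0, 0.0, 15.0): $15 to B only
```

Under the worldwide design the subsidiary’s total tax equals the home-country rate wherever it operates; under the territorial design the total equals the rate of the country where the income is earned.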
Unlike the worldwide system, the territorial system imposes no tax on the income of the foreign subsidiaries of its own corporations. For the territorial system, taxes paid are equal for corporations operating within Country B, but differ for corporations operating across borders. As discussed above, the experts we interviewed agreed, and economic theory suggests, that any corporate tax system’s overall effect on efficiency depends on its relative effect on different types of investment decisions. The full inclusion system is likely to increase investment location efficiency relative to the current and territorial systems. Under the current system, investment abroad has a lower tax cost when repatriation of the foreign source income is deferred. Full inclusion would eliminate this tax advantage and would be more consistent with efficient location decisions. When the foreign tax rate is lower than the U.S. tax rate, domestic corporations under full inclusion pay taxes on income earned at home and abroad at the same rate—the U.S. tax rate. The system does not provide incentives either to invest abroad or at home (i.e., it is neutral with respect to the location of investment). The territorial system, on the other hand, may increase investment location inefficiencies by making permanent the location incentives that arise when countries adopt different tax rates. However, as previously noted in this report, these inefficiencies may be offset to some extent by improved ownership efficiencies and reduced incentives to move corporate residences abroad. Corporations operating under a territorial system pay the same tax rate on income from their operations in each country. This would eliminate any tax advantage that would allow a less efficient owner in that country to acquire the more productive corporation. While the full inclusion system may not distort location decisions, it may distort decisions about who owns the foreign subsidiaries. Some advocates of the territorial system argue that efficiency gains from eliminating this ownership distortion can offset any efficiency losses from distorted location decisions. The experts whom we interviewed and the research that we reviewed agreed that both the territorial and full inclusion systems eliminate the efficiency cost of the lockout effect that exists under the current system. Full inclusion eliminates the lockout effect by making foreign-earned income taxable without repatriation. The territorial system eliminates the lockout effect by making foreign-earned income tax free, whether or not it is repatriated. The effect of the lockout’s elimination could be significant because of possibly large efficiency costs due to growing accumulations of income abroad. Some research has suggested that income-shifting incentives should be significantly reduced under full inclusion, but may be increased under the territorial system. Researchers have found evidence of extensive income shifting under the current system. Full inclusion nearly eliminates incentives for income shifting because corporations pay the same rate under full inclusion, regardless of where the income is located and the timing of its repatriation. The territorial system, on the other hand, would likely increase the incentive to shift income to lower-tax countries because income earned abroad would be exempt from tax, even when repatriated. The relative effects of the alternatives on compliance and administrative burden depend on the specifics of their design. 
As described above, the current system is complex, and it imposes compliance and administrative burden by requiring extensive calculations and adjustments involving foreign tax credits, sourcing rules for income and expenses, and transfer-pricing rules to limit income shifting. Some research has shown that, while a full inclusion system would reduce the benefits and scope for income shifting, it would also retain some of the current system’s burden, such as the foreign tax credit and sourcing rules for income and expenses. The territorial system, by increasing income-shifting incentives, may require provisions to protect the tax base that can add considerable complexity to the tax code. The degree of complexity relative to the current system will depend in part on how much of the current rules are maintained or expanded in the new system. However, because both the territorial and full inclusion systems remove any incentive to delay repatriation, they would eliminate the compliance and administrative burden due to the repatriation tax planning that occurs under the current system. Based on the research and revenue estimates we reviewed, the revenue raised by each system depends on the specifics of its design. How much of the potential worldwide tax revenue a country gets depends on its tax system’s incentives to relocate income to a lower tax rate country. Some research has noted that under full inclusion, the corporation has no incentive to relocate income to a lower tax country unless it has excess foreign tax credits. However, under a territorial tax system, the corporation has an incentive to move income to a lower tax rate foreign country. Based on these incentives, it would appear that revenue for the home country is likely to decrease when a country moves from a worldwide to a territorial system. However, the relative effects on revenue ultimately depend on the details of the design. For example, some features of a territorial system’s design are implemented specifically to limit revenue losses. Under its territorial system, Japan imposes a per-country minimum tax, which means that corporations will lack incentives to locate income in a country with a tax rate below the minimum. Other features of a tax system that affect revenues include changes in income and expense allocation rules that would increase foreign source income attributed to the home country under a territorial system. Appendix III: IRS Statistics of Income Data on the Number of C Corporations by Taxable Income and Size of Business Receipts, 2010. [The appendix table, which reported estimated numbers of C corporations by taxable income bracket and associated statutory tax rate (brackets ranging up to over $18,333,333) and by size of business receipts, is not reproduced here. Table notes indicated that some data were combined with a lower size class to avoid disclosure for specific corporations, and that some cells had no sample observations, so an appropriate margin of error could not be calculated for them.] James R. White, (202) 512-9110 or whitej@gao.gov. In addition to the contact name above, Kevin Daly (Assistant Director), Jason Vassilicos (Analyst-in-Charge), Elwood D. White, JoAnna Berry, Robert Gebhart, Eric Gorman, Lois Hanshaw, Benjamin Licht, Ed Nannenhorn, Karen O’Conor, Kathleen Padulchick, Robert Robinson, Stewart W. 
Small, Anne Stevens, and Jim Wozny all made contributions to this report. | Congress and the administration are reexamining tax expenditures used by corporations as part of corporate tax reform. These tax expenditures--special exemptions and exclusions, credits, deductions, deferrals, and preferential tax rates--support federal policy goals, but result in revenue forgone by the federal government. GAO was asked to examine issues related to certain tax expenditures. This report uses GAO's tax expenditures evaluation guide to determine what is known about: (1) the deferral of income for controlled foreign corporations; (2) deferred taxes for certain financial firms on income earned overseas; and (3) the graduated corporate income tax rate. GAO combined the two deferral provisions for evaluation purposes. GAO's guide suggests using five questions to evaluate a tax expenditure: (1) what is its purpose and is the purpose being achieved; (2) does it meet the criteria for good tax policy; (3) how is it related to other federal programs; (4) what are its consequences for the federal budget; and (5) how is its evaluation being managed? To address these questions, GAO reviewed the legislative history and relevant academic and government studies, analyzed 2010 Internal Revenue Service (IRS) data, and interviewed agency officials and tax experts. Deferral: Both deferral tax expenditures confer the benefit of effectively reducing taxes by delaying the taxation of certain income of foreign subsidiaries of U.S. corporations until it is repatriated to the U.S. parent as dividends. 1. While views on the purpose of deferral have changed over time, it is currently often viewed by experts as promoting the competitiveness of U.S. multinational corporations. Some experts argue that this view is too narrow. For example, this definition of competitiveness ignores the effect on other corporations that cannot use deferral, such as those that are purely domestic or that export without foreign subsidiaries. Further, it ignores impacts on the wider economy. 2. Good tax policy has several dimensions. By delaying the tax on foreign source income, deferral could distort corporate investment and location decisions in ways that lower taxes but favor less productive activities over more productive ones. Informed judgments about deferral's effect on the fairness of the tax system cannot be made because who benefits from deferral, after accounting for such factors as changes in prices and wages, has not been determined. However, there is widespread agreement among experts and IRS that deferral adds complexity to the tax code. 3. GAO did not identify other federal spending programs that provide similar support to U.S. multinational corporations. 4. Joint Committee on Taxation (JCT) 2011 estimates show relatively modest consequences for the federal budget. 5. No federal agency has been tasked with evaluating deferral. Graduated corporate income tax rate schedule: The graduated schedule provides lower tax rates for corporations with less than $10 million in taxable income. 1. The purpose of the graduated corporate income tax rate schedule is viewed by the sources GAO reviewed as supporting small businesses. However, evidence is mixed on whether it achieves this purpose. The tax rates may not be well targeted toward supporting small businesses because corporations that are large in terms of assets and gross receipts may have taxable income that is small enough to qualify for the rates. 2. 
The economic efficiency of the graduated rates depends on whether they correct for a market failure, such as too few small businesses forming given their potential for profit and innovation, that offsets the possible distortions from advantaging one type of business organization over others. GAO did not identify any studies of the graduated rates' efficiency effects or any that specifically estimate the distribution of their benefits. According to IRS staff, the graduated rates present little complexity, although some evidence of tax planning to avoid higher rates has been found. 3. The graduated rates may be related to a number of federal spending programs also targeted to small businesses. 4. JCT 2011 estimates also show modest consequences for the federal budget. 5. No federal agency has been tasked with evaluating the graduated rates. GAO made no recommendations. Treasury, IRS, the Joint Committee on Taxation, and external experts provided technical comments that were incorporated, as appropriate. |
The tax gap is an estimate of the difference between the taxes—including individual income, corporate income, employment, estate, and excise taxes—that should have been paid voluntarily and on time and what was actually paid for a specific year. The estimate is an aggregate of estimates for the three primary types of noncompliance: (1) underreporting of tax liabilities on tax returns; (2) underpayment of taxes due from filed returns; and (3) nonfiling, which refers to the failure to file a required tax return altogether or on time. IRS’s tax gap estimates for each type of noncompliance include estimates for some or all of the five types of taxes that IRS administers. As shown in table 1, underreporting of tax liabilities accounted for most of the tax gap estimate for tax year 2001. IRS has estimated the tax gap on multiple occasions, beginning in 1979, relying on its Taxpayer Compliance Measurement Program (TCMP). IRS did not implement any TCMP studies after 1988 because of concerns about costs and burdens on taxpayers. Recognizing the need for current compliance data, in 2002 IRS implemented a new compliance study called the National Research Program (NRP) to produce such data for tax year 2001 while minimizing taxpayer burden. IRS has concerns with the certainty of the tax gap estimate for tax year 2001 in part because some areas of the estimate rely on old data, IRS has no estimates for other areas of the tax gap, and it is inherently difficult to measure some types of noncompliance. IRS used data from NRP to estimate individual income tax underreporting and the portion of employment tax underreporting attributed to self-employed individuals. The underpayment segment of the tax gap is not an estimate, but rather represents the tax amounts that taxpayers reported on time but did not pay on time. Other areas of the estimate, such as corporate income tax and employer-withheld employment tax underreporting, rely on decades-old data. Also, IRS has no estimates for corporate income, employment, and excise tax nonfiling or for excise tax underreporting. In addition, it is inherently difficult for IRS to observe and measure some types of underreporting or nonfiling, such as tracking cash payments that businesses make to their employees, as businesses and employees may not report these payments to IRS in order to avoid paying employment and income taxes, respectively. IRS’s overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. IRS seeks to improve voluntary compliance through efforts such as education and outreach programs and tax form simplification. IRS uses its enforcement authority to ensure that taxpayers are reporting and paying the proper amounts of taxes through efforts such as examining tax returns and matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns it receives from third parties. IRS reports that it collected over $48 billion in fiscal year 2006 from noncompliant taxpayers it identified through its various enforcement programs. In spite of IRS’s efforts to improve taxpayer compliance, the rate at which taxpayers pay their taxes voluntarily and on time has tended to range from around 81 percent to around 84 percent over the past three decades. Any significant reduction of the tax gap would likely depend on an improvement in the level of taxpayer compliance. 
Congress has been encouraging IRS to develop an overall tax gap reduction plan or strategy that could include a mix of approaches like simplifying code provisions, increased enforcement, and reconsidering the level of resources devoted to enforcement. Some progress has been made toward laying out the broad elements of a plan or strategy for reducing the tax gap. On September 26, 2006, the U.S. Department of the Treasury (Treasury), Office of Tax Policy, released “A Comprehensive Strategy for Reducing the Tax Gap.” However, the document generally does not identify specific steps that Treasury and IRS will undertake to reduce the tax gap, the related time frames for such steps, or explanations of how much the tax gap would be reduced. Furthermore, the document mentioned the importance of establishing benchmarks against which progress on each step under the strategy could be measured. It said that after the fiscal year 2008 budget request was released, Treasury and IRS would issue more details in March or April 2007 about the steps they would take to reduce opportunities for evasion and address the tax gap. The 2008 budget request issued on February 5, 2007, suggested 16 legislative changes to expand or improve information reporting, improve compliance by businesses, strengthen tax administration, and expand penalties. It also proposed additional funding for new initiatives aimed at reducing the tax gap. No single approach is likely to fully and cost-effectively address noncompliance and therefore multiple approaches are likely to be needed. The tax gap has multiple causes; spans five types of taxes; and is spread over several types of taxpayers including individuals, corporations, and partnerships. Thus, for example, while simplifying laws should help when noncompliance is due to taxpayers’ confusion, enforcement may be needed for taxpayers who understand their obligations but decline to fulfill them. Similarly, while devoting more resources to enforcement should increase taxes assessed and collected, too great an enforcement presence likely would not be tolerated. Simplifying or reforming the tax code, providing IRS more enforcement tools, and devoting additional resources to enforcement are three major tax gap reduction approaches discussed in more detail below, but providing quality services to taxpayers plays an important role in improving compliance and reducing the tax gap. IRS taxpayer services include education and outreach programs, simplifying the tax process, and revising forms and publications to make them electronically accessible and more easily understood by diverse taxpayer communities. For example, if tax forms and instructions are unclear, taxpayers may be confused and make unintentional errors. Quality taxpayer services would also be a key consideration in implementing any of the approaches for tax gap reduction. For example, expanding enforcement efforts would increase interactions with taxpayers, requiring processes to efficiently communicate with taxpayers. Also, changing tax laws and regulations would require educating taxpayers about the new requirements in a clear, timely, and accessible manner. In 2006, we reported that IRS improved its two most commonly used services—telephone and Web site assistance— for the 2006 filing season. Increased funding financed some of the improvements, but a significant portion has been financed internally by efficiencies gained from increased electronic filing of tax returns and other operational improvements. 
Although quality service helps taxpayers comply, showing a direct relationship between quality service and compliance levels is very challenging. As required by Congress, IRS is in the midst of a study that is to result in a 5-year plan for taxpayer service activities, which is to include long-term quantitative goals and to balance service and enforcement. Part of the study focuses on the effect of taxpayer service on compliance. A Phase I report was issued in April 2006 and a Phase II report should be completed in fiscal year 2007, which is to include, among other things, a multiyear plan for taxpayer service activities and improvement initiatives. However, in deciding on the appropriate mix of approaches to use in reducing the tax gap, many factors or issues could affect strategic decisions. Among the broad factors to consider are the likely effectiveness of any approach, fairness, enforceability, and sustainability. Beyond these, our work points to the importance of the following: Measuring compliance levels periodically and setting long-term goals. A data-based plan is one key to closing the tax gap. To the extent that IRS can develop better compliance data, it can develop more effective approaches for reducing the gap. Regularly measuring the magnitude of, and the reasons for, noncompliance provides insights on how to reduce the gap through potential changes to tax laws and IRS programs. In July 2005, we recommended that IRS periodically measure tax compliance, identify reasons for noncompliance, and establish voluntary compliance goals. IRS agreed with the recommendations and established a voluntary tax compliance goal of 85 percent by 2009. Furthermore, we have identified alternative ways to measure compliance, including conducting examinations of small samples of tax returns over multiple years, instead of conducting examinations for a larger sample of returns for 1 tax year, to allow IRS to track compliance trends annually. The administration’s fiscal year 2008 budget proposal offers this idea by requesting funds to annually study compliance based on a smaller sample size than the 2001 NRP study. Considering the costs and burdens. Any action to reduce the tax gap will create costs and burdens for IRS; taxpayers; and third parties, such as those who file information returns. For example, withholding and information reporting requirements impose some costs and burdens on those who track and report information. These costs and burdens need to be reasonable in relation to the improvements expected to arise from new compliance strategies. Evaluating the results. Evaluating the actions taken by IRS to reduce the tax gap would help maximize IRS’s effectiveness. Evaluations can be challenging because it is difficult to isolate the effects of IRS’s actions from other influences on taxpayers’ compliance. Our work has discussed how to address these challenges, for example by using research to link actions with the outputs and desired effects. Optimizing resource allocation. Developing reliable measures of the return on investment for strategies to reduce the tax gap would help inform IRS resource allocation decisions. IRS has rough measures of return on investment based on the additional taxes it assesses. Developing such measures is difficult because of incomplete data on the costs of enforcement and collected revenues. Beyond direct revenues, IRS’s enforcement actions have indirect revenue effects, which are difficult to measure. 
However, indirect effects could far exceed direct revenue effects and would be important to consider in connection with continued development of return on investment measures. In general though, the effects of tax gap reduction by improving voluntary tax compliance can be quite large. For example, if the estimated 83.7 percent voluntary compliance rate that produced a gross tax gap of $345 billion in tax year 2001 had been 85 percent, this tax gap would have been about $28 billion less; if it had been 90 percent, the gap would have been about $133 billion less. Leveraging technology. Better use of technology could help IRS be more efficient in reducing the tax gap. IRS is modernizing its technology, which has paid off in terms of telephone service, resource allocation, electronic filing, and data analysis capability. However, this ongoing modernization will need strong management and prudent investments to maximize potential efficiencies. The administration’s fiscal year 2008 budget proposal requests additional funds under its Business Systems Modernization initiatives. Tax law simplification and reform both have the potential to reduce the tax gap by billions of dollars. The extent to which the tax gap would be reduced depends on which parts of the tax system would be simplified and in what manner, as well as how any reform of the tax system is designed and implemented. Neither approach, however, will eliminate the gap. Further, changes in the tax laws and system to improve tax compliance could have unintended effects on other tax system objectives, such as those involving economic behavior or equity. Simplification has the potential to reduce the tax gap for at least three broad reasons. First, it could help taxpayers to comply voluntarily with more certainty, reducing inadvertent errors by those who want to comply but are confused because of complexity. Second, it may limit opportunities for tax evasion, reducing intentional noncompliance by taxpayers who can misuse the complex code provisions to hide their noncompliance or to achieve ends through tax shelters. Third, tax code complexity may erode taxpayers’ willingness to comply voluntarily if they cannot understand its provisions or they see others taking advantage of complexity to intentionally underreport their taxes. Simplification could take multiple forms. One form would be to retain existing laws but make them simpler. For example, in our July 2005 report on postsecondary tax preferences, we noted that the definition of a qualifying postsecondary education expense differed somewhat among some tax code provisions, for instance with some including the cost to purchase books and others not. Making definitions consistent across code provisions may reduce taxpayer errors. Although we cannot say the errors were due to these differences in definitions, in a limited study of paid preparer services to taxpayers, we found some preparers claiming unallowable expenses for books. Further, the Joint Committee on Taxation suggested that such dissimilar definitions may increase the likelihood of taxpayer errors and increase taxpayer frustration. Another tax code provision in which complexity may have contributed to the individual tax gap involves the earned income tax credit, for which IRS estimated a tax loss of up to about $10 billion for tax year 1999. 
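The voluntary compliance arithmetic cited above can be reproduced with a simple calculation. The sketch below is ours, not part of the original estimates; the only inputs taken from the statement are the 83.7 percent compliance rate and the $345 billion gross tax gap for tax year 2001, and it assumes the gap scales proportionally with the noncompliance rate.

```python
# Back-of-the-envelope check of the compliance-rate figures cited above.
gross_gap = 345.0          # billions of dollars, tax year 2001
compliance_rate = 0.837

# Implied total tax liability: the gross gap is the noncompliant share of it.
true_liability = gross_gap / (1 - compliance_rate)   # roughly $2,117 billion

for target in (0.85, 0.90):
    new_gap = true_liability * (1 - target)
    print(f"at {target:.0%} compliance, the gap would be about ${new_gap:,.0f} billion, "
          f"or about ${gross_gap - new_gap:,.0f} billion less")
# at 85% compliance, the gap would be about $317 billion, or about $28 billion less
# at 90% compliance, the gap would be about $212 billion, or about $133 billion less
```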
Although some of this noncompliance may be intentional, we and the National Taxpayer Advocate have previously reported that confusion over the complex rules governing eligibility for claiming the credit could cause taxpayers to fail to comply inadvertently. The administration’s fiscal year 2008 budget proposes legislative language to simplify eligibility requirements for the credit as well as to clarify the uniform definition of a qualifying child. Another form of simplification could be to broaden the tax base while reducing tax rates, which could minimize incentives for not complying. This base-broadening could include a review of whether existing tax expenditures are achieving intended results at a reasonable cost in lost revenue and added burden and eliminating or consolidating those that are not. Among the many causes of tax code complexity is the growing number of preferential provisions in the code, defined in statute as tax expenditures, such as tax exemptions, exclusions, deductions, credits, and deferrals. The number of these tax expenditures has more than doubled from 67 in 1974 to 161 in 2006, and the sum of tax expenditure estimates rose to nearly $847 billion. Tax expenditures can contribute to the tax gap if taxpayers claim them improperly. For example, IRS’s recent tax gap estimate includes a $32 billion loss in individual income taxes for tax year 2001 because of noncompliance with these provisions. Simplifying these provisions of the tax code would not likely yield $32 billion in revenue because even simplified provisions likely would have some associated noncompliance. Nevertheless, the estimate suggests that simplification could have important tax gap consequences, particularly if simplification also accounted for any noncompliance that arises because of complexity on the income side of the tax gap for individuals. Despite the potential benefits that simplification may yield, these credits and deductions serve purposes that Congress has judged to be important to advance federal goals. Eliminating them or consolidating them likely would be complicated, and would likely create winners and losers. Elimination also could conflict with other objectives such as encouraging certain economic activity or improving equity. Similar trade-offs exist with possible fundamental tax reforms that would move away from an income tax system to some other system, such as a consumption tax, national sales tax, or value added tax. Fundamental tax reform would most likely result in a smaller tax gap if the new system has few tax preferences or complex tax code provisions and if taxable transactions are transparent. However, these characteristics are difficult to achieve in any system and experience suggests that simply adopting a fundamentally different tax system may not by itself eliminate any tax gap. Any tax system could be subject to noncompliance, and its design and operation, including the types of tools made available to tax administrators, will affect the size of any corresponding tax gap. Further, the motivating forces behind tax reform likely include factors beyond tax compliance, such as economic effectiveness, equity, and burden, which could in some cases carry greater weight in designing an alternative tax system than ensuring the highest levels of compliance. 
Changing the tax laws to provide IRS with additional enforcement tools, such as expanded tax withholding and information reporting, could also reduce the tax gap by many billions of dollars, particularly with regard to underreporting—the largest segment of the tax gap. Tax withholding promotes compliance because employers or other parties subtract taxes owed from a taxpayer’s income and remit them to IRS. Information reporting tends to lead to high levels of compliance because income taxpayers earn is transparent to them and IRS. In both cases, high levels of compliance tend to be maintained over time. Also, withholding and information reporting help IRS to better identify noncompliant taxpayers and prioritize contacting them, which enables IRS to better allocate its resources. However, designing new withholding or information reporting requirements to address underreporting can be challenging given that many types of income are already subject to at least some form of withholding or information reporting, underreporting exists in varied forms, and the requirements could impose costs and burdens on third parties. Figure 1 shows how much voluntary reporting compliance improves for income subject to withholding or information reporting. Once withholding or information reporting requirements are in place for particular types of income, compliance tends to remain high over time. For example, for wages and salaries, which are subject to tax withholding and substantial information reporting, the percentage of income that taxpayers misreport has consistently been measured at around 1 percent over time. In the past, we have identified a few specific areas where additional withholding or information reporting requirements could serve to improve compliance: Require more data on information returns dealing with capital gains income from securities sales. Recently, we reported that an estimated 36 percent of taxpayers misreported their capital gains or losses from the sale of securities, such as corporate stocks and mutual funds. Further, around half of the taxpayers who misreported did so because they failed to report the securities’ cost, or basis, sometimes because they did not know the securities’ basis or failed to take certain events into account that required them to adjust the basis of their securities. When taxpayers sell securities like stock and mutual funds through brokers, the brokers are required to report information on the sale, including the amount of gross proceeds the taxpayer received; however, brokers are not required to report basis information for the sale of these securities. We found that requiring brokers to report basis information for securities sales could improve taxpayers’ compliance in reporting their securities gains and losses and help IRS identify noncompliant taxpayers. However, we were unable to estimate the extent to which a basis reporting requirement would reduce the capital gains tax gap because of limitations with the compliance data on capital gains and because neither IRS nor we know the portion of the capital gains tax gap attributed to securities sales. Requiring tax withholding and more or better information return reporting on payments made to independent contractors. Past IRS data have shown that independent contractors report 97 percent of the income that appears on information returns, while contractors that do not receive these returns report only 83 percent of income. 
We have also identified other options for improving information reporting for independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to report separately on their tax returns the total amount of payments to independent contractors. Requiring information return reporting on payments made to corporations. Unlike payments made to sole proprietors, payments made to corporations for services are generally not required to be reported on information returns. IRS and GAO have contended that the lack of such a requirement leads to lower levels of compliance for small corporations. Although Congress has required federal agencies to provide information returns on payments made to contractors since 1997, payments made by others to corporations are generally not covered by information returns. Information reporting helps IRS to better allocate its resources to the extent that it helps IRS better identify noncompliant taxpayers and the potential for additional revenue that could be obtained by contacting these taxpayers. For example, IRS officials told us that receiving information on basis for taxpayers’ securities sales would allow IRS to determine more precisely taxpayers’ income for securities sales through its document matching programs and would allow it to identify which taxpayers who misreported securities income have the greatest potential for additional tax assessments. Similarly, IRS could use basis information to improve both aspects of its examination program—examinations of tax returns through correspondence and examinations of tax returns face to face with the taxpayer. Currently, capital gains issues are too complex and time consuming for IRS to examine through correspondence. However, IRS officials told us that receiving cost basis information might enable IRS to examine noncompliant taxpayers through correspondence because it could productively select tax returns to examine. Also, having cost basis information could help IRS identify the best cases to examine face to face, making the examinations more productive while simultaneously reducing the burden imposed on compliant taxpayers who otherwise would be selected for examination. Withholding and information reporting lead to high levels of compliance. Designing new requirements to address underreporting would need to address the challenge that many types of income, including wages and salaries, dividend and interest income, and income from pensions and Social Security are already subject to withholding or substantial information reporting. Also, challenges arise in establishing new withholding or information reporting requirements for certain other types of income that are extensively underreported. Such underreporting may be difficult to determine because of complex tax laws or transactions or the lack of a practical and reliable third-party source to provide information on the taxable income. For example, while withholding or information reporting mechanisms on nonfarm sole proprietor and informal supplier income would likely improve their compliance, comprehensive mechanisms that are practical and effective are difficult to identify. As shown in figure 1, this income is not subject to information reporting, and these taxpayers misreported about half of the income they earned for tax year 2001. 
Informal suppliers by definition receive income in an informal manner through services they provide to a variety of individual citizens or small businesses. Whereas businesses may have the capacity to perform withholding and information reporting functions for their employees, it may be challenging to extend withholding or information reporting responsibilities to the individual citizens that receive services, who may not have the resources or knowledge to comply with such requirements. Finally, implementing tax withholding and information reporting requirements generally imposes costs and burdens on the businesses that must implement them, and, in some cases, on taxpayers. For example, expanding information reporting on securities sales to include basis information will impose costs on the brokers who would track and report the information. Further, trying to close the entire tax gap with these enforcement tools could entail more intrusive recordkeeping or reporting than the public is willing to accept. The administration’s proposed budget for fiscal year 2008 has 16 legislative proposals on tax gap reduction of which 7 relate to expanded information reporting. Two of these proposals involve information reporting on payments to corporations and on the cost basis of security sales, which we discussed earlier in this section of the testimony. The administration also proposes requiring a certified tax identification number from nonemployee service providers (contractors), increased information reporting for certain government payments for property and services, and increased information return penalties. We have done past work related to these proposals and suggested them as options for reducing the tax gap. The remaining 2 proposals would expand broker information reporting and require information reporting on merchant card payment reimbursements. The 7 proposals relating to information reporting account for virtually all the revenue that the budget request’s 16 tax gap legislative proposals are projected to raise. The 16 proposals are expected to raise about $29 billion over 10 years, or about 1 percent per year of the 2001 net tax gap amount of $290 billion. About 98 percent of the $29 billion would come from the information reporting proposals. About 85 percent would come from 3 of them—those relating to payments to corporations, basis reporting on security sales, and merchant payment card reimbursements. Devoting more resources to enforcement has the potential to help reduce the tax gap by billions of dollars, as IRS would be able to expand its enforcement efforts to reach a greater number of potentially noncompliant taxpayers. However, determining the appropriate level of enforcement resources to provide IRS requires taking into account many factors, such as how effectively and efficiently IRS is currently using its resources, how to strike the proper balance between IRS’s taxpayer service and enforcement activities, and competing federal funding priorities. If Congress were to provide IRS more enforcement resources, the amount of the tax gap that could be reduced depends in part on the size of any increase in IRS’s budget, how IRS would manage any additional resources, and the indirect increase in taxpayers’ voluntary compliance that would likely result from expanded IRS enforcement. Given resource constraints, IRS is unable to contact millions of additional taxpayers for whom it has evidence of potential noncompliance. 
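As a quick check of the figures above, the per-year and percentage relationships can be verified directly. The sketch below is illustrative only; its inputs are the $29 billion 10-year estimate, the $290 billion net tax gap for 2001, and the 98 and 85 percent shares cited in this statement.

```python
# Simple verification of the budget-proposal figures cited above.
total_raised_10yr = 29.0    # billions over 10 years
net_tax_gap_2001 = 290.0    # billions

per_year = total_raised_10yr / 10
print(f"${per_year:.1f} billion per year is {per_year / net_tax_gap_2001:.1%} of the 2001 net tax gap")
print(f"information reporting proposals: about ${0.98 * total_raised_10yr:.1f} billion of the total")
print(f"three largest proposals: about ${0.85 * total_raised_10yr:.1f} billion of the total")
# $2.9 billion per year is 1.0% of the 2001 net tax gap
```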
With additional resources, IRS would be able to assess and collect additional taxes and further reduce the tax gap. In 2002, IRS estimated that a $2.2 billion funding increase would allow it to take enforcement actions against potentially noncompliant taxpayers it identifies but cannot contact and would yield an estimated $30 billion in revenue. For example, IRS estimated that it contacted about 3 million of the over 13 million taxpayers it identified as potentially noncompliant through its matching of tax returns to information returns. IRS estimated that contacting the additional 10 million potentially noncompliant taxpayers it identified, at a cost of about $230 million, could yield nearly $7 billion in potentially collectible revenue. We did not evaluate the accuracy of the estimate, and as will be discussed below, many factors suggest that it is difficult to estimate reliably net revenue increases that might come from additional enforcement efforts. Although additional enforcement funding has the potential to reduce the tax gap, the extent to which it would help depends on several factors. First, and perhaps most obviously, the amount of tax gap reduction would depend in part on the amount of additional resources. The degree to which revenues would increase from expanded enforcement depends on many variables, such as how quickly IRS can ramp up efforts, how well IRS selects the best cases to be worked, and how taxpayers react to enforcement efforts. Estimating those revenue increases would require assumptions about these and other variables. Because actual experience is likely to diverge from those assumptions, the actual revenue increases are likely to differ from the estimates. The lack of reliable key data compounds the difficulty of estimating the likely revenues. To the extent possible, obtaining better data on key variables would provide a better understanding of the likely results with any increased enforcement resources. With additional resources for enforcement, IRS would be able to assess and collect additional taxes, but the related tax gap reductions may not be immediate. If IRS uses the resources to hire more enforcement staff, the reductions may occur gradually as IRS is able to hire and train the staff. Also, several years can elapse after IRS assesses taxes before it actually collects these taxes. Similarly, the amounts of taxes actually collected can vary substantially from the related tax amounts assessed through enforcement actions by the type of tax or taxpayer involved. In a 1998 report, we found that 5 years after taxes were assessed against individual taxpayers with business income, 48 percent of the assessed taxes had been collected, whereas for the largest corporate taxpayers, 97 percent of assessed taxes had been collected. These various factors need to be taken into account in estimating revenue to be obtained from increased funding. In doing such estimates for its fiscal year 2007 budget, IRS accounted for several factors, including opportunity costs because of training, which draws experienced enforcement personnel away from the field; differences in average enforcement revenue obtained per full-time employee by enforcement activity; and differences in the types and complexity of cases worked by new hires and experienced hires. IRS forecasted that in the first year after expanding enforcement activities, the additional revenue to be collected is less than half the amount to be collected in later years. 
This example underscores the logic that if IRS is to receive a relatively large funding increase, it likely would be better to provide it in small but steady amounts. The amount of tax gap reduction likely to be achieved from any budget increase also depends on how well IRS can use information about noncompliance to manage the additional resources. Because IRS does not have compliance data for some segments of the tax gap and others are based on old data, IRS cannot easily track the extent to which compliance is improving or declining. IRS also has concerns with its information on whether taxpayers unintentionally or intentionally fail to comply with the tax laws. Knowing the reasons for taxpayer noncompliance can help IRS decide whether its efforts to address specific areas of noncompliance should focus on nonenforcement activities, such as improved forms or publications, or enforcement activities to pursue intentional noncompliance. To the extent that compliance data are outdated and IRS does not know the reason for taxpayer noncompliance, IRS may be less able to target resources efficiently to achieve the greatest tax gap reduction at the least taxpayer burden. IRS has taken important steps to better ensure efficient allocation and use. For example, the NRP study has provided better data on which taxpayers are most likely to be noncompliant. IRS is using the data to improve its audit selection processes in hopes of reducing the number of audits that result in no change, which should reduce unnecessary burden on compliant taxpayers and increase enforcement staff productivity (as measured by direct enforcement revenue). As part of an effort to make the best use of its enforcement resources, IRS has developed rough measures of return on investment in terms of tax revenue that it assesses from uncovering noncompliance. Generally, IRS cites an average return on investment for enforcement of 4:1, that is, IRS estimates that it collects $4 in revenue for every $1 of funding. Where IRS has developed return on investment estimates for specific programs, it finds substantial variation depending on the type of enforcement action. For instance, the ratio of estimated tax revenue gains to additional spending for pursuing known individual tax debts through phone calls is 13:1, versus a ratio of 32:1 for matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns. In addition to returns on investment estimates being rough, IRS lacks information on the incremental returns on investment from pursuing the “next best case” for some enforcement programs. It is the marginal revenue gain from these cases that matters in estimating the direct revenue from expanded enforcement. Developing such measures is difficult because of incomplete information on all the costs and all the tax revenue ultimately collected from specific enforcement efforts. Because IRS’s current estimates of the revenue effects of additional funding are imprecise, the actual revenue that might be gained from expanding different enforcement efforts is subject to uncertainty. Given the variation in estimated returns on investment for different types of IRS compliance efforts, the amount of tax gap reduction that may be achieved from an increase in IRS’s resources would depend on how IRS allocates the increase. 
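The distinction between average and marginal return on investment can be illustrated with a stylized example. The case yields below are hypothetical and are not drawn from IRS data; only the idea that IRS works the most productive cases first, so that returns diminish as enforcement expands, reflects the discussion above.

```python
# Stylized illustration of average versus marginal return on investment.
# Hypothetical revenue yield from each successive $1,000 of enforcement
# spending, ordered best case first (diminishing returns).
marginal_yield = [32_000, 20_000, 13_000, 8_000, 4_000, 1_000]

spending = revenue = 0
for y in marginal_yield:
    spending += 1_000
    revenue += y
    print(f"spend ${spending:,}: average ROI {revenue / spending:.1f}:1, "
          f"marginal ROI {y / 1_000:.0f}:1")
# The average ratio remains high even as the marginal ratio falls toward 1:1,
# which is why average ROI alone can overstate the payoff from expansion.
```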
Although it might be tempting to allocate resources heavily toward areas with the highest estimated return, allocation decisions must take into account diverse and difficult issues. For instance, although one enforcement activity may have a high estimated return, that return may drop off quickly as IRS works its way through potential noncompliance cases. In addition, IRS dedicates examination resources across all types of taxpayers so that all taxpayers receive some signal that noncompliance is being addressed. Further, issues of fairness can arise if IRS focuses its efforts only on particular groups of taxpayers. Beyond direct tax revenue collection, expanded enforcement efforts could reduce the tax gap even more, as widespread agreement exists that IRS enforcement programs have an indirect effect through increases in voluntary tax compliance. The precise magnitude of the indirect effects of enforcement is not known with a high level of confidence given challenges in measuring compliance; developing reasonable assumptions about taxpayer behavior; and accounting for factors outside of IRS’s actions that can affect taxpayer compliance, such as changes in tax law. However, several research studies have offered insights to help better understand the indirect effects of IRS enforcement on voluntary tax compliance and show that they could exceed the direct effect of revenue obtained. As table 2 shows, the administration’s budget request for fiscal year 2008 proposes additional revenue-producing initiatives, legislative proposals, and non-revenue-producing initiatives. The revenue-producing initiatives generally would fund additional staff to enforce tax laws; the legislative proposals include, for example, new information return requirements that would increase revenue; and the non-revenue-producing initiatives generally would fund infrastructure and Business Systems Modernization changes to support IRS operations. Over the 3 years, the requested funding decreases while the estimated resulting revenue increases. About $410 million is requested for fiscal year 2008 to fund all of these initiatives, which are estimated to bring in about $695 million in increased revenue that year. The estimated cost for the initiatives declines to $355 million in fiscal years 2009 and 2010 and the projected revenues increase to about $2.6 billion in 2010. Costs decline due to start up costs applying only in fiscal year 2008. Revenues increase in part due to improved annual returns from the hiring, training, and deployment of additional staff, but more so due to the phase in of the legislative proposals, particularly the information reporting requirements. The legislative proposals alone are estimated to produce $1.9 billion of the $2.6 billion total additional revenues expected to come from the administration’s budget proposals in fiscal year 2010. The revenue effects of the revenue-producing initiatives exclude the likely deterrent effect from IRS enforcement programs as well as any improvement in voluntary compliance due to improved taxpayer services. The revenues expected from these initiatives are small compared to the estimated $290 billion net tax gap for tax year 2001. For instance, all of the revenue-producing initiatives coming largely from additional enforcement staffing are expected to yield about $699 million in fiscal year 2010, or about one-quarter of 1 percent of the tax year 2001 net tax gap. 
In 2010, the total estimated increased revenues from both the revenue-producing and legislative initiatives, or about $2.6 billion, is about 0.9 percent of the 2001 net tax gap. When taxpayers do not pay all of their taxes, honest taxpayers carry a greater burden to fund government programs and the nation is less able to address its long-term fiscal challenges. Thus, reducing the tax gap is important, even though closing the entire tax gap is neither feasible nor desirable because of costs and intrusiveness. All of the approaches I have discussed have the potential to reduce the tax gap alone or in combination, and no single approach is clearly and always superior to the others. As a result, IRS needs a strategy to attack the tax gap on multiple fronts with multiple approaches. The various proposals in the administration’s budget request raise modest dollar amounts compared to the size of the tax gap. This underscores the likelihood that a wide variety of efforts will be needed to make significant progress in addressing the tax gap. We look forward to seeing the administration’s expanded outline of steps it will be taking. Mr. Chairman and Members of the Committee, this concludes my testimony. I would be happy to answer any question you may have at this time. For further information on this testimony, please contact Michael Brostek on (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Thomas Short, Assistant Director; Jeffrey Arkin; Elizabeth Fan; Ronald Jones; Lawrence Korb; and Ellen Rominger. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The tax gap--the difference between the tax amounts taxpayers pay voluntarily and on time and what they should pay under the law--has been a long-standing problem in spite of many efforts to reduce it. Most recently, the Internal Revenue Service (IRS) estimated a gross tax gap for tax year 2001 of $345 billion and estimated it would recover $55 billion of this gap, resulting in a net tax gap of $290 billion. When some taxpayers fail to comply, the burden of funding the nation's commitments falls more heavily on compliant taxpayers. Reducing the tax gap would help improve the nation's fiscal stability. For example, each 1 percent reduction in the net tax gap would likely yield $3 billion annually. GAO was asked to discuss the tax gap, various approaches to reduce it, and what the proposed budget for fiscal year 2008 says about it. This testimony discusses the need for taking multiple approaches and to what extent the tax gap could be reduced through three overall approaches--simplifying or reforming the tax system, providing IRS with additional enforcement tools, and devoting additional resources to enforcement. This statement is based on prior GAO work. Multiple approaches are needed to reduce the tax gap. No single approach is likely to fully and cost-effectively address noncompliance since, for example, it has multiple causes and spans different types of taxes and taxpayers. 
Simplifying or reforming the tax code, providing IRS more enforcement tools, and devoting additional resources to enforcement are three major approaches. Moreover, providing quality services to taxpayers is a necessary foundation for voluntary compliance. Such steps as periodically measuring noncompliance and its causes, setting tax gap reduction goals, optimizing the allocation of IRS's resources, and leveraging technology to enhance IRS's efficiency would also contribute to tax gap reduction. Simplifying the tax code or fundamental tax reform has the potential to reduce the tax gap by billions of dollars. IRS has estimated that errors in claiming tax credits and deductions for tax year 2001 contributed $32 billion to the tax gap. Thus, considerable potential exists. However, these provisions serve purposes Congress has judged to be important and eliminating or consolidating them could be complicated. Fundamental tax reform would most likely result in a smaller tax gap if the new system has few, if any, exceptions (e.g., few tax preferences) and taxable transactions are transparent to tax administrators. These characteristics are difficult to achieve, and any tax system could be subject to noncompliance. Withholding and information reporting are particularly powerful tools to reduce the tax gap. They could help reduce the tax gap by billions of dollars, especially if they make underreported income transparent to IRS. These tools have led to high, sustained levels of taxpayer compliance and improved IRS resource allocation by helping IRS identify and prioritize its contacts with noncompliant taxpayers. As GAO previously suggested, reporting the cost, or basis, of securities sales is one option to improve taxpayers' compliance. However, designing additional withholding and information reporting requirements may be challenging given that many types of income are already subject to reporting, underreporting exists in many forms, and withholding and reporting requirements impose costs on third parties. Devoting additional resources to enforcement has the potential to help reduce the tax gap by billions of dollars. However, determining the appropriate level of IRS enforcement resources requires considering such factors as how well IRS uses its resources and the proper balance between taxpayer service and enforcement activities. If Congress provides IRS more enforcement resources, the amount of tax gap reduction would depend on factors such as the size of budget increases and the indirect increase in taxpayers' voluntary compliance resulting from expanded enforcement. The recent budget request for fiscal year 2008 proposes legislation and new initiatives to reduce the tax gap but expected dollar gains are modest. Further reductions likely would require many more such changes. |
On May 4, 2000, the National Park Service initiated a prescribed burn on federal land at Bandelier National Monument, New Mexico, in an effort to reduce the threat of wildfires in the area. The plan was to burn up to 900 acres. On May 5, 2000, the prescribed burn exceeded the capabilities of the National Park Service, spread to other federal and nonfederal land, and was characterized as a wildfire. On May 13, 2000, the President issued a major disaster declaration, and subsequently, the Secretary of the Interior and the National Park Service assumed responsibility for the fire and the loss of federal, state, local, tribal, and private property. The fire, known as the Cerro Grande fire, burned approximately 48,000 acres in four counties and two Indian pueblos, destroyed over 200 residential structures, and forced the evacuation of more than 18,000 residents. On July 13, 2000, the President signed CGFAA into law. Under CGFAA, each claimant is entitled to be compensated by the United States government for certain injuries and damages that resulted from the Cerro Grande fire. CGFAA required that FEMA promulgate and publish implementing regulations for the Cerro Grande program within 45 days of enactment of the law. On August 28, 2000, FEMA published Disaster Assistance: Cerro Grande Fire Assistance: Interim Final Rule in the Federal Register (Interim Rule). FEMA modified the Interim Rule with a set of implementing policies and procedures on November 13, 2000. FEMA updated these policies and procedures in January and March 2001. After reviewing public comments on the Interim Rule, FEMA finalized and published Disaster Assistance: Cerro Grande Fire Assistance Final Rule (Final Rule) on March 21, 2001. The Congress initially appropriated $455 million to FEMA for the payment of such claims and $45 million for the administration of the Cerro Grande program. In March 2002, FEMA requested, but did not receive, additional appropriated funding of $80 million to cover additional claims and administrative costs. In December 2002, FEMA revised its estimate and requested additional appropriated funding of $155 million, including $5 million for administrative costs. The revised estimate was based on more complete claim information since the final date to submit claims had passed on August 28, 2002. In February 2003, FEMA was appropriated an additional $90 million, of which up to $5 million may be made available for administrative purposes. FEMA stated that only $2 million was used in fiscal year 2003 for administrative purposes. In October 2003, FEMA received an additional appropriation of $38.062 million, of which 5 percent may be made available for administrative costs. After FEMA allocated a specific amount for administrative costs, it had a maximum of $578.6 million available for the payment of claims under CGFAA. During the audit, FEMA provided revised claim data that reflected the amounts shown in table 1. The claimed amounts that FEMA approved for payment through September 9, 2003, included $51.5 million of approved subrogation claims. Pending claims included expected payments for individual, business, governmental, and pueblo claims, and projected liabilities consisted of potential future appeals, potential arbitrations, and contingency for judicial review. CGFAA requires that FEMA submit an annual report to the Congress that provides information about claims submitted under the act. 
This annual report is to include the amounts claimed, a description of the nature of the claims, and the status or disposition of the claims, including the amounts paid. FEMA’s report is to be issued annually by August 28. CGFAA, as amended, requires that we conduct annual audits on the payment of all claims made and report the results of the audits to the Congress within 120 days of FEMA’s issuance of its annual report. The act also requires that our report include a review of all subrogation claims for which insurance companies have been paid. In May 2003, we issued our second report on the audit of Cerro Grande claim payments made from inception through August 28, 2002. The report stated that FEMA properly processed and paid claims but overstated amounts paid in its report to the Congress and made two recommendations regarding the reconciliation of the approved and paid amounts in its payment approval and accounting systems. FEMA issued its most recent annual report on August 28, 2003, with claim amounts approved for payment through June 30, 2003. In performing our review, we considered the Standards for Internal Control in the Federal Government. To reaffirm our understanding of the claim review and payment process established by OCGFC and to follow up on the changes made to this process since our last report, we interviewed FEMA officials and analyzed data used in FEMA’s annual report to the Congress and data used by FEMA to determine the estimated claim liability. We also reviewed the following: the requirements of CGFAA; the final regulations published in the Federal Register; FEMA’s policies and procedures manual; a summary of FEMA’s unpaid claim liability estimates as of September 9, 2003; FEMA’s fiscal year 2002 audited financial statements; and the fiscal year 2003 approvals, payments, and other documentation concerning the Cerro Grande program and submitted claims. Finally, we selected a statistical sample from the population of all partial and final claimed amounts approved for payment from August 28, 2002, through June 30, 2003, to determine whether FEMA processed, approved, and paid the Cerro Grande fire claims in accordance with its applicable policies and procedures. We selected a dollar unit (statistical) sample of 99 intervals representing 84 claims totaling $21,000,928 that were approved for payment from a population of 1,868 reported partial and final claim amounts that had been approved for payment from August 28, 2002, through June 30, 2003 (FEMA’s cutoff date for the annual report to the Congress), to test specific control activities, such as adequacy of supporting documentation, evidence of claims manager and approving official review, and actual payment by FEMA. We obtained and reviewed related supporting documentation for the approved claim payments that were selected from OCGFC’s payment approval system. In order to follow up on FEMA’s corrective actions to address our prior year recommendations, as well as determine whether FEMA properly reported claim payment information to the Congress, we reviewed OCGFC’s reconciliations of claimed amounts that were approved by OCGFC for payment from its payment approval system and the actual claim payments made by FEMA’s Disaster Finance Center (DFC) and reported in FEMA’s accounting system. Our work was conducted in Denton, Texas, and Washington, D.C., from September 2003 through October 2003 in accordance with generally accepted government auditing standards. 
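A minimal sketch of how dollar unit (monetary unit) sampling of this kind selects claims, assuming hypothetical claim identifiers, amounts, and interval count rather than the actual Cerro Grande population: each equal interval of total approved dollars yields one selection, so larger claims are more likely to be drawn and can be hit more than once, which is why 99 intervals can correspond to only 84 distinct claims.

import random

def dollar_unit_sample(claims, num_intervals, seed=0):
    # Systematic monetary unit sampling: split the total dollars into equal
    # intervals, pick one dollar position per interval (with a random start),
    # and return the claims containing those positions, with hit counts.
    total = sum(amount for _, amount in claims)
    interval = total / num_intervals
    random.seed(seed)
    start = random.uniform(0, interval)
    targets = [start + i * interval for i in range(num_intervals)]

    selected = {}
    cumulative = 0.0
    t = 0
    for claim_id, amount in claims:
        cumulative += amount
        while t < len(targets) and targets[t] <= cumulative:
            selected[claim_id] = selected.get(claim_id, 0) + 1
            t += 1
    return selected

# Hypothetical population of approved claim amounts (claim id, dollars).
population = [("CG-0001", 1200), ("CG-0002", 45000), ("CG-0003", 880),
              ("CG-0004", 310000), ("CG-0005", 5600), ("CG-0006", 72000),
              ("CG-0007", 950), ("CG-0008", 18000)]
print(dollar_unit_sample(population, num_intervals=4))  # larger claims dominate the sample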
We requested agency comments on a draft of this report from the Under Secretary of the Department of Homeland Security’s Emergency Preparedness and Response Directorate or his designee. The Director of the Recovery Division of the Department of Homeland Security’s Emergency Preparedness and Response Directorate provided written comments on our draft, which are reprinted in appendix I. We discuss the written comments in the “Agency Comments and Our Evaluation” section of this report. Based on the results of our statistical testing, FEMA processed, approved, and paid its claims in accordance with its guidelines that were established and in place at the time the claims were reviewed and processed. FEMA’s guidelines for the approval and payment of claims specify the following steps. An injured party submits a Notice of Loss (NOL) to OCGFC to initiate the claim payment process. Upon receipt of the NOL, a claim reviewer contacts the claimant to discuss the claim, explain the claims process, and determine the best means to substantiate the loss or damages. The claim reviewer then assigns a claim number and enters the information into OCGFC’s claim-processing database, the Automated Claim Information System, and begins the process of verifying the victim’s claim. Upon completion of this review, the claim reviewer prepares a claim payment recommendation package, which specifies that a claimant’s injuries or damages occurred as a result of the Cerro Grande fire and that claimed amounts are eligible for compensation under CGFAA. The claim reviewer also enters into the claim-processing database reserve amounts equal to the total claimed amounts that he or she expects to be paid. A claim supervisor reviews each recommendation package to ensure, among other things, that a proper investigation of the claim occurred and that the proper documentation exists, and then approves the package. Upon approval of the claim payment recommendation package, an Approval for Payment form is completed and sent to an OCGFC authorizing official for review and approval. The Comptroller receives a Schedule of Payments, consisting of amounts approved for payment, and reviews a sample of requested and approved payments. The Comptroller then approves the Schedule of Payments, records the approved amounts in OCGFC’s payment approval system, and sends the schedules to FEMA’s DFC for additional manual processing. FEMA records all payments in its accounting system, the Integrated Financial Management Information System, which is not linked to OCGFC’s payment approval system, before funds are disbursed by Treasury. In addition to the above steps from the claim review process, which is used for both partial payments and final payments, the claim reviewer prepares a Proof of Loss (POL) form prior to processing a final payment. This form summarizes all amounts recommended for payment, including those amounts previously paid through a partial payment. The POL form must be signed by the claimant subject to the provisions of 18 U.S.C. §1001, which establishes criminal penalties for false statements. Once a signed POL form is received, an OCGFC authorizing official sends a Letter of Final Determination to tell the claimant the total amount of compensation being offered under CGFAA. Accompanying this letter is a Release and Certification form that the claimant signs if he or she accepts the OCGFC compensation determination, thereby releasing the federal government from any additional claims arising from the Cerro Grande fire.
Upon receipt of the signed Release and Certification form, FEMA processes and mails a claimant’s final payment. Our review of the statistically selected sample of approved claims determined that the above steps were performed in accordance with FEMA’s established approval and payment process. For the period from August 29, 2002, through June 30, 2003, FEMA approved claims in the amount of $40 million for payment, including the partial payment of approved subrogation claims filed by insurance companies. FEMA addressed our prior recommendations related to reconciling the reported amounts approved for payment to amounts actually paid. FEMA reconciled the total amounts approved for payment to the amounts actually paid as of June 30, 2003, which was the cutoff date for its annual report, and as of August 31, 2003. FEMA officials stated they would continue to perform these reconciliations periodically. In reviewing FEMA’s annual report to the Congress required by CGFAA, we found that while improvements were made in how FEMA represented the detailed claim information in its report, claim information was no longer summarized in the report, making it less useful and transparent. In our May 2003 report, we recommended that FEMA reconcile the amounts approved for payment in its payment approval system to amounts actually paid in its accounting system and correct all identified errors in its payment approval system. We also recommended that FEMA perform monthly reconciliations of the approved and paid amounts for as long as both systems are used to track and report paid amounts or request additional funding. In response to our recommendations, OCGFC implemented a detailed process that consisted of compiling the claimed amounts by individual claims from both the accounting system and the payment approval system and matching the amounts approved and paid for each claim. It also performed an overall reconciliation of the total amounts approved and paid through June 30, 2003. In performing the detailed matching process, OCGFC compared the individual claim approval data and claim payment data from August 28, 2000, through February 19, 2003, and performed periodic comparisons of individual claims from February 20, 2003, to May 31, 2003, and from June 1 to June 30, 2003. When OCGFC identified differences between amounts reported as paid by FEMA’s accounting system and OCGFC’s payment approval system, it recorded the adjustments and corrections into its payment approval system to eliminate duplication and made note of data entry errors, which did not require adjustments. The identified differences generally consisted of the following: Duplicate approved amounts that represented a claimed amount that was approved for payment more than once. For example, in one case OCGFC approved a final payment of $934,802. Since the claimant appealed, the amount was not paid. While the claim was being appealed, OCGFC approved a partial payment of $919,802. Both of these amounts appeared as approved payments in its payment approval system. After we brought this duplicate approval to OCGFC’s attention during our prior review, it subsequently removed the original unpaid amount from its system during its claims comparison process later in fiscal year 2003. Data entry errors by claim reviewers when entering data into the payment approval or accounting systems. 
For example, if an incorrect identification number was used to process a payment, no comparison could be made between the approved amount recorded by claim number in the approval system and the paid amount recorded by claimant’s identification number in the accounting system. To resolve the situation, OCGFC listed the approved amount as an unmatched item until it located the matching payment that was recorded to a different identification number. In addition to these comparisons of individual claim amounts, FEMA’s Financial and Acquisition Management Division (FAMD) performed a complete reconciliation of the total amounts that were approved for payment and paid as of June 30, 2003. We reviewed this overall reconciliation of the total claim amounts approved and paid under CGFAA, from inception through June 30, 2003, and found that all corrections and adjustments identified during OCGFC’s comparison were made. FAMD also performed a similar reconciliation of total approved and paid amounts through August 31, 2003, and stated that it plans to perform this overall reconciliation more regularly, at least quarterly. In our prior report, we noted that FEMA improperly reported unreconciled amounts that were approved for payment in its prior annual reports to the Congress as amounts actually paid. In its 2003 annual report to the Congress, FEMA continued to include claimed amounts approved for payment in a detailed schedule of claim information, but it properly identified the amounts as such instead of as paid or expended amounts. However, FEMA’s report no longer summarized information on the amounts claimed, amounts approved, and the estimated liabilities for the remainder of the program. In addition, FEMA did not include any information on claims paid as is required under CGFAA. Without this information, FEMA’s annual report is less useful to the Congress and other stakeholders. FEMA strengthened its internal controls over its claim approvals and payments by implementing a process to reconcile its detailed approval information to its payment information. This helps improve the accuracy of claims information included in its systems and reports to the Congress. The usefulness and transparency of the report would be further improved if it included summary claim activity information. We recommend that the Secretary of Homeland Security direct the Under Secretary of the Emergency Preparedness and Response Directorate to include summary-level claim information in its annual report to the Congress, including amounts claimed, approved, and paid and remaining estimated program liabilities. FEMA, in a letter from the Director of the Recovery Division of the Department of Homeland Security’s Emergency Preparedness and Response Directorate, agreed with our recommendation to include summary-level claim information on amounts claimed, approved, and paid and remaining estimated program liabilities, if any, in its next annual report to the Congress. The Department of Homeland Security’s comments are reprinted in appendix I. We are sending copies of this report to the congressional committees and subcommittees responsible for issues related to FEMA and the Department of Homeland Security, the Secretary of the Department of Homeland Security, the Under Secretary of the Department of Homeland Security’s Emergency Preparedness and Response Directorate, and the Inspector General of the Department of Homeland Security. Copies will also be made available to others upon request. 
In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9508 or Steven Haughton, Assistant Director, at (202) 512-5999. The other key contributors to this assignment were Christine Fant and Estelle Tsay. | The Cerro Grande Fire Assistance Act (CGFAA) mandated that GAO annually audit all claim payments made to compensate the victims of the Cerro Grande Fire in northern New Mexico. For this third report on this topic, GAO determined whether the Federal Emergency Management Agency (FEMA), which is now a part of the Emergency Preparedness and Response Directorate of the Department of Homeland Security, (1) paid fire claims in accordance with applicable guidance and (2) implemented corrective actions to address prior GAO recommendations, including determining if FEMA properly reported claim payments to the Congress. FEMA processed and paid its claims in accordance with its policies and procedures that were established and in place at the time the claims were reviewed and processed. For the period from August 29, 2002, to June 30, 2003, FEMA approved $40 million in claims for payment, including the initial payment of most of the subrogation claims made by insurance companies. In response to GAO's May 2003 report, FEMA established processes to address GAO's recommendations related to reconciling claimed amounts approved with actual amounts paid. These processes included comparing the individual amounts approved for payment to the amounts actually paid from inception through June 30, 2003, by claim, as well as performing complete reconciliations of total amounts that were approved and paid for the same period and as of August 31, 2003. In addition, in our prior report, we noted that FEMA improperly reported unreconciled claim amounts approved for payment as amounts actually paid. In its 2003 annual report, FEMA properly identified claimed amounts as approved amounts for payment rather than actual amounts paid in a schedule that was included in its annual report to the Congress.
However, FEMA no longer provided summary information on amounts claimed, amounts approved, and its remaining estimated liabilities in its report to the Congress and did not include any information on claims paid as is required by CGFAA. Without this information, the report is less useful to the Congress and other stakeholders. |
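A minimal sketch of the matching logic behind the reconciliation described in this report, assuming hypothetical claim numbers and amounts rather than FEMA's or OCGFC's actual systems: per-claim totals approved for payment in one system are compared with per-claim totals actually paid in the other, and matches, dollar differences, and items found in only one system are reported.

def reconcile(approved, paid, tolerance=0.01):
    # Compare per-claim totals approved for payment with per-claim totals
    # actually paid; report matches, dollar differences, and items that appear
    # in only one system (for example, a payment recorded under a different
    # identification number).
    results = {"matched": [], "differences": [], "approved_only": [], "paid_only": []}
    for claim_id, amount in approved.items():
        if claim_id not in paid:
            results["approved_only"].append((claim_id, amount))
        elif abs(amount - paid[claim_id]) <= tolerance:
            results["matched"].append(claim_id)
        else:
            results["differences"].append((claim_id, amount, paid[claim_id]))
    for claim_id, amount in paid.items():
        if claim_id not in approved:
            results["paid_only"].append((claim_id, amount))
    return results

# Hypothetical per-claim totals from the payment approval and accounting systems.
approved_totals = {"CG-0101": 15000.00, "CG-0102": 8200.00, "CG-0103": 4300.00}
paid_totals = {"CG-0101": 15000.00, "CG-0102": 7900.00, "CG-0104": 4300.00}
print(reconcile(approved_totals, paid_totals))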
The Resource Conservation and Recovery Act (RCRA) requires EPA to identify which wastes should be regulated as hazardous waste under subtitle C and establish regulations to manage them. For example, hazardous waste landfills, such as those used for disposing ash from hazardous waste incinerators, generally must comply with certain technological requirements. These requirements include having double liners to prevent groundwater contamination as well as groundwater monitoring and leachate collection systems. In 1980 the Congress amended RCRA to, among other things, generally exempt cement kiln dust from regulation under subtitle C, pending EPA’s completion of a report to the Congress and subsequent determination on whether regulations under subtitle C were warranted. The Congress required that EPA’s report on cement kiln dust include an analysis of (1) the sources and the amounts of cement kiln dust generated annually, (2) the present disposal practices, (3) the potential danger the disposal of this dust poses to human health and the environment, (4) the documented cases of damage caused by this dust, (5) the alternatives to current disposal methods, (6) the costs of alternative disposal methods, (7) the impact these alternatives have on the use of natural resources, and (8) the current and potential uses of cement kiln dust. As of May 1994, there were about 115 cement kiln facilities operating in 37 states and Puerto Rico. Of these, 24 were authorized to burn hazardous waste to supplement their normal fuel. Even with the 1980 exemption, certain aspects of cement kilns’ operations must comply with some environmental controls. Under the Clean Air Act, EPA requires cement kiln facilities to comply with ambient air quality standards for particulate matter. Under the Clean Water Act, EPA regulates the discharge of wastewater and storm water runoff from cement kiln facilities. Under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA or Superfund), EPA can require cement kiln facilities to clean up contamination resulting from cement kiln dust. In August 1991, EPA’s regulations for boilers and industrial furnaces that burn hazardous waste took effect. While every cement kiln that burns hazardous waste is subject to these regulations, its dust is not classified as hazardous waste if at least 50 percent (by weight) of the materials the kiln processes are normal cement-production raw materials and the kiln’s owner or operator demonstrates that burning hazardous waste does not significantly affect the toxicity of the dust. EPA Office of Solid Waste officials said that they are not aware of any of the 24 cement kilns authorized to burn hazardous waste that are required to manage the dust as a hazardous waste. Despite these existing controls, in making its regulatory determination in February 1995, EPA stated that additional controls over cement kiln dust are warranted under RCRA because of its potential to harm human health and the environment. EPA also determined that existing regulations, such as those under the Clean Air Act, may need to be improved because they are not tailored to cement kiln dust or because their implementation is inconsistent among the states. As partial justification, EPA cited 14 cases in which cement kiln dust has damaged groundwater and/or surface water and 36 cases in which cement kiln dust has damaged the air.
EPA also cited the general lack of groundwater monitoring systems around dust management units at cement kiln facilities and the current lack of federal regulations to protect groundwater from the risks posed by cement kiln dust. Furthermore, after collecting and analyzing site-specific information, EPA concluded that potential risks did exist at some facilities. Although in 1980 the Congress directed EPA to complete its report on cement kiln dust by 1983 and to determine within 6 months thereafter whether regulations were warranted, EPA did not do so. It completed its report in December 1993 and issued its determination in February 1995. EPA officials said that the agency did not meet these statutory deadlines because, at that time, EPA viewed completing its report on cement kiln dust as a lower priority than other work. According to EPA’s Acting Chief of the Special Wastes Branch, the agency ranked completing its report and determination on cement kiln dust a low priority because cement facilities were considered to pose minimal risk, given the very small proportion of them on EPA’s National Priorities List. In addition, cement kiln dust exists in smaller volumes in comparison to other high-volume wastes that EPA was required to study, such as wastes from mining for ores and minerals and exploring for oil and gas. EPA wanted to complete studies of these high-volume, temporarily exempt wastes prior to completing its study on cement kiln dust. For example, EPA estimated that the mining industry generated 1.3 billion metric tons of waste in 1982, and it completed its study on these wastes in 1985. EPA officials said that they also needed to meet other statutory time frames for completing standards for other wastes that the agency placed a higher priority on, such as treatment standards for land disposal of hazardous waste. In settlement of a 1989 lawsuit filed against EPA because of its failure to comply with the statutory time frames, EPA entered into a consent decree to publish a report to the Congress on cement kiln dust on or before December 31, 1993. This decree also called for EPA to make a regulatory determination on cement kiln dust by January 31, 1995. RCRA specifically authorizes EPA to modify several requirements that apply to hazardous waste in regulating cement kiln dust. EPA is authorized to modify those requirements that would impose minimum technological standards on new landfills or expansions of existing landfills as well as those that impose corrective action to clean up releases of wastes from units used to dispose of cement kiln dust. EPA is authorized to modify these requirements to accommodate practical difficulties associated with implementing them when disposing of cement kiln dust as well as such site-specific characteristics as the area’s climate, geology, hydrology, and soil chemistry. However, any modifications must ensure the protection of human health and the environment. Although RCRA allows EPA to modify several requirements and thus propose different standards for cement kiln dust than those for hazardous waste, EPA has not yet determined which standards might differ and how they might differ. For example, according to Office of Solid Waste officials, it is not clear whether EPA will include a corrective action requirement to clean up releases from cement kiln dust disposal units that is similar to its corrective action requirement to clean up hazardous waste disposal units.
These officials said that EPA will likely focus its management standards on dust generated in the future, as opposed to dust that already exists at cement kiln facilities, because RCRA allows EPA to consider several factors in developing standards for cement kiln dust management, including the impact or cost any management standard may have on the cement kiln industry. Furthermore, these officials said that EPA has to be sensitive to the Congress’s regulatory reform efforts as well as the agency’s goal of taking a more common sense approach to regulating industry. Even though EPA has determined that additional controls are warranted over dust from cement kilns burning hazardous waste as well as dust from those kilns that do not, it has not determined if it will impose the same standards or controls over dust from both types of kilns. EPA’s analysis found that concentrations of 12 metals in dust from both types of cement kilns were at higher than normally occurring levels. Dust from cement kilns burning hazardous waste had concentrations of nine of these metals that were the same or lower than dust from cement kilns that did not burn hazardous waste. Conversely, EPA found that concentrations of three metals—cadmium, chromium, and lead—were higher in dust from cement kilns that burn hazardous waste. (See app. I.) Even though the concentrations of these three metals were higher, EPA found that these increases did not result in discernible differences in risk estimates between dust generated by cement kilns that burn hazardous waste and those that do not. EPA also analyzed the extent to which these metals leached, or washed, out of the dust and found no significant difference between cement kilns that burn hazardous waste and those that do not burn this waste. Although EPA has not yet determined what management standards it will impose on cement kiln dust, Office of Solid Waste officials said that the agency may regulate air emissions from cement kilns burning hazardous waste differently from those that do not burn hazardous waste. According to these officials, because dioxins and furans were found in dust from cement kilns burning hazardous waste, EPA is considering revising its regulations for boilers and industrial furnaces to control their emissions. Even though the levels of these hazardous wastes were generally low, EPA believes their presence warrants concern. Even though EPA did not conclude that cement kiln dust should be classified as a hazardous waste, EPA did conclude that some facilities (in addition to those where damage to surface and/or groundwater and the air has been found) do have the potential to pose a threat to human health and the environment. While EPA plans to propose a program to control cement kiln dust within 2 years, if the agency proceeds with developing federal regulations, it could be several more years after that until cement kilns are required to implement these controls. Interim and possible final actions to reduce the current threat that cement kiln dust may pose at some facilities include requiring the cement kiln industry to adopt dust control standards without EPA’s first having to proceed through a lengthy regulatory development process and making greater use of existing regulatory authority to control cement kiln dust. One action EPA is considering to control this dust is the use of a cement kiln industry proposal called an enforceable agreement. 
After drafting the general terms of the agreement, the cement kiln industry has been working with EPA and other interested parties to negotiate what controls would be needed to protect human health and the environment. Some possible industry controls are to require landfills used to dispose of cement kiln dust to have such site-specific features as hydrogeological assessments, groundwater monitoring, surface water management, and measures to control emissions of cement kiln dust. The agreement would also specify that EPA would not impose subtitle C regulations on cement kiln dust. EPA is currently analyzing the agreement’s general terms to determine if it is allowable under RCRA and whether it would sufficiently protect human health and the environment. EPA’s consideration of this enforceable agreement to manage cement kiln dust has triggered a negative response from environmental groups. For example, the Environmental Defense Fund has questioned EPA’s authority to enter into these agreements and their enforceability if EPA does not first develop regulations that contain specific standards. In addition, the Fund questions whether these agreements would provide the same level of protection as federal regulations and whether they would allow for the public involvement that occurs in developing regulations. The Fund also questions how these agreements would affect the citizens’ ability to sue and to obtain information through the Freedom of Information Act and whether these agreements would limit federal and state criminal and civil enforcement authorities. Finally, the Fund questions whether these agreements would limit the development of state programs to control cement kiln dust. According to an Office of Solid Waste official, EPA intends to decide by late September 1995 whether it will pursue developing enforceable agreements to control cement kiln dust. Should this approach be challenged in the courts, however, controls over cement kiln dust could be further delayed. A second action under consideration is for EPA and the states to make greater use of existing regulatory authority to control cement kiln dust. Although EPA has determined that current regulations need to be improved for the proper management of cement kiln dust, in the past EPA regional offices and the states have used existing authorities at some facilities to control surface water runoff, emissions from dust piles, and groundwater contamination (i.e., the damage cases mentioned earlier). For example, according to an environmental inspector in Ohio, the state used an enforcement authority under its Remedial Response Act to better control runoff from waste piles that was contaminating a nearby stream. According to a waste management official in Michigan, the state used enforcement authority under its Air Pollution Control Act to better control emissions from dust piles. EPA has also used the Superfund program to clean up groundwater contamination at two facilities. In the course of completing its regulatory determination, EPA’s Office of Solid Waste collected information on 83 cement kiln facilities and conducted a series of studies on risk-screening and site-specific risk-modeling that could be used to determine whether existing regulatory authority should be used to control cement kiln dust at particular cement kilns. 
On the basis of the information collected and analyzed, EPA projected that several cement kiln facilities may be posing a high risk because of such factors as the amount of metals that may exist in dust disposed at those facilities, the lack of dust management controls at those facilities, and other facility-specific factors, such as proximity to agricultural lands. However, EPA’s Office of Solid Waste has not provided the results of its risk-screening and risk-modeling studies to other EPA offices or the states that are responsible for investigating facilities and taking necessary enforcement actions. (See app. II for additional information on the results of these studies.) According to Office of Solid Waste officials, much of this information is available in the public docket and EPA’s contractor has the computer tapes that were used to develop the risk estimates. However, because they did not believe that most facilities posed the degree of risk that warranted emergency action, they did not provide this information directly to EPA’s Office of Enforcement and Compliance Assurance, its regional officials, or state enforcement officials. EPA’s RCRA officials in four regions with cement kilns whose dust potentially poses a risk to groundwater said they would be interested in having the facility-specific information EPA’s Office of Solid Waste developed to prepare its report and determination. They said that they could provide the information to state environmental officials for the states’ use or could take enforcement action themselves if the regions believed the situation warranted it. In those instances in which EPA or the states lack clear enforcement authority, other actions, such as assessing facilities to better understand the risks and working cooperatively with cement kiln owners/operators to reduce these risks, could be taken. Similarly, EPA air and water officials said they would be interested in having facility-specific information for these purposes. It may be several years before EPA completes its management control program for cement kiln dust regardless of whether it decides to issue new regulations or adopt the use of an enforceable agreement to control this dust. EPA obtained information on 83 cement kiln facilities that it used to conduct a series of risk-screening and site-specific risk-modeling studies. While this information is readily available and much of it is in the public docket, EPA has not distributed it to EPA’s regional or state enforcement officials because the agency did not believe that the estimated risks warranted emergency action. Even so, EPA believes that some facilities, because of the manner in which their cement kiln dust is managed, could pose a risk. EPA regional and state enforcement officials believe that this information could assist them in determining if action should be taken at some facilities prior to EPA’s finalizing its management program to control cement kiln dust. We recommend that the Administrator, EPA, provide to EPA’s regional officials and state enforcement officials the risk-screening and site-specific risk-modeling information developed during its study of cement kiln dust so they can use this information to determine whether interim actions are needed to protect human health and the environment. We provided a draft of this report to EPA for its comments. 
We met with EPA officials, including the Acting Director, Waste Management Division, Office of Solid Waste, who generally concurred with the information presented in this report. They agreed that it would be appropriate for them to provide EPA’s regional officials and state enforcement officials information that may be useful to determine whether action should be taken to reduce the risks posed at cement kiln facilities prior to the agency’s finalizing its management program to control dust from cement kilns. Office of Solid Waste officials also suggested we clarify certain technical points. We have revised the report accordingly. To determine what priorities EPA set for making its regulatory determination on cement kiln dust, we interviewed officials from EPA’s Special Wastes Branch in its Waste Management Division, Office of Solid Waste. To determine if EPA is authorized to modify hazardous waste management requirements in regulating cement kiln dust, we reviewed RCRA and EPA’s regulatory determination on cement kiln dust. To determine whether EPA believes that dust from cement kilns that burn hazardous waste should be regulated the same as dust from those not burning such waste, we reviewed EPA’s Report to Congress on Cement Kiln Dust, its regulatory determination, and public comments received on that report as well as on other documents. We also discussed the basis for EPA’s determination with its Special Wastes Branch officials as well as officials representing the hazardous waste industry, the cement kiln industry, and environmental groups. To determine whether interim actions could be taken to control cement kiln dust while EPA is developing its management control program, we reviewed EPA’s legal authority for taking action at facilities that may pose a threat to human health and the environment, reviewed cases in which EPA or the states have used this authority in the past, and discussed EPA’s risk-screening and risk-modeling results with Office of Solid Waste officials. We also discussed options EPA and the states have with Special Wastes Branch officials in the Office of Solid Waste, Office of Enforcement and Compliance Assurance officials, EPA attorneys, and EPA and state environmental enforcement officials. We conducted our review between March and June 1995 in accordance with generally accepted government auditing standards. As discussed with your office, this report does not address new information that you provided us recently relating to metals in cement kiln dust. We agreed that we will address that information separately. As arranged with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days after its publication. At that time, we will send copies of this report to the Administrator of EPA and make copies available to others upon request. Please contact me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix III. EPA used a model to analyze the effect cement kiln dust could have at 52 facilities if they did not have adequate dust suppression controls for their waste piles. EPA’s model projected that over half of these facilities would exceed EPA’s health standards for fine particulate matter at plant boundaries and, potentially, at nearby residences. 
Although almost all of these facilities have some controls to suppress cement kiln dust, EPA does not have information on the adequacy of these controls and EPA officials also noted that they saw cement kiln dust blowing during some visits to 20 facilities. EPA used the same model to analyze the effects of water running off of dust piles at 83 of the facilities. The model projected that 25 facilities could pose higher than acceptable cancer risks or noncancer threats to subsistence farmers and fishermen. Seven of these facilities did not have runoff controls. EPA also estimated that 19 facilities could pose a risk because of dioxins and furans. EPA cautioned, however, that these risk results were based on very limited sampling and modeled worst-case scenarios of unusually high dioxin and furan levels. EPA further cautioned that all of the results from its analyses of indirect exposure risks should be carefully interpreted because its model was still under peer review. Even so, Office of Solid Waste officials said that the results of all of EPA’s analyses were cause for concern. EPA’s analysis of the effects of cement kiln dust on groundwater found that about half of the cement kiln facilities were built on bedrock having characteristics that allow for the direct transport of groundwater offsite. In its analysis of 31 of these facilities, EPA found that dust from 13 of them could contaminate groundwater at levels that could exceed health standards. None of these 13 facilities had installed man-made liners under their dust piles and 11 lacked leachate collection systems. EPA also found that groundwater at three of these facilities was within 10 feet of the bottom of their dust piles; EPA did not have information on the depth to groundwater at the remaining 10 facilities. In addition, some facilities managed cement kiln dust in quarries that could subsequently fill with water; if this occurs, leachate could more readily contaminate groundwater. In addition to the potential risks from the disposal of cement kiln dust, EPA is concerned over the use of this dust as a substitute for lime to fertilize agricultural fields. According to EPA, this use of cement kiln dust could pose cancer risks and noncancer threats for subsistence farmers if that dust contains relatively high levels of metals and dioxins. Richard P. Johnson, Attorney; Gerald E. Killian, Assistant Director; Marcia B. McWreath, Evaluator-in-Charge; Rita F. Oliver, Senior Evaluator; Mary D. Pniewski, Senior Evaluator.
| Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) decisionmaking process with respect to regulating cement kiln dust, focusing on: (1) EPA priorities in making its kiln dust determination; (2) whether EPA is authorized to modify hazardous waste management requirements in regulating cement kiln dust; (3) whether EPA believes that cement kilns burning hazardous waste should be regulated the same as those not burning hazardous waste; and (4) whether interim actions can be taken to control cement kiln dust. GAO found that EPA: (1) does not give as high a priority to making a cement kiln dust determination as developing standards for other wastes considered to be of higher risk; (2) has the statutory authority to modify its hazardous waste regulations to control cement kiln dust as long as the regulations adequately protect human health and the environment; (3) believes that cement kiln dust from both types of kilns could adversely affect human health and the environment, if improperly managed; (4) has not yet determined whether it will subject the dust from the two types of kilns to the same regulations; and (5) is considering interim actions to control cement kiln dust, such as making greater use of existing regulatory authority to enforce controls over the dust and entering into an agreement with the cement kiln industry to impose additional controls over the dust. |
Congress established and chartered the enterprises—Fannie Mae and Freddie Mac—as government-sponsored enterprises (GSEs) that are privately owned and operated. Their mission is to enhance the availability of mortgage credit across the nation during both good and bad economic times by purchasing mortgages from lenders (banks, thrifts, and mortgage bankers) that use the proceeds to make additional mortgage loans to home buyers. The enterprises issue debt to finance some of the mortgage assets they retain in their portfolios. Most mortgages purchased by the enterprises are conventional mortgages, which have no federal insurance or guarantees. Enterprise purchases are subject to a conforming loan limit that currently stands at $300,700 for a single-unit home. The debt and mortgage assets in the enterprises’ portfolios are on-balance sheet obligations (liabilities) and assets, respectively. A majority of the mortgages, however, are placed in mortgage pools to support mortgage-backed securities (MBS) that may be sold to investors or repurchased by the enterprises and held in their portfolios. MBS are conduits for collecting principal and interest payments from mortgages in the mortgage pools and passing payments on to MBS investors. The enterprises charge fees for guaranteeing the timely payment of principal and interest on MBS held by investors. MBS held by investors other than the enterprises are off-balance sheet obligations of the enterprises. The federal government’s creation of and continued relationship with Fannie Mae and Freddie Mac have created the perception in financial markets that the government will not allow the enterprises to default on their debt and MBS obligations, although no such legal requirement exists. As a result, Fannie Mae and Freddie Mac can borrow money in the capital markets at lower interest rates than comparably creditworthy private corporations that do not enjoy federal sponsorship. At least a portion of the financial benefits that accrue to the enterprises has been passed along to homeowners in the form of lower mortgage interest rates. During the 1980s, the government did provide limited regulatory and financial relief to Fannie Mae when the enterprise was experiencing significant financial difficulties; and in 1987, Congress authorized $4 billion to bail out the Farm Credit System, another GSE. Recognizing the potentially large costs that Fannie Mae and Freddie Mac pose to taxpayers, Congress passed the act, which established OFHEO as an independent regulator within HUD tasked with ensuring the enterprises’ safety and soundness. OFHEO’s director has broad independent authority to ensure that OFHEO fulfills its safety and soundness mission. For example, the director has the authority to take supervisory and enforcement actions regarding the safety and soundness of the enterprises without the review and approval of the Secretary of HUD. The act requires OFHEO to carry out its oversight function both by establishing and enforcing minimum capital standards (including the risk-based capital standard) and by conducting annual on-site safety and soundness examinations of the enterprises to assess their operations and financial condition. The act established the broad outlines of a stress test and mandated that OFHEO develop its stress test within those parameters to serve as the basis for the risk-based capital standards.
The act requires the stress test to simulate situations that expose the enterprises to extremely adverse credit and interest rate scenarios over a 10-year period and to calculate the cash flows and the amount of capital the enterprises would need to continue to operate for the entire period. The stress test model must include upward and downward interest rate movements of up to 6 percentage points and assume a high level of credit risk (based on the worst cumulative credit loss for not less than 2 consecutive years in contiguous states encompassing at least 5 percent of the U.S. population). As required in the act, the stress test model that the agency developed estimates credit and interest rate risks, among other factors, and includes an additional 30 percent of that amount for management and operations risk. Also as required, it does not include new business assumptions. The act set a December 1, 1994, deadline for completion of the stress test and risk-based capital standards. HUD is the mission regulator of the enterprises. The Secretary of HUD has general regulatory power over the enterprises to ensure that they carry out their mission as stated in their charters. The act requires the Secretary to establish annual goals for purchases of mortgages on low- and moderate-income housing, special affordable housing, and housing in central cities, rural areas, and other under-served areas. When HUD establishes housing goals, it must look at several factors, including the need for the enterprises to remain in sound financial condition. For more information about HUD’s mission regulation, see appendix I. The enterprises are in the business of buying and holding mortgages and insuring mortgage cash flows to investors. New business accounts for a large share of the enterprises’ on- and off-balance sheet holdings and thus has a major impact on their activities and financial health. The financial health of the enterprises and their ability to survive a future stressful economic period depend on the level of risk in both their existing and new business; the amount of capital that is available to them to absorb any losses—their capital adequacy; and the business decisions they make during the stressful period. The enterprises continually acquire mortgages originated by lenders for home purchases and for refinancing existing mortgages (see table 1). Such mortgage acquisitions are a large share of total originations in any year and are often large relative to mortgage holdings at the end of the prior year. For example, the enterprises’ purchases equaled between 32 and 51 percent of total single-family originations in each year between 1991 and 2000. In addition, their yearly purchases between 1991 and 2000 ranged from about 17 to 51 percent of their total mortgage portfolios in the prior year. At the end of 2001, Fannie Mae’s total mortgage portfolio was $1.56 trillion and Freddie Mac’s $1.14 trillion, of which $705 billion and $492 billion, respectively, represented on-balance sheet mortgages. For more information about the financial performance of the enterprises, see appendix II. The enterprises face two primary risks, which are modeled in OFHEO’s stress test: interest rate risk and credit risk. The degree of risk depends on the enterprises’ operating and managerial decisions as well as on future economic factors such as interest rates, unemployment, inflation, and economic growth.
Although the enterprises can take risk-management measures to limit their exposure, the costs of these measures can reduce profits. Thus, risk management is usually associated with both a lower expected or average profit and reduced variability in profits and losses. Interest rate risk reflects both movements in interest rates and management decisions about how to fund mortgage acquisitions. In general for the enterprises, when market interest rates decline, mortgage purchases increase as homeowners move and pay off or refinance existing mortgages. Declining rates may also lower the enterprises’ funding costs. In contrast, rising market interest rates create higher interest expenses for the enterprises as debt turns over. Prolonged periods of rising interest rates typically lead to a slowdown in prepayments and refinancing activity, because interest rates on new mortgages are higher than those on most of the previously originated mortgages. If funding costs rise and existing on-balance sheet mortgages at old, lower interest rates remain on the books, prolonged periods of losses and capital erosion can occur. Enterprise management can use callable debt and other financial instruments or strategies to mitigate interest rate risk and other potential losses. However, such managerial decisions will tend to lower future expected profits. OFHEO’s current stress test, which assumes no new business over a 10-year period, simulates the impact of interest rate movements and economic conditions on the behavior of borrowers whose mortgages are held by the enterprises and therefore affect the enterprises’ cash flows. The model requires that the mortgage holdings wind down over the 10-year period. The extent to which existing business winds down shows the importance of new business, because in practice the enterprises would be acquiring new mortgages to replace lost mortgages during the period. In the stress test model, the remaining mortgage balance (and existing business) depends on scheduled payments of principal, other prepayments, and defaults. Prepayments are sensitive to interest rate changes because lower rates accelerate prepayments and higher rates depress them. Default losses are sensitive to economic conditions as well as to loan characteristics such as loan-to-value ratios and seasoning, because loans with lower loan-to-value ratios and seasoned loans are less likely to default. OFHEO ran its model for us for a portfolio of newly originated 30-year fixed-rate single-family mortgages with 95 percent loan-to-value ratios. About 4.7 percent of a portfolio of newly originated loans would still be on the enterprises’ books after 10 years in the declining interest rate environment mandated by the act. In the increasing rate environment mandated by the act, about 57.3 percent of a portfolio of newly originated loans would still be on the enterprises’ books after 10 years. OFHEO also ran the model on other portfolios with different loan-to-value ratios and degrees of seasoning, with similar results in both the declining and increasing rate environments. As previously stated, in actual practice the enterprises would replace many of the lost loans, and the types of mortgages acquired and types of financial instruments actually used to fund these new mortgages would significantly impact returns and risks. Credit risk reflects both economic conditions and management decisions about mortgage acquisitions. Deteriorating economic conditions can lower home values and reduce homeowners’ incomes, therefore increasing credit risk.
Likewise, management decisions to increase loan-to-value ratios or otherwise ease underwriting standards can raise credit risk. The OFHEO simulations also showed how the credit stresses in the mandated increasing and decreasing rate environments could affect credit- related losses. The simulations showed that, for a portfolio of newly originated single-family mortgages with a 95 percent loan-to-value ratio, defaults would account for a decline of about 19.5 percent of the original mortgage balances in the decreasing rate environment and a decline of about 16.2 percent of the original mortgage balances in the increasing rate environment. The enterprises could limit credit risk in several ways. For example, they could use more stringent underwriting standards, although such standards could limit the dollar volume of mortgages they are able to purchase and possibly affect their ability to support the residential mortgage market. For instance, requiring homeowners to make large down payments or purchase private mortgage insurance could make acquiring a mortgage more difficult for potential homeowners. Second, they could more aggressively monitor loans and work out problems with troubled loans. Third, the enterprises could mitigate the economic costs of defaults by raising the management guarantee fee they charge mortgage pools for providing credit insurance and managing the pools. Ultimately, however, management actions to limit credit risk might create expenses that would curtail expected future profits. Actual decisions about underwriting, mortgage monitoring, and guarantee fees would affect the returns and risks associated with the acquisition of new mortgages by the enterprises. Under the risk-based capital test, capital adequacy measures the amount of capital an enterprise needs to ensure that it can continue to operate during a stressful period and is based on expected profits and the risks taken to generate those profits. Generally speaking, risk and profit are positively related. For example, an enterprise can increase its expected profits by taking more risks, but taking greater risks can increase the possibility that the enterprise will not survive a stressful economic period due to losses incurred. An enterprise can increase its chances of surviving a stressful period by increasing its capital level, but such increases raise funding costs and reduce future expected profits. Capital adequacy also depends on the extent and duration of the economic stress that the enterprise might encounter. The greater the levels of stress an enterprise must endure and the longer the exposure to stress, the less likely it is that the enterprise will survive the stress period with a given level of capital. Incorporating new business assumptions into long-term financial planning models is difficult, primarily because doing so is inherently speculative. Incorporating such assumptions would require OFHEO to develop plausible scenarios for the future behavior of the enterprises—for example, the types of mortgages they might acquire, their future funding strategies, and other managerial decisions. In addition, OFHEO would have to consider HUD’s regulatory actions and their effect on the enterprises during the stress period. The difficulty of incorporating new business assumptions into a stress test is reflected in the fact that the enterprises do not include such assumptions in their own long-term capital adequacy models. 
Incorporating new business assumptions into long-term financial planning models is difficult, primarily because doing so is inherently speculative. Incorporating such assumptions would require OFHEO to develop plausible scenarios for the future behavior of the enterprises—for example, the types of mortgages they might acquire, their future funding strategies, and other managerial decisions. In addition, OFHEO would have to consider HUD's regulatory actions and their effect on the enterprises during the stress period. The difficulty of incorporating new business assumptions into a stress test is reflected in the fact that the enterprises do not include such assumptions in their own long-term capital adequacy models. The enterprises generally use new business assumptions only in models with relatively short time frames (up to 4 years). Finally, OFHEO's stress test is already highly complex. Adding new business assumptions would increase its complexity and make the legal requirement that it be replicable more difficult to meet.

An OFHEO stress test with new business assumptions would have to include explicit assumptions about the enterprises' strategic managerial behavior in a stressful economic environment. Management's behavior would have to be linked to hedging, which affects interest rate risk; underwriting, which affects credit risk; and setting guarantee fees, which affect earnings. Specifying management's behavior would be speculative, unlike the modeling of borrowers' behavior in the stress test. Borrowers' behavior related to mortgage prepayment and default can be and is predicted in the stress test, based on statistical techniques that are applied to historical data. Although this prediction is subject to statistical measurement errors, these techniques can be used to extrapolate borrower behavior in a stressful environment. However, because managerial behavior is idiosyncratic, such techniques cannot be used to extrapolate managerial behavior in a stressful environment from behavior in more normal economic environments. For instance, overall management strategies dealing with both interest rate and credit risks could either exacerbate risk exposures or mitigate such risks to various degrees. OFHEO would lack criteria, both statistical and theoretical, to justify assumptions about these strategies. Therefore, OFHEO would have to speculate about managerial behavior to develop new business assumptions.

In addition, because a significant proportion of the enterprises' mortgages either prepay or default over a 10-year period and are replaced by new business, the assumptions about new business could easily dominate the cash and capital flows in the stress test over the 10-year period. These assumptions could also determine whether the enterprises met or failed to meet the risk-based capital requirement.
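The contrast drawn above between borrower behavior, which can be estimated statistically from historical data, and managerial behavior, which cannot, can be illustrated with a stylized prepayment equation. The functional form and coefficients below are hypothetical placeholders; in practice such parameters would be estimated from historical prepayment experience.

```python
import math

# Stylized borrower-behavior equation of the kind that can be estimated from
# historical data. The intercept and slope below are hypothetical placeholders,
# not estimates from any actual loan data.

def annual_prepayment_probability(coupon_rate, market_rate,
                                  intercept=-2.5, incentive_slope=60.0):
    """Logistic response of prepayment to the borrower's refinancing incentive."""
    incentive = coupon_rate - market_rate  # positive when refinancing saves money
    z = intercept + incentive_slope * incentive
    return 1.0 / (1.0 + math.exp(-z))

# Falling rates: a strong incentive to refinance accelerates prepayments.
print(f"{annual_prepayment_probability(coupon_rate=0.075, market_rate=0.055):.1%}")
# Rising rates: borrowers keep their below-market loans, so prepayments slow.
print(f"{annual_prepayment_probability(coupon_rate=0.075, market_rate=0.095):.1%}")
```

No comparable historical relationship exists for estimating how enterprise management would hedge, underwrite, or price guarantees under stress, which is why new business assumptions would rest on speculation rather than estimation.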
In the legislative history of the act, Congress recognized that OFHEO would have to hypothesize about any new business assumptions that might be included in a stress test. Language in a Senate committee report explicitly recognized that incorporating new business assumptions during a stressful period would require speculating about enterprise behavior. The report recognized that any assumptions addressing new business in the stress test would also have to incorporate further assumptions about enterprise management's capacity to make suitable adjustments. The act requires the director to assume that the new business the enterprises conduct during the stress period will be consistent with either historical or recent experience and with the economic characteristics of the stress period. In particular, the director must make specific assumptions about five factors: the amounts and types of business, losses, interest rate risk, and reserves. These restrictions limit OFHEO's modeling assumptions, allowing for managerial response only after the advent of the stressful condition and requiring that the responses be consistent with the prior behavior of the enterprise. In other words, for purposes of the stress test OFHEO cannot assume that management will take actions in anticipation of stressful conditions, that management will be able to respond differently than it has previously under similar circumstances, or that management will respond promptly and effectively to stressful situations to maintain adequate capital.

In addition to speculating about the behavior of the enterprises' management, OFHEO would need to consider HUD's regulatory response to a stressful environment. HUD regulates the enterprises in terms of housing goals and other charter requirements not directly concerned with safety and soundness (see app. I for a detailed description of HUD's regulatory responsibilities and powers). However, when HUD establishes housing goals, it must look at several factors, including the need for the enterprises to remain in sound financial condition. If either enterprise's financial condition should falter, the Secretary of HUD would likely take regulatory actions to help the enterprise rather than allow it to withdraw entirely from the secondary mortgage market or from segments of the market governed by HUD's numeric goals. For modeling purposes, OFHEO would have to consider both the regulatory actions HUD might take to ensure that the enterprises continue to comply with the housing goals and the effects of such actions on management's approach to new business and risks. HUD's regulatory response could have a further effect on the model by constraining the enterprises and thus affecting managerial decisions at the enterprises.

The enterprises do not include new business in their long-term financial models (Fannie Mae) or in their capital adequacy models (Freddie Mac) because they believe that such assumptions would be speculative. Fannie Mae officials told us that it would not be reasonable to make new business assumptions beyond a window of several years, and the results of such a modeling approach might be more reflective of the assumptions themselves than of the actual risks faced by Fannie Mae. Freddie Mac's interest rate risk exposure is stated in terms of portfolio market value sensitivity, or the estimated percentage decline in Freddie Mac's market value of equity that results from a change in interest rates. Freddie Mac officials told us that another reason they do not include new business in their risk models is that they want to focus on the risks of the current book of business and not the profitability of new business. Although OFHEO probably would not rely solely on the enterprises' assumptions about new business, in the absence of such assumptions, OFHEO would still have to make plausible assumptions about the enterprises' behavior. The enterprises do include new business assumptions in the short-term models used to manage business on a day-to-day basis and for planning. The enterprises' short-term planning models typically focus on business strategies during time periods of no more than 4 years under economic stresses that are relatively normal compared with those in OFHEO's stress test. Some of these models used to analyze interest rate risk include new business assumptions.

Our review of information from regulators and three rating agencies (Standard & Poor's, Moody's, and Fitch) indicates that these entities do not use new business assumptions when evaluating the capital adequacy of financial institutions.
For example, the Federal Housing Finance Board (FHFB) has issued risk-based capital standards for the other housing GSE, the Federal Home Loan Bank (FHLBank) System. FHFB developed an approach based on the existing balance sheet that estimates the market value of the FHLBank's portfolio at risk under financial stress scenarios and thus does not require an assumption about new business. As another example, depository institution regulators have established capital requirements for credit risk that assign all existing assets and off-balance sheet items to broad categories of relative risk; these requirements do not incorporate new business assumptions. In addition, Standard & Poor's, Moody's, and Fitch assume no new business in their stress tests for rating private mortgage insurers. During our review, we identified one instance in which new business assumptions are included in a risk-based capital stress test. The Farm Credit Administration (FCA) has established risk-based capital standards for Farmer Mac, a relatively small GSE operating in the secondary market for agricultural mortgages. FCA includes replacement of paid-off agricultural mortgages in its 10-year stress test model.

The unique nature of OFHEO's risk-based standard, particularly the 10-year time frame and the specificity regarding stresses, makes the OFHEO stress test complex. Adding new business assumptions would increase this complexity by introducing more factors that could affect the behaviors modeled within the stress test and requiring that more behaviors be modeled. Adding more complexity would further limit the ability of analysts and others to understand and replicate the test, as the 1992 act requires. For example, the test would likely require refinements to take into account the dynamic behaviors of borrowers, investors in enterprise debt and MBS, and the enterprises over the 10-year period. These behaviors include shifts in borrower demand for fixed- and adjustable-rate mortgages, investors' willingness to take on the risk of alternative funding sources for the enterprises, and the enterprises' mortgage purchase and funding strategies. According to enterprise officials, the ability of individuals outside of OFHEO to understand and replicate the current stress test is already strained, even without new business assumptions. The inclusion of new business assumptions to predict the behaviors of parties such as mortgage originators, mortgage borrowers, investors, and the enterprises would exacerbate this situation.

To incorporate new business assumptions into its stress test, OFHEO could develop plausible scenarios for how enterprise management and the market might respond in a stressful environment, but depending on the assumptions, the capital requirement could be increased or decreased. The assumptions could dominate the capital requirement.

In the early 1990s, for example, prior to the creation of OFHEO, HUD analyzed each enterprise's capitalization using a stress test that incorporated a Depression scenario. HUD's analyses showed that incorporating new business resulted in higher capital requirements for the enterprises. HUD made two major assumptions that affected the result. First, HUD assumed that the enterprises would have difficulty determining exactly when a downturn would begin and projecting its length and severity. This assumption limited management's ability to mitigate risk. Second, HUD assumed that the enterprises would be required to provide ongoing and meaningful support to the secondary mortgage market during a prolonged period of severe economic conditions and therefore could not stop purchasing mortgages that might generate losses in a stressful environment.
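Under assumptions like HUD's, continuing to purchase mortgages throughout a severe downturn adds to cumulative losses and therefore to the capital needed at the start of the stress period. The sketch below is purely illustrative; the loss rate, runoff rate, and purchase volume are hypothetical assumptions, not figures from HUD's analysis.

```python
# Stylized illustration of the effect of HUD's second assumption: if the
# enterprises must keep purchasing mortgages throughout a prolonged downturn,
# credit losses accrue on a larger book for longer. All rates and volumes are
# hypothetical assumptions, not figures from HUD's analysis.

initial_book = 100.0     # $ billions of mortgages at the start of the stress period
annual_runoff = 0.10     # assumed share of the book that pays off each year
annual_loss_rate = 0.01  # assumed credit losses as a share of the book each year
new_purchases = 8.0      # assumed annual purchases if market support must continue

def cumulative_losses(keep_purchasing, years=10):
    """Total credit losses over the stress period, a rough proxy for the capital needed."""
    book, losses = initial_book, 0.0
    for _ in range(years):
        losses += annual_loss_rate * book
        book *= (1 - annual_runoff)
        if keep_purchasing:
            book += new_purchases
    return losses

print(f"wind-down only:              {cumulative_losses(False):4.1f}")
print(f"with continued new business: {cumulative_losses(True):4.1f}")
```

Because the book is continually replenished, losses accrue on a larger balance for longer, so the starting capital needed to absorb them is higher than in a pure wind-down.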
Other plausible scenarios could lead to assumptions showing that incorporating new business might mitigate risk and improve capital adequacy. According to Fannie Mae officials, for example, including new business using the enterprises' current underwriting standards and guarantee fees would result in a lower capital requirement. The officials pointed out that in a falling interest rate environment, the credit quality of an existing mortgage portfolio would typically decline as the less risky mortgages are refinanced and the more risky ones remain. Including new business that encompasses the newly refinanced mortgages would lower credit risk and thus result in a lower capital requirement. An alternative plausible set of assumptions could presume that the enterprises would change their business practices to reduce risks in a stressful environment. For example, during a stressful period, the enterprises might implement stricter underwriting standards and increase their guarantee fees in reaction to possible declines in mortgage credit quality.

While OFHEO's risk-based capital requirement is a key element in ensuring the enterprises' financial safety and soundness, other mechanisms also exist to limit risk-taking by the enterprises. The proposed Basle Accord revisions, which address banking supervision, list the three "mutually reinforcing pillars" that help ensure the financial safety and soundness of banks. These pillars—risk-based capital requirements (discussed in this report), market discipline, and supervisory review—should also be used to address safety and soundness oversight of the enterprises. Based on our work on bank and GSE safety and soundness supervision and our review of the proposed Basle Accord revisions, we have concluded that capital regulation in isolation does not provide sufficient oversight.

Market discipline can curb risky behavior by the enterprises to the extent that the enterprises' customers and creditors will demand that the enterprises stay financially strong in order to fulfill their obligations. Market discipline works best when firms fully and publicly disclose their financial conditions. Customers and creditors can then use the information to determine further interactions with the enterprises. In October 2000, the enterprises adopted six voluntary commitments aimed at increasing their disclosures. The commitments included, among other things, the issuance of subordinated debt and public disclosure of financial information. Enterprise officials stated that these commitments would improve transparency and market discipline because market participants could use the added information to better assess the financial condition of the enterprises. We acknowledge that financial disclosure under the voluntary commitments may improve transparency. However, its impact on the enterprises and their customers or funding parties is limited if the enterprises are perceived to have implicit government backing.
That is, other economic parties may believe that the federal government will ensure that the enterprises continue to operate and to perform satisfactorily on financial contracts such as loans and mortgage purchases. For this reason, while market discipline can play a role in curbing risky behavior by the enterprises, it also has its limitations.

Supervisory review thus takes on more importance as a means for limiting inappropriate risk-taking behavior by the enterprises. The proposed revision of the Basle Accord recognizes the supervisory review process as one of three "pillars" that contribute to safety and soundness in the financial system. OFHEO's supervisory review includes examining the operations of the enterprises and taking the supervisory and enforcement actions necessary to ensure that the enterprises are operating safely and soundly. In conjunction with other elements of supervision, supervisory review can also help ensure that the enterprises maintain sufficient capital to support the risks they undertake. Further, it can encourage the enterprises to develop and use better risk-management techniques to address the risks associated with both existing and new business. Supervisory review can focus on internal approaches to capital allocation and internal assessments that reflect management's own expectations about future business opportunities and risks without the need for OFHEO to impose its own assumptions about new business. In addition, supervisory review allows OFHEO to address the enterprises' management structure and business approaches to ensure that risk-management techniques and internal controls are appropriate and are protecting the public interest.

Risk-management practices that sufficiently limit the credit and interest rate risks associated with new business, together with adequate OFHEO supervision of those practices, can reduce the chances that an enterprise will take on risky new business that could jeopardize its capital adequacy in a stressful economic environment. For example, adequate supervision could inhibit an enterprise's attempt to "grow" its way out of a problem situation in a stressful environment by means such as lowering underwriting standards or relaxing risk-management controls that address interest rate risk. While adequate supervision is not guaranteed by the presence of OFHEO and its legal authorities, inclusion of speculative new business assumptions in the stress test—based on plausible managerial behavior—would not reduce the importance of adequate supervision.

OFHEO has several legal authorities that help in carrying out supervisory responsibilities relating to safety and soundness. These authorities include informal supervisory actions; formal enforcement actions involving notice to the affected enterprise; hearing opportunities; and, if warranted, imposition of sanctions such as cease and desist orders or civil monetary penalties. The Federal Deposit Insurance Corporation Improvement Act of 1991 mandates actions, known as prompt corrective actions (PCA), that depository institution regulators must take in response to specific capitalization levels. Similarly, the 1992 act contains PCA provisions that authorize and, depending on the level of undercapitalization, require OFHEO to take certain actions when an enterprise is undercapitalized. These mandates, which specify actions to be taken at certain levels of undercapitalization, limit the possibility that OFHEO might be lenient once an enterprise's capital cushion is impaired.
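A simplified rendering of how capital levels map to classifications, and hence to mandated responses, is sketched below. The classification names follow the general scheme of the 1992 act, but the decision logic is a simplification and the dollar amounts in the example are hypothetical.

```python
# Simplified rendering of the capital classification scheme that drives prompt
# corrective action under the 1992 act. The decision logic is a simplification
# and the dollar amounts in the example are hypothetical.

def classify(core_capital, total_capital, minimum_level, critical_level, risk_based_level):
    """Return an enterprise's capital classification."""
    if total_capital >= risk_based_level and core_capital >= minimum_level:
        return "adequately capitalized"
    if core_capital >= minimum_level:
        return "undercapitalized"  # meets the minimum but not the risk-based level
    if core_capital >= critical_level:
        return "significantly undercapitalized"
    return "critically undercapitalized"

# Hypothetical example: core capital above the minimum but total capital short
# of the risk-based requirement falls into the first PCA tier.
print(classify(core_capital=24.0, total_capital=26.0,
               minimum_level=22.0, critical_level=11.0, risk_based_level=28.0))
```

Each step down the classification ladder generally triggers stronger mandatory and discretionary supervisory responses, which is what limits the scope for regulatory forbearance.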
OFHEO has issued regulations implementing the PCA provisions and establishing prompt supervisory responses to be taken based on specified noncapital developments. The director of OFHEO has broad discretion over measures that can be taken beyond these required actions. Such discretionary powers allow the director of OFHEO to respond when specific enterprise practices occur that could pose a safety and soundness concern.

Determining new business assumptions for inclusion in OFHEO's stress test is inherently speculative, and OFHEO would have to develop plausible scenarios for managerial behavior to make such a determination. The assumptions about new business could easily dominate the cash and capital flows over the 10-year stress test period and therefore dominate the capital requirement. Thus, these assumptions could also determine whether the enterprises met or failed to meet the risk-based capital requirement. Adding new business assumptions would introduce more complexity to an already complex model and interfere with the public policy mandate that requires the model to be understandable and replicable. A stress test that concentrates on existing business rather than potential new business allows all parties to observe the risks embedded in current holdings and operations. In addition, OFHEO can use supervisory review in conjunction with the stress test to help limit the potential risks associated with new business.

OFHEO should not incorporate new business assumptions into its risk-based capital stress test. Appropriate examination and supervisory review by the regulator can help ensure that the enterprises maintain capital appropriate to the financial stresses they are experiencing with regard to new business.

We requested comments on a draft of this report from the heads, or their designees, of Freddie Mac, Fannie Mae, and OFHEO. Freddie Mac's written comments, which agreed with our conclusions and recommendation, appear in appendix III. Fannie Mae and OFHEO provided technical comments that were incorporated where appropriate. OFHEO officials also stated that OFHEO does not model or predict enterprise behavior in its current stress test, but does make assumptions to project enterprise behavior in a stylized way, consistent with the stress conditions. For example, they cited assumptions about enterprise new debt issues and operating expenses in the current stress test. Such assumptions, they stated, are necessary for a capital requirement that is appropriately sensitive to risk. The OFHEO officials stated that incorporating new business into the stress test would entail making assumptions that address additional complicated managerial decisions on the full range of enterprise activities. They added that, in contrast to the assumptions that project managerial behavior in the current stress test, the assumptions necessary to incorporate new business have the potential to unduly influence the capital requirement and make it less sensitive to current risks. We made revisions based on OFHEO's comments to distinguish between modeling or predicting enterprise behavior and developing reasonable assumptions about enterprise management. We agree with OFHEO officials that, compared to the current stress test, incorporating new business into the stress test would require assumptions that address additional complicated managerial decisions on the full range of enterprise activities.
Our report notes, in particular, that the assumptions required to incorporate new business could dominate the capital requirement. We will send copies of this report to the Director of OFHEO, the Chief Executive Officer of Fannie Mae, and the Chief Executive Officer of Freddie Mac. We will also make copies available to others upon request. Please contact William Shear or me at (202) 512-8678 if you or your staff have any questions concerning this report. Key contributors to this report were Mitchell Rachlis, Darleen Wall, Paul Thompson, and Emily Chalmers.

Except for matters under the Office of Federal Housing Enterprise Oversight's (OFHEO) exclusive authority, which relate primarily to enterprise safety and soundness, the Secretary of the Department of Housing and Urban Development (HUD) has general regulatory power over the enterprises to ensure that they carry out the purposes of their charters. These purposes include (1) providing ongoing assistance to the secondary market for residential mortgages, allowing for mortgages on housing for low- and moderate-income families involving lower returns than those earned on other activities, and (2) promoting access to mortgage credit throughout the nation, including in central cities, rural areas, and underserved areas. Moreover, the act requires the Secretary to establish annual goals for the enterprises' purchases of mortgages on low- and moderate-income housing; special affordable housing (housing for low-income families in low-income areas and for very low-income families); and housing in central cities, rural areas, and other underserved areas. Based on the regulatory scheme established in the act, the Secretary of HUD could exercise these authorities during a stressful period in a way that might affect an enterprise's new business. For example, housing goals would require an enterprise to conduct new business. Forecasting these goals and the potential for other mission-related requirements and their impact on new business would be speculative.

The charter for each enterprise states that the purpose of the enterprise is to provide stability in the secondary market for residential mortgages; to respond appropriately to the private capital market; to provide ongoing assistance to the secondary market for residential mortgages (including activities relating to mortgages on housing for low- and moderate-income families, involving a reasonable economic return that may be less than the return earned on other activities) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing; and to promote access to mortgage credit throughout the nation (including central cities, rural areas, and underserved areas) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing.
The numeric goals provisions of the 1992 act require the Secretary to consider the following factors in setting final housing goals for each of the three categories of housing: (1) national housing needs; (2) economic, housing, and demographic conditions; (3) the performance and effort of the enterprises in achieving the goals in previous years; (4) the size of the conventional mortgage market serving targeted borrowers relative to the size of the overall conventional mortgage market; (5) the ability of the enterprises to lead the industry in making mortgage credit available to targeted borrowers; and (6) the need to maintain the sound financial condition of the enterprises. Although the last factor requires consideration of an enterprise's financial condition, nothing in the 1992 act suggests that the Secretary should refrain from establishing goals or taking other mission-related actions in the event of a stressful financial condition. However, we believe that the Secretary would not exercise mission and housing goal authorities in a way that would continue or increase an enterprise's financial stress, because doing so would undermine the financial safety and soundness requirements of the 1992 act and compromise the enterprise's ability to achieve its mission.

As table 2 shows, most of the enterprises' on-balance sheet assets are mortgages. The table also shows that most of the mortgage and other on-balance sheet financial activities of the enterprises are funded by debt. Each enterprise's total mortgage portfolio (see table 3) consists of on-balance sheet mortgages and mortgage-backed securities (MBS) held by the enterprises, and off-balance sheet MBS owned by investors who receive their interest and principal from a pool of mortgages. At the end of 2001, Fannie Mae's total mortgage portfolio was $1.56 trillion, and Freddie Mac's was $1.14 trillion. A majority of the enterprises' holdings consisted of off-balance sheet MBS pools. Over time, both enterprises have shifted a greater share of their mortgage assets onto their balance sheets, increasing their interest rate risk. The enterprises' income and expenses reflect their basic operations. Fannie Mae's 2001 net income was $5.9 billion and Freddie Mac's was $4.1 billion, largely from net interest and fee income (see table 4). Mortgages and MBS owned by the enterprises generated net interest income of $8.1 billion for Fannie Mae and $5.5 billion for Freddie Mac, while investor-owned MBS generated fee income that totaled about $1.6 billion for each enterprise. Actual and estimated expenses related to credit risk were $77.7 million for Fannie Mae and $84 million for Freddie Mac, while administrative expenses were about 17 percent of net income for both enterprises.

Federal Home Loan Bank System: Establishment of a New Capital Structure. GAO-01-873. Washington, D.C.: July 20, 2001.
Comparison of Financial Institution Regulators' Enforcement and Prompt Corrective Action Authorities. GAO-01-322R. Washington, D.C.: January 31, 2001.
Capital Structure of the Federal Home Loan Bank System. GAO/GGD-99-177R. Washington, D.C.: August 31, 1999.
Farmer Mac: Revised Charter Enhances Secondary Market Activity, but Growth Depends on Various Factors. GAO/GGD-99-85. Washington, D.C.: May 21, 1999.
Federal Housing Finance Board: Actions Needed to Improve Regulatory Oversight. GAO/GGD-98-203. Washington, D.C.: September 18, 1998.
Federal Housing Enterprises: HUD's Mission Oversight Needs to Be Strengthened. GAO/GGD-98-173. Washington, D.C.: July 28, 1998.
Risk-Based Capital: Regulatory and Industry Approaches to Capital and Risk. GAO/GGD-98-153. Washington, D.C.: July 20, 1998.
Government-Sponsored Enterprises: Federal Oversight Needed for Nonmortgage Investments. GAO/GGD-98-48. Washington, D.C.: March 11, 1998.
Federal Housing Enterprises: OFHEO Faces Challenges in Implementing a Comprehensive Oversight Program. GAO/GGD-98-6. Washington, D.C.: October 22, 1997.
Government-Sponsored Enterprises: Advantages and Disadvantages of Creating a Single Housing GSE Regulator. GAO/GGD-97-139. Washington, D.C.: July 9, 1997.
Housing Enterprises: Investment, Authority, Policies, and Practices. GAO/GGD-91-137R. Washington, D.C.: June 27, 1997.
Comments on "The Enterprise Resource Bank Act of 1996." GAO/GGD-96-140R. Washington, D.C.: June 27, 1996.
Housing Enterprises: Potential Impacts of Severing Government Sponsorship. GAO/GGD-96-120. Washington, D.C.: May 13, 1996.
Letter from James L. Bothwell, Director, Financial Institutions and Markets Issues, GAO, to the Honorable James A. Leach, Chairman, Committee on Banking and Financial Services, U.S. House of Representatives, Re: GAO's views on the "Federal Home Loan Bank System Modernization Act of 1995." B-260498. Washington, D.C.: October 11, 1995.
FHLBank System: Reforms Needed to Promote Its Safety, Soundness, and Effectiveness. GAO/T-GGD-95-244. Washington, D.C.: September 27, 1995.
Housing Finance: Improving the Federal Home Loan Bank System's Affordable Housing Program. GAO/RCED-95-82. Washington, D.C.: June 9, 1995.
Government-Sponsored Enterprises: Development of the Federal Housing Enterprise Financial Regulator. GAO/GGD-95-123. Washington, D.C.: May 30, 1995.
Farm Credit System: Repayment of Federal Assistance and Competitive Position. GAO/GGD-94-39. Washington, D.C.: March 10, 1994.
Farm Credit System: Farm Credit Administration Effectively Addresses Identified Problems. GAO/GGD-94-14. Washington, D.C.: January 7, 1994.
Federal Home Loan Bank System: Reforms Needed to Promote Its Safety, Soundness, and Effectiveness. GAO/GGD-94-38. Washington, D.C.: December 8, 1993.
Improved Regulatory Structure and Minimum Capital Standards are Needed for Government-Sponsored Enterprises. GAO/T-GGD-91-41. Washington, D.C.: June 11, 1991.
Government-Sponsored Enterprises: A Framework for Limiting the Government's Exposure to Risks. GAO/GGD-91-90. Washington, D.C.: May 22, 1991.
Government-Sponsored Enterprises: The Government's Exposure to Risks. GAO/GGD-90-97. Washington, D.C.: August 15, 1990.

GAO reviewed whether the Office of Federal Housing Enterprise Oversight (OFHEO) should incorporate new business assumptions into the stress test used to establish risk-based capital requirements. The stress test is designed to estimate, for a 10-year period, how much capital the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) would be required to hold to withstand potential economic shocks, such as sharp movements in interest rates or adverse credit conditions. Incorporating new business assumptions into the stress test would mean specifying details about the types and quality of mortgages that would be acquired during the 10-year stress period, the types of funding that would be used to acquire such mortgages, and other operating and financial strategies that would be implemented by Fannie Mae's and Freddie Mac's managements.
GAO found that data for the enterprises show that new business conducted over a 10-year period accounts for a large share of their on- and off-balance sheet holdings of assets and liabilities at the end of each 10-year period. Because new business represents such a large share of enterprise holdings over time, it would have a major impact on the enterprises' financial condition, risks, and capital adequacy in the face of stressful events. However, determining the appropriate new business assumptions to include in the model would be difficult and inherently speculative, with OFHEO having to develop plausible scenarios for how enterprise management and the market would respond in a stressful environment. OFHEO can use supervisory review, which includes examination of the enterprises' ongoing business activities and enforcement actions, in conjunction with the capital requirement to help ensure the safety and soundness of the enterprises.